Deploy web service as part of system configuration and stand-alone at the same time?

I am deploying a dynamic backend web service to my NixOS server.

Is there any way I can get a quicker feedback loop for deploying this web service while retaining it as part of the system configuration? I really like that it gets included when I redeploy the server. But I don’t like that nix flake update updates both the service input and nixpkgs (since I run unstable): a deployment tomorrow will take longer for no good reason, and even a deployment today takes longer than necessary.

I’m running a systemd service behind an nginx reverse proxy.

I could perhaps look more into sandboxing the app.

Maybe there’s a way to “hotswap” the app so that a redeployment of the server will still upgrade the service, but hotswapping will do so around the system configuration? Right now my app is in a package (derivation), which makes this a little difficult, I think. Could I move the app’s assets to a mutable place on disk outside the store? E.g. on a ramdisk. I’d need to restart the systemd service as part of deploying.

Perhaps I’m simply unaware of pre-made solutions out there.

# service.nix
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.birthday-rsvp;

  mkProxySite = domain: port: {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      proxyPass = "http://127.0.0.1:${toString port}";
      proxyWebsockets = true;
      extraConfig = ''
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
      '';
    };
    extraConfig = ''
      access_log /var/log/nginx/${domain}.access.log;
      error_log /var/log/nginx/${domain}.error.log;
    '';
  };
in
{
  options.services.birthday-rsvp = {
    enable = mkEnableOption "Birthday RSVP Flask application";

    package = mkOption {
      type = types.package;
      default = pkgs.callPackage ./app.nix { };
      defaultText = literalExpression "pkgs.callPackage ./app.nix { }";
      description = "The birthday-rsvp package to use.";
    };

    domain = mkOption {
      type = types.str;
      example = "birthday.example.com";
      description = "Domain name for the nginx virtual host.";
    };

    port = mkOption {
      type = types.port;
      default = 5000;
      description = "Port on which the Flask application listens.";
    };

    dataDir = mkOption {
      type = types.path;
      default = "/var/lib/birthday-rsvp";
      description = "Directory to store the SQLite database and application data.";
    };

    user = mkOption {
      type = types.str;
      default = "birthday-rsvp";
      description = "User under which the birthday-rsvp service runs.";
    };

    group = mkOption {
      type = types.str;
      default = "birthday-rsvp";
      description = "Group under which the birthday-rsvp service runs.";
    };

    secretKey = mkOption {
      type = types.str;
      default = "change-me-in-production";
      description = ''
        Flask secret key for session management. Note that this value ends
        up in the world-readable Nix store; load real secrets from outside
        the store (e.g. via a systemd EnvironmentFile) in production.
      '';
    };

    baseUrl = mkOption {
      type = types.str;
      default = "https://${cfg.domain}";
      defaultText = literalExpression ''"https://''${cfg.domain}"'';
      description = "Base URL for generating invite links in the overview.";
    };
  };

  config = mkIf cfg.enable {
    # Create user and group
    users.users.${cfg.user} = {
      isSystemUser = true;
      group = cfg.group;
      home = cfg.dataDir;
      createHome = true;
      description = "Birthday RSVP service user";
    };

    users.groups.${cfg.group} = {};

    # Systemd service
    systemd.services.birthday-rsvp = {
      description = "Birthday RSVP Flask Application";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];

      environment = {
        EVENTS_DB_PATH = "${cfg.dataDir}/events.db";
        FLASK_ENV = "production";
        FLASK_SECRET_KEY = cfg.secretKey;
        RSVP_BASE_URL = cfg.baseUrl;
      };

      serviceConfig = {
        Type = "simple";
        User = cfg.user;
        Group = cfg.group;
        WorkingDirectory = cfg.dataDir;
        ExecStart = "${cfg.package}/bin/birthday-rsvp-app";
        Restart = "always";
        RestartSec = 10;

        # Security settings
        NoNewPrivileges = true;
        ProtectSystem = "strict";
        ProtectHome = true;
        ReadWritePaths = [ cfg.dataDir ];
        PrivateTmp = true;
        ProtectKernelTunables = true;
        ProtectKernelModules = true;
        ProtectControlGroups = true;
      };

      preStart = ''
        # The data directory is created with the right owner by the user
        # definition above (createHome = true), and preStart runs as
        # ${cfg.user}, so anything it creates is already owned correctly.
        # Initialize the database on first start only.
        if [ ! -f ${cfg.dataDir}/events.db ]; then
          cd ${cfg.dataDir}
          ${cfg.package}/bin/birthday-rsvp-init
        fi
      '';
    };

    # Nginx configuration
    services.nginx = {
      enable = true;
      virtualHosts.${cfg.domain} = mkProxySite cfg.domain cfg.port;
    };

    # Open firewall for HTTP and HTTPS
    networking.firewall.allowedTCPPorts = [ 80 443 ];
  };
}

Have you considered simply setting the input’s rev to a specific commit so this doesn’t happen?

Never mind, I got it: you don’t want to update the whole system before deployment, just the input. Use nix flake update <inputname> instead.
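
For example, assuming the input is named birthday-rsvp in your flake and the host configuration is called myhost:

# Update only the service input; the nixpkgs pin stays untouched
nix flake update birthday-rsvp

# Rebuild; only the service and anything depending on it changes
sudo nixos-rebuild switch --flake .#myhost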

Use nix flake update <inputname>

Excellent, thanks for this advice.

I am still interested in a hybrid solution, since I’d like to eventually be able to continuously deploy the service automatically without triggering a full system upgrade. If this were a static site, I could (and do) just copy the static assets to /var/www; the nginx config changes so rarely that I don’t include it in my static site deployment process.

If this weren’t managed by NixOS, I could just copy the compiled app server and restart the systemd service in two steps. But is there any precedent for deploying something that can be overwritten, but will get re-deployed when the system redeploys?

Perhaps the best example I have of something similar is my ~/.ssh/config:

# Perform the copy trick (~/.ssh/config cannot be a symlink, and requires certain permissions)
home.file.".ssh/config_source" = {
  source = ../dotfiles/ssh-config;
  onChange = ''cat ~/.ssh/config_source > ~/.ssh/config && chmod 600 ~/.ssh/config'';
};

The reason I do this is that ~/.ssh/config can’t be a symlink. But it also has the side-effect that I can modify ~/.ssh/config without conflicting with the derivation, and it’ll get overwritten when I redeploy the system. I do this for VSCodium’s config specifically for the ability to experiment with settings using the GUI, and then back-propagate changes to version control.

I realize that making arbitrary changes to the service configuration can necessitate a full system upgrade, since it could affect any other part of the system. But in the event that I’m just changing application code, or triggering a database migration, or something else that just affects the app and not the systemd service it runs inside, I thought I could achieve something similar to copying static assets or copying the source ssh config.

I wouldn’t do something like that. If you want this, write a service to deploy your service.

This is an infrastructure problem, I don’t know enough about your general environment to really recommend anything here. Maybe a service that just calls nix run <repo> instead of the binary you’re currently running is all you need. Maybe you want something more robust that doesn’t take down the service while you’re rebuilding. Maybe you want all the complexity of a custom hydra instance. Maybe you do want to create a simple OCI container that you publish to a container registry and use podman to actually update and execute the service.
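
A minimal sketch of that first idea (the GitHub URL is a placeholder, and this assumes the service user is allowed to talk to the nix daemon and has network access at start-up):

systemd.services.birthday-rsvp = {
  wantedBy = [ "multi-user.target" ];
  after = [ "network-online.target" ];
  serviceConfig = {
    # nix run resolves the branch head at start time, so
    # "systemctl restart birthday-rsvp" becomes a redeploy
    ExecStart = "${pkgs.nix}/bin/nix run github:youruser/birthday-rsvp";
    Restart = "always";
  };
};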

But yeah, it’s pretty trivial to accomplish. I wouldn’t focus on copying anything; still deploy things with nix. You don’t have to have your full system definition in one NixOS configuration, but if you want sub-parts of it that aren’t defined declaratively, you’ll have to come up with ways to declare that fetching, updating and running them should happen at runtime.

Personally, I think this is a case where I would generally fall back to OCI containers + a registry. You could (and arguably should) already split your module off into the nixosModules of the flake that contains your application rather than defining everything in your host’s configuration, but if this were an OCI container instead, it’d be more or less completely isolated from details like nixpkgs versioning and potential upstream changes to the options, ensuring that you can always independently update the host and the application. It’d neatly separate things like database configuration out of the application definition and into your podman service configuration instead.
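
On the host side, a virtualisation.oci-containers sketch might look like this (registry URL and tag are placeholders; port and data path are taken from your module above):

virtualisation.oci-containers = {
  backend = "podman";
  containers.birthday-rsvp = {
    image = "registry.example.com/birthday-rsvp:latest";
    ports = [ "127.0.0.1:5000:5000" ];
    volumes = [ "/var/lib/birthday-rsvp:/data" ];
    environment.EVENTS_DB_PATH = "/data/events.db";
  };
};

nginx then keeps proxying to 127.0.0.1:5000 exactly as before.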

Note that nix can build OCI containers just fine, and it probably produces better results than the various docker builders, certainly in the hands of most users. So your CD flow could simply be a workflow in your application repository which builds the container, and then a separate step that takes the resulting tarball and publishes it to your container registry.
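
The build step in the application flake could be a dockerTools call along these lines (birthday-rsvp here stands for the package from app.nix; the tag scheme is just an example):

packages.container = pkgs.dockerTools.buildLayeredImage {
  name = "birthday-rsvp";
  tag = "latest";
  config = {
    Cmd = [ "${birthday-rsvp}/bin/birthday-rsvp-app" ];
    ExposedPorts."5000/tcp" = { };
  };
};

The publish step can then push the resulting tarball, e.g. with skopeo copy docker-archive:result docker://registry.example.com/birthday-rsvp:latest.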

The host you deploy to can then just have anything from a systemd timer that periodically updates and restarts your podman containers, to some kind of webhook that triggers the process.
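
The timer flavour of that can stay very small (registry URL is a placeholder; with virtualisation.oci-containers and the podman backend, the generated unit for a container named birthday-rsvp is podman-birthday-rsvp.service):

systemd.services.birthday-rsvp-update = {
  serviceConfig.Type = "oneshot";
  script = ''
    ${pkgs.podman}/bin/podman pull registry.example.com/birthday-rsvp:latest
    systemctl restart podman-birthday-rsvp.service
  '';
};

systemd.timers.birthday-rsvp-update = {
  wantedBy = [ "timers.target" ];
  timerConfig.OnCalendar = "*:0/15"; # every 15 minutes
};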

Thanks a lot for your suggestions. I completely agree with your assessment. Deploying OCI containers and deploying images to the registry provides the dynamic part in a very standardized way. I will try and use dockerTools for building the image, and nixos-containers for running it.

nixos-containers are not OCI images; they’re a completely separate, NixOS-specific thing. It basically means creating a sysroot with a NixOS configuration and launching it with systemctl start; yes, systemd kind of has a bubblewrap-adjacent container-launching subcommand.

They’re neat, and could be used to model something like this, but not standardized at all, and way more hassle for your use case than virtualisation.oci-containers, because in this case you actually want the standardization around artifact publishing.


Wow, thanks for the heads up.

I really had my assumptions wrong about nixos-containers.

So nixos-containers provides sandboxing in a very elegant way, but not the dynamic hotswapping of images that I’m looking for.

I’ll have a look at virtualisation.oci-containers instead.

Does nixpkgs have a barebones OCI container registry service like DockerHub’s registry?

Hey!
Docker registry: yes, you can use the services.dockerRegistry module in nixpkgs.
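
A minimal setup could look something like this, bound to localhost only (note the default port is 5000, which would clash with the Flask app’s default above, so pick another):

services.dockerRegistry = {
  enable = true;
  listenAddress = "127.0.0.1";
  port = 5050;
};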
KR
