GitHub - numtide/system-manager: Manage system config using nix on any distro

45 Likes

Thanks to the authors for making this. The first steps toward something that has been a much-requested feature.

:heart:

2 Likes

Wow, this looks amazing! I am specifically interested in the --target-host option:

system-manager --help
System Manager -- Manage system configuration with Nix on any distro

Usage: system-manager [OPTIONS] <COMMAND>

Commands:
  activate
          Activate a given system-manager profile
  build
          Build a new system-manager profile without registering it as a nix profile
  deactivate
          Deactivate the active system-manager profile, removing all managed configuration
  generate
          Generate a new system-manager profile and register it as the active system-manager profile
  switch
          Generate a new system-manager profile and activate it
  help
          Print this message or the help of the given subcommand(s)

Options:
      --target-host <TARGET_HOST>
          The host to deploy the system-manager profile to

      --use-remote-sudo
          Invoke the remote command with sudo. Only useful in combination with --target-host

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version

How does this work exactly? Does it rely on a Nix daemon running on the target host? Or does it just ssh in and replace the /etc/ directory there with the one generated by system-manager? Does it take care of garbage collection as well? I’m asking because I wonder whether it’s possible to use this tool to replace my imperative Arch Linux VPS installation with a declarative one based on Nix, given that the VPS has very little disk space and thus can’t use Nix natively.

P.S. It would be nice to put the --help text in the repo’s README :).

1 Like

The Nix-Darwin of foreign Linux! And just maybe, an early ancestor of the future GRUMS. Love to see it. Thanks for posting it here!!

7 Likes

We started writing this because some of our customers want to benefit from the NixOS experience, but switching entirely to NixOS is too much to ask. This is a big project, so we have to pace ourselves.

We currently have a working POC that manages systemd units and /etc files using the Nix module system, tested on Ubuntu. The main logic is all written in Rust and introduces a local state file, making it possible to remove everything cleanly. Note that the project is still very early: there is no documentation yet, and things might break.

In the near future, we want to:

  • add support for tmpfiles.d
  • handle service users
  • support the most common NixOS services
  • support more Linux distributions
  • write documentation and user guides.

If you want to help out, feel free to try it and report issues. If your company could benefit from such work, we are also interested in getting in contact to specifically support your use cases.

13 Likes

We’ve been doing really dirty things to mimic something like this: building NixOS on the systems and having Saltstack link things from the result into the normal system.

I’ve been waiting for this since forever, great job! :slight_smile:

3 Likes

Nice! This looks really useful :+1:

Hard to predict the future, but is there a plan to converge with NixOS modules, so that the same modules can be used by both system-manager and NixOS?

4 Likes

For now we assume that the target machine already has Nix installed. We could include a bootstrap method in the future. Just like with NixOS, the system closure can be built locally, then copied using nix copy, and bound to a profile in /nix/var/nix/profiles/, except that it’s named system-manager instead of system. The biggest difference is that unlike NixOS, where you can have a single /run/current-system symlink, we have to carefully weave the symlinks into the /etc folder so as not to disrupt the current distribution. That’s where the diffing engine and state file come into play. The exact implementation details of that diffing engine will probably change as we extend the scope of the project.
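For illustration, the remote flow described above could be driven by hand roughly like this. The flake output name and profile handling here are assumptions based on the description in this post, not the tool's documented interface:

```sh
# Build the system-manager closure locally (output attribute name is hypothetical)
nix build .#systemConfigs.default

# Copy the resulting closure to the target host over ssh
nix copy --to ssh://user@my-vps ./result

# On the target, bind the closure to the system-manager profile
ssh user@my-vps \
  "sudo nix-env --profile /nix/var/nix/profiles/system-manager --set $(readlink ./result)"
```

In practice, something like system-manager switch --target-host user@my-vps --use-remote-sudo presumably performs steps of this shape for you, plus the diffing/activation logic.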

One of the goals that we have is that modules written in system-manager should be compatible with NixOS. That way, system-manager can become an on-ramp to NixOS whenever they are ready to do the full switch.

The reverse is more difficult because the namespace of NixOS is very wide. Any module might set values to any other modules. What I envision is that the list of NixOS modules that work out of the box will grow, depending on the customers or contributions that we get. Probably the most interesting ones are networked services such as nginx and postgres.

5 Likes

A module like this one (salt-master) looks like it’d be compatible with system-manager.

If the idea isn’t to be compatible with NixOS modules, would you accept modules that are just straight up ports from NixOS modules?

My thought when implementing my shabby version of this was to build the entirety of NixOS, then have some custom module that’d link from /run/current-system/etc/$name to /etc/$name, essentially a way to write nix to “include” files that should be linked, and keep state of the difference.

This looks more robust, but re-implementing NixOS modules is a lot of work.

1 Like

The idea is that system-manager itself only ships with the core modules like systemd.services, environment.etc, … Those are re-implemented in Rust, with a diffing engine; a bit like Terraform but for local system changes.

Then as a user, you would use imports = [ "${modulesPath}/path/to/my/service.nix" ]; to load whatever service from the NixOS distribution, and if you are lucky, it’s compatible :slight_smile: Of course, a lot of modules will break initially, but we can expand the surface area over time. Common services such as Nginx would also be part of our test suite to avoid regressions.
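As a rough sketch of what such a configuration could look like, assuming the core options mirror their NixOS counterparts (the exact system-manager entry point and supported options may differ):

```nix
{ config, lib, pkgs, modulesPath, ... }:
{
  # Core options re-implemented by system-manager's Rust engine
  environment.etc."my-app/config.toml".text = ''
    listen = "127.0.0.1:8080"
  '';

  systemd.services.my-app = {
    enable = true;
    wantedBy = [ "system-manager.target" ];
    serviceConfig.ExecStart = "${pkgs.hello}/bin/hello";
  };

  # Pull in an unmodified NixOS module and, with luck, it is compatible
  imports = [ "${modulesPath}/services/web-servers/nginx.nix" ];
}
```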

Anyways, that’s the idea for now. This is still pretty early so some of those ideas might change as we learn more about the domain.

4 Likes

While I generally like the idea, rewriting everything from scratch in Rust seems like a never-ending process to keep and maintain compatibility with NixOS modules.

Instead, what I’d recommend is generating and applying generic resources through well-known configuration enforcement tools like Puppet and Chef. Puppet allows CLI access to its resources:

# Running on void-linux (runit), same cmd also works in systemd, or any other

$ sudo puppet resource service adb ensure=running
Notice: /Service[adb]/ensure: ensure changed 'stopped' to 'running'
service { 'adb':
  ensure   => 'running',
  provider => 'runit',
}

And the good thing is that it can be used across systemd, runit, s6, sysv, and various others defined in https://github.com/puppetlabs/puppet/tree/main/lib/puppet/provider/service. One level up you can see there are more providers available for different purposes, such as packages, name services, users/groups, files, etc., which could be reused instead of reinventing the wheel (which will become unmaintainable in the future).

Let me know what you think about this; it’d be quite beneficial imho.

Does Puppet automatically stop a service that is no longer defined, i.e. does it perform correct state transitions? In my experience this is the most complex part of service management; a simple systemctl start/stop is easy to maintain.

No.

If you change a Puppet config to no longer include some “thing”, Puppet will just ignore it, so it will remain in whatever state it was in.

Therefore it is a common idiom to do two deployments: one that disables/removes the thing explicitly, and then one that no longer mentions the thing in the config at all.

Similar is true for chef, salt, ansible, and whatnot.
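The two-step idiom described above, sketched in Puppet DSL (the resource names are illustrative, reusing the adb service from the earlier example):

```puppet
# Deployment 1: explicitly stop and remove the thing
service { 'adb':
  ensure => 'stopped',
  enable => false,
}
package { 'adb':
  ensure => 'absent',
}

# Deployment 2: delete these resources from the manifest entirely;
# from then on Puppet no longer touches them in any way.
```

This is exactly the bookkeeping that a state file plus diffing engine, as described earlier in the thread, is meant to make unnecessary.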

An alternate approach is the “cattle” approach, which just creates a new machine from scratch for each deployment and destroys the old one(s).

1 Like

I am using the system-manager project with the Nix package manager on my Debian system.
When creating my default.nix, I added a few lines to enable a service (in my case, the openssh service).
I installed the service package and then added lines like:

systemd.services = {
  sshd = {
    enable = true;
    wantedBy = [ "multi-user.target" "system-manager.target" ];
    wants = [ "network.target" ];
    after = [ "network.target" "systemd-user-sessions.service" ];
    description = "SSH Daemon";
    serviceConfig = {
      Type = "simple";
      ExecStart = "${lib.getBin pkgs.openssh}/bin/sshd -D -e";
      ExecReload = "${lib.getBin pkgs.openssh}/bin/true";
      Restart = "always";
      KillMode = "process";
    };
  };
};

I got an error

Started sshd.service - SSH Daemon.
Feb 23 10:39:09 debian sshd[635]: Privilege separation user sshd does not exist
Feb 23 10:39:09 debian systemd[1]: sshd.service: Main process exited, code=exit>
Feb 23 10:39:09 debian systemd[1]: sshd.service: Failed with result ‘exit-code’.
Feb 23 10:39:09 debian systemd[1]: sshd.service: Scheduled restart job, restart>
Feb 23 10:39:09 debian systemd[1]: Started sshd.service - SSH Daemon.
Feb 23 10:39:09 debian sshd[658]: Privilege separation user sshd does not exist
Feb 23 10:39:09 debian systemd[1]: sshd.service: Main process exited, code=exit>
Feb 23 10:39:09 debian systemd[1]: sshd.service: Failed with result ‘exit-code’.

Why is this happening? Please clarify and tell me the solution.

Currently, system-manager does not manage user accounts; it is up to the host distro to provide all user accounts.

SSH requires a user account called sshd that it uses for privilege separation (forked sshd processes will run as this user instead of root).

So you will need to create this user using your distro’s tools in order for ssh to work.
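On Debian, for example, the missing privilege-separation user could be created like this (the flags follow the common convention for system accounts; check your distro’s defaults before running):

```sh
# Create a locked-down system user for sshd privilege separation:
# no home directory is created, and the login shell is disabled.
sudo useradd --system --no-create-home --home-dir /run/sshd \
     --shell /usr/sbin/nologin sshd
```

After creating the user, restart the service (e.g. sudo systemctl restart sshd) and the “Privilege separation user sshd does not exist” error should go away.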

1 Like

Wonderful

This looks like something that could come in handy. I wonder about feature parity with home-manager: how do the two compare?

Can system-manager be described as a superset?

At least from the perspective of using it on another Linux distro.