Seeking advice: "Heroku-style" push-to-deploy workflow on Raspberry Pi (No Docker, Remote Builds)

Hi everyone,

I’m running NixOS on a Raspberry Pi and I want to use it to host several small hobby projects (Node.js, Python, etc.). My goal is to achieve a “Heroku-like” workflow: push code to a repository → app automatically redeploys on the Pi.

I have a few preferences that are making this tricky to architect, and I’m looking for community wisdom on how others handle this.

My preferences:

1. No Docker: The Pi is resource-constrained, and running the Docker daemon + containers feels redundant when Nix already provides isolation and reproducibility.

2. Remote Builds: I build everything on my PC; the Pi is too slow to build on.

3. Dashboard: I’d like a simple UI to view service status and logs (looking at Cockpit, but open to ideas).

The Problem / My Confusion:

I’ve looked into tools like Coolify (too heavy, relies on Docker) and Piku (great, but no UI).

I came across `deploy-rs`, but I’m struggling to understand the workflow for rapid application iteration.

If I use `deploy-rs` to update my system flake, I have to:

1. Update the project code.

2. Update the `flake.lock` in my system repo to pull the new project commit.

3. Deploy the whole system profile.

This feels heavy for just updating one app. I’m trying to figure out if there is a way to “push” just the new application to the Pi and restart its systemd unit without doing a full OS switch every time, or if I should just accept the full system switch workflow.
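One possible middle ground is to give each app its own Nix profile and point a systemd unit at that profile's stable path. You can then build on the PC, push the closure with `nix copy --to ssh://pi ./result`, set the profile to the new store path with `nix-env --profile /nix/var/nix/profiles/myapp --set …`, and restart the unit, with no full system switch needed. A minimal sketch of the unit (this is my own hypothetical example; `myapp` and the profile path are placeholders):

```nix
# Hypothetical sketch: the unit execs through a per-app profile symlink,
# so updating the profile and restarting the unit redeploys the app
# without a `nixos-rebuild switch`.
systemd.services.myapp = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    # Stable path; the symlink target changes on each app deploy.
    ExecStart = "/nix/var/nix/profiles/myapp/bin/myapp";
    DynamicUser = true;
    Restart = "on-failure";
  };
};
```

The trade-off is that the running app is no longer described by the system flake, so a fresh `nixos-rebuild` won't know which app version to restore.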

I’d love to hear your thoughts and ideas, as well as your personal experiences with deploying projects!


I asked a similar question:

The short version is that you probably want to push a container image if you want to dynamically reload the service without doing a full system rebuild.

There are ways to run containers on NixOS that don’t involve Docker. The overhead can be quite small.
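For instance, NixOS's `oci-containers` module can run images as systemd services with podman as the backend, so there is no long-running Docker daemon. A minimal sketch (the image reference and port mapping are placeholders):

```nix
# Hypothetical sketch: run an OCI image under systemd via podman
# instead of the Docker daemon.
virtualisation.oci-containers = {
  backend = "podman";
  containers.myapp = {
    image = "registry.example.com/myapp:latest";
    ports = [ "8080:8080" ];
  };
};
```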


Thanks for the reply, I took a look at the thread.

Did you end up writing your own solution for auto-deploying and such? Are you using some kind of dashboard as well?

To be honest, I haven’t progressed. I’m running my services using systemd, which is nice, except that deployment requires a full `nixos-rebuild switch --flake`. It’s still on my list of things to do. I looked at a simple service, mdanialr/webhook, but because it’s such a simple service, and because it exposes a dynamic service to the internet, I want to write it myself and ensure it is memory-safe and gets proper rate-limiting.

In my experience, a full system rebuild is faster than any Heroku/CloudFoundry-style container image build and deploy. You can optimise your closure to get further speed benefits.

In case it’s relevant, I’m developing a system for users who don’t necessarily understand Nix to deploy updates to their Pis (a 3D-printer use case). At present they need to run a single command that fetches flake.nix/flake.lock remotely, then does a nixos-rebuild switch on the result.

Is it compilation that’s slowing things down? If so, you could set up a cloud-based ARM builder / substituter.
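As a sketch of what that could look like (host name, user, and key path are placeholders), the machine that runs the builds can register a remote aarch64 builder so the Pi never compiles anything itself:

```nix
# Hypothetical sketch: offload aarch64-linux builds to a remote builder.
nix.distributedBuilds = true;
nix.buildMachines = [{
  hostName = "builder.example.com";
  system = "aarch64-linux";
  sshUser = "nix";
  sshKey = "/root/.ssh/id_builder";
  maxJobs = 4;
}];
# Let the builder fetch substitutes itself rather than routing them
# through the slow local machine.
nix.extraOptions = "builders-use-substitutes = true";
```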

I use this workflow for a couple of systems (with regular `nixos-rebuild --target-host`, but that seems essentially the same).

Updating the flake.lock is a matter of `nix flake update myproject && nixos-rebuild --target-host …`; I haven’t found that to be a noticeable hurdle over any other way to push. Similarly, “deploy the whole system profile” is smart enough not to rebuild or copy the whole thing, just the things that changed, so (assuming you haven’t garbage-collected your build machine) it actually seems faster than building and pushing some kind of more hermetic image.