I’m looking to upgrade my old GitLab 13.x instance, which I haven’t updated since that version because I hit a brick wall with the migration at the time. That version is now becoming untenable, so I might as well bite the bullet and migrate everything. It is used daily by a small team in production, so downtime should be limited, though if need be it might be feasible to “patch in” about one day’s worth of work afterwards, or hold off on pushes for about a day.
It will be hosted on a bare-metal server (NixOS) on-premises, and will be exposed through bunkerweb running on another host. There are about 50 repos and about 5k issues; most repos have a CI pipeline, and about a quarter have an associated generic package registry, so `git clone --mirror` is not an option for migration.
Hoping for some advice on the following:
- Is there experience with the maturity of the `services.gitlab` NixOS module?
- Is the only way to migrate indeed to first set up the same version and restore into it, and then upgrade on the new install?
- What would you recommend for the installation mode? (An Omnibus VM (the one recommended by GitLab), a container, or the NixOS module?)
I ran GitLab on NixOS for about 15 business users on a VM for 4 years. It worked well and upgrades were okay. You occasionally need to update the postgres version, which requires some manual steps, but it was mostly painless and never more than 30 min of downtime.
You mean a VM running NixOS and using the module?
Would it be necessary (and feasible), if I were to use the module, to first use an old nixpkgs instance (or overlay) to do the backup/restore on version 13, and then perform the upgrade(s) on the new system after that? (Given that the GitLab backup/restore migration method must be done on the same version, according to the docs…)
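For reference, the same-version backup/restore round-trip looks roughly like this (a sketch; the default backup path shown is the Omnibus one, and exact rake task names vary slightly between GitLab releases, so check the docs for your version):

```shell
# On the old 13.x host: create a backup archive
sudo gitlab-backup create          # writes to /var/opt/gitlab/backups by default (Omnibus)
# Also copy the secrets (gitlab-secrets.json / secrets.yml) and config separately —
# they are deliberately NOT included in the backup archive!

# On the new host, running the SAME GitLab version:
sudo gitlab-backup restore BACKUP=<timestamp_and_version>
sudo gitlab-rake gitlab:check SANITIZE=true   # sanity-check the restored instance
```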
Ideally you use a version of Nixpkgs that has both the version of postgres you are upgrading from and the one you are upgrading to. I would also set the stateVersion to the NixOS release matching your current postgres version to start. There are general instructions for migrating postgres in the NixOS manual.
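As a sketch of what that pinning could look like (version numbers here are illustrative, not a recommendation — pick the ones matching your actual state):

```nix
{ pkgs, ... }:
{
  # Illustrative: keep stateVersion at the release your data was created under,
  # and pin the postgres major version the old GitLab 13.x was running on.
  system.stateVersion = "20.09";
  services.postgresql.package = pkgs.postgresql_12;
  services.gitlab.enable = true;
}
```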
Thanks, I hope it’s mostly the postgres version that’s the reason for the warning, and not (significantly) the GitLab database schema as well, since that would spell trouble for 13.x → 16.1.1, I fear…
But given that I prefer services over containers, and even more over (Ubuntu) VMs, it may be worth a shot. (I had become a bit cautious, though, since the (similarly complex) `services.jitsi-meet` on a NixOS VPS was a huge rabbit hole; after sinking hours into hacks trying in vain to get the services module to behave suitably for production use, I eventually had to settle for a self-made docker-compose deployment.)
It is a GitLab schema thing as well. They only support restoring backups on the same version that made them. Many versions, even minor ones, include schema migrations.
In addition, there are some “milestone” releases that a long-distance upgrade has to go through: Upgrading GitLab | GitLab
I’m facing a similar choice: I currently have an Omnibus GitLab running in an Ubuntu container (on SmartOS), and have been meaning to get around to migrating it to NixOS. At least it’s up to date, so I just have to pick a moment when the two versions are in sync.
It almost looks as though it might be a better (easier) choice to use an Omnibus install just for the ancient → recent upgrade, independent of the final deployment model…(?)
Yeah, I’d have recommended that if the OP was already on Omnibus. I assume it may still be easier to do it that way via docker, except they already got blocked on that path, so I’ll just leave any recommendations out.
Yeah, though I might reconsider; it appears the “brick wall” I (OP) was talking about was really just reluctance to set up a parallel environment, because a live upgrade would have been too risky with these breaking changes… Since a parallel setup has to happen anyway, I guess that card is back on the table too…
Small update: the migration is complete (now on 16.1.1). I ended up using a quick & dirty podman start script that takes the image tag as an argument, with all other config hard-coded. Then: back up on the old host, restore into a new container instance of the same version, and then do all the incremental upgrades by redeploying the same container from increasing image version tags.
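A minimal sketch of such a script (the image name, ports, and volume paths here are my own illustrative choices, not what the poster used):

```shell
#!/usr/bin/env sh
# Usage: ./gitlab-up.sh <image-tag>   e.g. ./gitlab-up.sh 13.12.15-ce.0
TAG="${1:?usage: $0 <image-tag>}"

# Replace any running instance with the requested version; data lives
# in the bind-mounted volumes, so the container itself is disposable.
podman rm -f gitlab 2>/dev/null || true
podman run -d --name gitlab \
  -p 8080:80 -p 2222:22 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  "docker.io/gitlab/gitlab-ce:${TAG}"
```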
I had to start over once, because I wasn’t aware some background migrations hadn’t finished before I moved to the next version, which broke the next migration (very important!).
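For anyone repeating this: before each version jump, check that the background migrations have drained. One documented check for the legacy (Sidekiq-based) migrations is the following; on newer versions there are additionally batched background migrations, visible under Admin Area → Monitoring → Background Migrations:

```shell
# Inside the running container: count unfinished background migrations.
# It should print 0 before you deploy the next version tag.
podman exec gitlab gitlab-rails runner \
  'puts Gitlab::BackgroundMigration.remaining'
```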
About halfway through, since background migrations run online, I could already repoint the reverse proxy and expose the new instance to production (with known occasional brief outages for each next-version deployment).
So in principle trivial, but quite tedious (though, yes, that is of course due to not following recent versions in the first place).
I guess now that the database is up to date, I can do what I want regarding the final deployment: either move to the `services.gitlab` module, or stay with the podman instance and include a small NixOS/systemd wrapper for it.
You can wrap containers in NixOS through `virtualisation.oci-containers`, so you don’t have to write systemd units manually.
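A minimal sketch, assuming the podman backend and the same (illustrative) volume layout as a hand-rolled script might use:

```nix
{
  virtualisation.oci-containers = {
    backend = "podman";
    containers.gitlab = {
      image = "docker.io/gitlab/gitlab-ce:16.1.1-ce.0";
      ports = [ "8080:80" "2222:22" ];
      volumes = [
        "/srv/gitlab/config:/etc/gitlab"
        "/srv/gitlab/logs:/var/log/gitlab"
        "/srv/gitlab/data:/var/opt/gitlab"
      ];
    };
  };
}
```

NixOS then generates and manages the systemd unit for the container itself.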
Thanks, I know — I can probably use that here, indeed. (I have had instances where I still had to add boilerplate, though, for initialisation/orchestration and the like.)
You could still inject an `ExecStartPre` step, or a dependency on a custom unit, for any pre-work, and then still have NixOS otherwise handle things, just FYI.
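For the record, with the podman backend the generated unit is named after the container, so pre-work can be attached like this (a sketch, assuming the oci-containers container is called `gitlab`; the mkdir is a hypothetical example of pre-work):

```nix
{ pkgs, ... }:
{
  systemd.services.podman-gitlab = {
    # Hypothetical pre-work hook on the unit that oci-containers generates.
    serviceConfig.ExecStartPre = [
      "${pkgs.coreutils}/bin/mkdir -p /srv/gitlab/config"
    ];
    after = [ "network-online.target" ];
    wants = [ "network-online.target" ];
  };
}
```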