Hostname for containers

Try changing:
environment.etc."/containers/networks/lavalink.json" = json.generate "lavalink.json" network;
to
environment.etc."/containers/networks/lavalink.json".source = json.generate "lavalink.json" network;

I think you are missing the .source part.

It is strange you don’t have the file, but the network is showing up. You may have to delete that network before you do your nix build.

Also, I am pretty sure you did; but I wanted to make sure you removed:

oci-containers.backend = "docker";

or changed it to:

oci-containers.backend = "podman";

After some other refactors, this is what I have now:

{ pkgs, ... }:
let
  projectPath = "/home/redhawk/code/Bongo-Bot";
  json = pkgs.formats.json { };
  network = {
    name = "lavalink";
    id = "ec123baea9350a6a4447d389856a83d0ff9f130cee268b4476ca03c7578f1ccb";
    driver = "bridge";
    network_interface = "lavalink";
    subnets = [{
      subnet = "10.89.0.0/24";
      gateway = "10.89.0.1";
    }];
    ipv6_enabled = false;
    internal = false;
    dns_enabled = true;
    ipam_options = { driver = "host-local"; };
  };
in {
  environment.etc."/containers/networks/lavalink.json".source =
    json.generate "lavalink.json" network;

  virtualisation.oci-containers.containers = {
    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      dependsOn = [ "lavalink" ];
      extraOptions = [''
        --network=lavalink
      ''];
      environmentFiles = [ "${projectPath}/.env" ];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      extraOptions = [''
        --network=lavalink
      ''];
      ports = [ "2333:2333" ];
      volumes =
        [ "${projectPath}/application.yml:/opt/Lavalink/application.yml" ];
    };

  };
}

However, the network is still not found and the container does not launch:

Sep 12 14:31:17 Paisley-Park podman-lavalink-start[223778]: Error: unable to find network with name or ID lavalink
Sep 12 14:31:17 Paisley-Park podman-lavalink-start[223778]: : network not found
Sep 12 14:31:17 Paisley-Park podman[223778]: 2023-09-12 14:31:17.369841351 -0400 EDT m=+0.025531968 image pull 85b6cc97b2c994105701b7f8fcba9b9798856d934180c42e9791b598144d97cc ghcr.io/lavalink-devs/lavalink:3-alpine
Sep 12 14:31:17 Paisley-Park systemd[1]: podman-lavalink.service: Main process exited, code=exited, status=125/n/a
Sep 12 14:31:17 Paisley-Park systemd[1]: podman-lavalink.service: Failed with result 'exit-code'.
Sep 12 14:31:17 Paisley-Park systemd[1]: Failed to start podman-lavalink.service.

It looks so close to being right.

And I am using podman. I’ve also reached my max replies for today.

It looks like it should be working. The only difference between your config and mine is that your extraOptions value technically has two extra carriage returns around your --network option. I am not sure whether it affects anything, and it seems that if it did then it would be visible in the log, but:

You have:

      extraOptions = [''
        --network=lavalink
      ''];

The double single quotes will literally keep the carriage returns before and after your option. You can make it a single line in the array with:

    extraOptions = [ 
      "--network=lavalink"
    ];

Just use double quotes on that line. No idea if this is causing the problem, but I can’t see anything wrong here. The configuration I posted does work, and I have many more like it running on multiple other networks. I will let you know if I think of something more to try…
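
Put together, the lavalink container definition from above would then look roughly like this (a sketch of the same config with only the quoting changed):

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      extraOptions = [ "--network=lavalink" ];
      ports = [ "2333:2333" ];
      volumes =
        [ "${projectPath}/application.yml:/opt/Lavalink/application.yml" ];
    };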

The podman network create defaults include DNS, so if you don’t hand-craft a JSON for it, that setting isn’t necessary. In the docker world this interface doesn’t exist at all, but now that we know this is about podman it’s a bit easier :wink:

That’s… odd. Are you sure? This smells like an upstream bug; it might be worth reporting. Are you sure this is not caused by adding that dns_enabled? It may actually pass all DNS requests to your host DNS server, but it’s not documented anywhere, so who knows…

Pods effectively work the same as the rest of the podman container infrastructure; the only difference is that you can more conveniently configure a little group of containers this way.

There is also no integration for them in NixOS yet. If you’re interested, I have an old module that needs updating, refactoring, and perhaps upstreaming, but it could be used to run pods fully from NixOS configuration: tlaternet-server/pods.nix at b6f39969cc50293b6dce493ef3273bc0573939f5 - tlaternet-server - Gitea: Git with a cup of tea

I suppose this could also be done with JSON files, but I prefer sticking to the more cleanly documented API.

Used e.g. like so for nextcloud: tlaternet-server/nextcloud.nix at b6f39969cc50293b6dce493ef3273bc0573939f5 - tlaternet-server - Gitea: Git with a cup of tea

Ultimately it’s just prettier, though; there should be little real difference.

There is currently no services.podman.networks option or anything like that in NixOS. However, podman supports writing these json files to create networks instead, which makes things easier.

There is scarce documentation for this (which I believe to be the only official reference), so use it at your own peril. It should still be relatively stable, as it is used to keep track of state internally, and non-podman sources (such as Oracle’s documentation) do mention it, so it needs to be upgraded along with podman anyway.

This is also why things like the id are hard to find explanations for, but you’ll probably find that id is just a generated string used for internal uniqueness and identification (generally what IDs are for).

You probably shouldn’t change the ID after you’ve started using it, but the NixOS config sets up a network with the id 000000000..0, so I don’t believe the actual string matters, as long as it’s hexadecimal with the same number of digits: https://github.com/NixOS/nixpkgs/blob/9a74ffb2ca1fc91c6ccc48bd3f8cbc1501bf7b8a/nixos/modules/virtualisation/podman/default.nix#L163
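
If you would rather not invent a hex string by hand, any stable 64-digit value should do; one way to generate one from the network name (a sketch, assuming podman only cares about the format and uniqueness):

  network = {
    name = "lavalink";
    # builtins.hashString returns a 64-character hex digest, matching the id format
    id = builtins.hashString "sha256" "lavalink";
    # ...rest of the network definition as before
  };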


The question remains, though: why is the network not found? While I agree you should be using double quotes, that clearly has no effect; otherwise this message would be impossible:

It shouldn’t matter because of the shell escaping in NixOS’ modules; except if the evaluated network name is actually lavalink\n.

With the environment.etc config, that lavalink.json file should now exist. What does cat /etc/containers/networks/lavalink.json give you? It would be nice to compare it to podman network inspect lavalink, to check that the network is actually set up properly according to podman too.

And try a cat /etc/containers/networks/podman.json while we’re at it; if that file exists, no podman networking should be working, and something is going horribly wrong with your NixOS activation.

You’re using nixos-rebuild switch, and not boot or build or something, right?

I did fix the network; the problem was hostname lookup. I will look into it more when I have my tech stack working. My current problem is that I need to create a named volume in podman for my Postgres DB. Before, I was going to use SQLite, but I found out my idea was flawed.

edit:
I had to add a volume, and NixOS containers fell apart when I did that. I just use a normal docker-compose now; this isn’t the NixOS way. But this all feels like a lack of support.
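
(For what it’s worth, a named podman volume can be referenced directly in the volumes list, and podman creates it on first use; a minimal sketch with a hypothetical postgres container and a hypothetical volume name pgdata:)

  virtualisation.oci-containers.containers.postgres = {
    image = "docker.io/library/postgres:15-alpine";
    # "pgdata" is a named volume rather than a host path; podman creates it if it does not exist
    volumes = [ "pgdata:/var/lib/postgresql/data" ];
    environmentFiles = [ "${projectPath}/.env" ]; # e.g. POSTGRES_PASSWORD
  };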

:frowning:

I am sorry you feel this way, because there is no reason this shouldn’t work.

With dns_enabled = true in the network (and assuming you have no DNS server already bound to 0.0.0.0:53 on the host), the container’s name should be resolvable via DNS as its hostname.
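
So, with both containers attached to that network, the bot should be able to reach Lavalink by its container name; a minimal sketch, assuming the bot reads a hypothetical LAVALINK_HOST variable:

  virtualisation.oci-containers.containers.bongo-bot.environment = {
    LAVALINK_HOST = "lavalink"; # resolved by the network's DNS to the lavalink container
    LAVALINK_PORT = "2333";
  };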

I do not understand how we got from hostnames to named volumes.

If you ever feel like trying again, let us know and we will try to help.
Also, I am not sure there is a ‘NixOS way’. There are certainly practices that the NixOS community seems to prefer, for the same reason that they run NixOS (and because those methods lead to fewer problems down the road), but this community is generally pretty happy to get things to work however you are satisfied with them.

I am glad you got it working.


I have another example which uses Nextcloud in a Docker container.
The database is running as a normal NixOS service; however, there are multiple containers linked together with a network.

In the end it was just an adaptation of my old docker-compose.yml:

Yeah, I have adguardhome running; that’s likely it. The problem with volumes is that I had a surprise bug with containers and databases.

Is it a good idea to have a normal instance of a database, not in a container, for an app? What if I use version x with one project and another uses version y?

In that case it’s probably better to use containers.
However, I usually have one system per service anyway.

I just find it easier to manage this way. The database version is usually not as important as all the other dependencies.
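
On NixOS you can also pin the host database to a specific major version per machine, which covers most of the "version x vs version y" concern; a minimal sketch:

  services.postgresql = {
    enable = true;
    package = pkgs.postgresql_15; # pin a specific major release
  };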


Database APIs tend to be quite stable across versions; after all, it’s ultimately SQL (which is standardized) with extensions, so there’s not much wiggle room for breaking backwards compatibility. It’s very rare to have incompatibilities between database versions from the application end. At most you might need a newer version for one application because it uses a fancy new extension from postgres or such (this practically never happens; most applications want to be compatible with all SQL databases), and then you need to upgrade the database server, after which every other application will still work.

In exchange for this incredibly small risk, your database server can manage the data much more efficiently (both in terms of disk space and CPU/memory usage) than when it’s split across multiple instances, because that’s what databases are designed to do. There’s a whole mathematical theory behind their function; you do them a bit of a disservice by splitting them across containers (which is why NoSQL became a thing, though that has largely turned out to be a fad, as it’s much less efficient than relational databases, and unsurprisingly, when you reject large amounts of mathematical theory, you’re likely not great at keeping to the strict guarantees databases are supposed to make either).

Personally I also chose to run application-specific databases in containers when I used containers for service deployment, though mainly because it was easier to document configurations that way before I used NixOS.

Next time I get a chance I’ll try this again; storing the data on NixOS in a database container was the breaking point for me.

At one point I tried to have a dedicated database server for better resource sharing.
However, I then learned that network latency has a really large impact on performance; that’s why I moved back to having the database on the same system as the application.

Yeah, networking will have an effect if you have it on a separate server, but splitting that local database into multiple local containers will definitely be less efficient :wink:

That wasn’t my experience. Maybe TT-RSS and Nextcloud weren’t properly optimized at the time, or I was missing a cache the last time I tried it, but it really made the services feel very slow.
After I moved the DB (containerized) back into the same VM as the service, everything became usable again.
The DB server and the services were both running on the same host, so the network latency was as minimal as possible, but still very noticeable.

Of course it is possible to do, especially with clustered systems, but for smaller services the network penalty is, in my experience, too high to justify having a unified server.
