Hostname for containers

I'm writing a nix module for my Discord bot, but I need to provide the hostname my bot should use to connect to my Lavalink container

let
  projectPath = builtins.toPath "/home/redhawk/code/Bongo-Bot";
in
{
  virtualisation.oci-containers.containers = {
    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      dependsOn = [ "lavalink" ];
      environmentFiles = [ "${projectPath}/.env" ];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      ports = [ "2333:2333" ];
      volumes = [
        "${projectPath}/application.yml:/opt/Lavalink/application.yml"
      ];
    };

  };
}

Currently I don't see any way to do this

Never mind, I misremembered; it's been a while since I last set up docker networks

Are you looking for:

let
  projectPath = builtins.toPath "/home/redhawk/code/Bongo-Bot";
in
{
  virtualisation.oci-containers.containers = {
    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      dependsOn = [ "lavalink" ];
      environmentFiles = [ "${projectPath}/.env" ];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      ports = [ "2333:2333" ];
      volumes = [
        "${projectPath}/application.yml:/opt/Lavalink/application.yml"
      ];
      extraOptions = [
        "--hostname" "lavalink"
      ];
    };

  };
}

The simplest way for two containers like this is to link them: Legacy container links | Docker Docs

I suspect in theory connecting to localhost instead could work too, assuming all communication happens over port 2333, but I’m less sure of that.

A more idiomatic solution is to create a docker network for these two containers, at which point lavalink will already be known as lavalink on that network, as that’s the container name, and you wouldn’t need to specify anything else.

This is somewhat tedious to configure on NixOS though, because of annoying limitations of how docker does networking.

To do so, you will need to create a oneshot systemd service that lingers and runs e.g. docker network create discord-bot, and then add --network=discord-bot to the extraOptions of your containers. You'll also need to make the systemd services of the containers (not the containers themselves) depend on the creation of that network, via requires plus after.
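A rough sketch of what I mean, untested; the service name init-docker-network-discord-bot and the network name discord-bot are just my own choices:

```nix
{ pkgs, ... }: {
  # Oneshot service that creates the network idempotently and lingers.
  systemd.services.init-docker-network-discord-bot = {
    description = "Create docker network 'discord-bot'";
    after = [ "docker.service" ];
    requires = [ "docker.service" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
    };
    script = ''
      ${pkgs.docker}/bin/docker network inspect discord-bot >/dev/null 2>&1 \
        || ${pkgs.docker}/bin/docker network create discord-bot
    '';
  };

  # The generated container units are named docker-<container>.service;
  # make them wait for the network to exist.
  systemd.services.docker-lavalink = {
    after = [ "init-docker-network-discord-bot.service" ];
    requires = [ "init-docker-network-discord-bot.service" ];
  };
  systemd.services.docker-bongo-bot = {
    after = [ "init-docker-network-discord-bot.service" ];
    requires = [ "init-docker-network-discord-bot.service" ];
  };
}
```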

I think you can also directly write a JSON file somewhere (someone showed this off the last time I complained about this mess), but that's clearly not endorsed by docker.

Technically you can also just create the network once from the command line, but that’s imperative and will mean your config can’t easily move from this system. There’s a reason I stopped using docker.

Podman pods are much nicer, but they're also not properly supported by the NixOS module, probably because nobody wants to break docker/podman compatibility, and because of the relative lack of maintenance of that module in general. Systemd sandboxing is just better. You can always use podman pods (or docker-compose, which does similar things, with a warning not to use it in production) without the NixOS module system, of course…

Or I could give you my three-year-old pod module, but now we’re shaving yaks to shave yaks.

wow, in docker land it's this easy to set up

networks:
  lavalink:

What other options do I have for creating images? But I guess I'm forced to use docker, because the Lavalink people only publish docker images.

I created a network with docker network create lavalink,
restarted the services, ran a prune, and it wasn't used by either container.

NETWORK ID     NAME       DRIVER    SCOPE
b7f1c0fd8a3d   bridge     bridge    local
4db5814642e1   host       host      local
ae7ab4861658   lavalink   bridge    local
3e18fd12687d   none       null      local


Before I go to bed, this is my whole updated config

let projectPath = builtins.toPath "/home/redhawk/code/Bongo-Bot";
in {
  virtualisation.oci-containers.containers = {
    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      #cmd = [ "--production" ];
      dependsOn = [ "lavalink" ];
      environmentFiles = [ "${projectPath}/.env" ];
      extraOptions = [''
        --network=lavalink
        --pull=newer 
      ''];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      extraOptions = [''
        --hostname=lavalink
        --network=lavalink
        --pull=newer 
      ''];
      ports = [ "2333:2333" ];
      volumes =
        [ "${projectPath}/application.yml:/opt/Lavalink/application.yml" ];
    };

  };
}

That’s docker-compose, which was supposedly not meant for production use, at least as of when I last read its documentation, though admittedly that was a while ago. It does fix a lot of docker’s UX problems when used on a single host. It seems it was finally integrated into docker proper a few months ago: Migrate to Compose V2 | Docker Docs

You can use this project to have integration with NixOS for it, which will make things quite a bit easier: GitHub - hercules-ci/arion: Run docker-compose with help from Nix/NixOS

Docker images are OCI compliant. Assuming the lavalink people don’t abuse docker-internal things (doesn’t look like they do, otherwise you’d need to run the container in privileged mode), you could also use podman.

You could also package Lavalink for nix and write a module for it. You’d write a systemd service for it, and if you want docker-like use of namespaces you’d use systemd’s namespacing features with it: GitHub - lavalink-devs/Lavalink: Standalone audio sending node based on Lavaplayer.

The existence of those docs suggests this is largely a workflow upstream supports. Personally, having used docker-on-NixOS for a few years in the past, I would do this instead of using docker.

Whether nix packages count as images is a different question; they were designed to solve more or less the same problems. But if you want to use specifically upstream’s image, you’re likely out of luck here. Java applications don’t tend to need that, though.

And in fairness, it looks like you’re close enough to a working setup. It will just be a bit painful to manage updates, since you’re effectively using a third-party package manager.

Odd! What does docker network inspect lavalink give you when the containers are running?

I don’t think docker will unintuitively interpret a network name that is equal to a container name (there is special functionality around setting --network to a container name), but maybe it does.

Thinking about what you said before: I was going to host a Yuzu multiplayer room, and I quit because I could never connect to it. I think the same problem is happening here. I’m going to switch the oci-containers backend on NixOS to docker and then test everything again; I have a feeling docker can’t connect podman containers to a docker network.

─redhawk🎉Paisley-Park─ ~ 
 └$ docker network connect lavalink lavalink
Error response from daemon: No such container: lavalink

So after adding these virtualisation targets, I then tested again

virtualisation = {
    containers.enable = true;

    docker = {
      enable = true;
      autoPrune.enable = true;
    };

    oci-containers.backend = "docker";

    podman = {
      enable = true;
      autoPrune.enable = true;
    };

  }; 

However, now my code doesn’t think there’s a database

    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      dependsOn = [ "lavalink" ];
      environmentFiles = [ "${projectPath}/.env" ];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

It can’t find bongo.sqlite, which podman could find?

Terminology question here; are you running two podman containers and trying to use the docker command to connect them to a docker network? I’d assumed you were using docker, but now that you point it out, I guess ghcr.io means it’s not docker containers?

Yes, this will not work. Podman has more sane behavior around networks (and in general), but I’d need to have a short dive back into the docs to see how that worked again.

Odd. Since docker runs from a daemon, while podman does not, maybe permission issues are involved?

Right, it also has a podman network create: podman-run — Podman documentation

The key difference is that you can connect to multiple networks directly with podman run, without the odd requirement of connecting to every network after the first with podman network connect, which creates some really complicated scripting for systemd services.

Don’t forget to enable DNS for containers, and the container names will automatically be used as hostnames. This works with the default podman network as well, but you have to remember not to bind other DNS servers to 0.0.0.0, since they will conflict with podman’s DNS.
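In NixOS module terms that’s roughly the following, a minimal sketch; the option path matches recent NixOS releases, but verify it against your version:

```nix
{
  virtualisation.podman = {
    enable = true;
    # Enables DNS on the default podman network, so containers
    # can resolve each other by container name.
    defaultNetwork.settings.dns_enabled = true;
  };
}
```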

I have a setup using multiple podman containers on a separate network in this way, but I can’t get to it to share until I get home.

I tried simply starting my containers and then connecting them, but I can’t find the ID. Both are running fine according to systemd, but this is what I get

─redhawk🎉Paisley-Park─ ~ 
└$ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES


When I try to list hostnames on my machine, I get localhost and my machine’s hostname. Nothing else

Here are some of my configs.
I put my whole nextcloud setup in here to show how my servers are linked via hostname.

I setup a network with this:

  environment.etc."/containers/networks/podman1.json" = 
  let
    json = pkgs.formats.json {};
    podman1_settings = {
      name = "podman1";
      id = "4b65a30030d96b0e930be01ec05b749ff36e2991b7a08c2877862561e945366a";
      driver = "bridge";
      network_interface = "podman1";
      subnets = [
        {
          subnet = "10.89.0.0/24";
          gateway = "10.89.0.1";
        }
      ];
      ipv6_enabled = false;
      internal = false;
      dns_enabled = true;
      ipam_options = {
        driver = "host-local";
      };
    };
  in
  {
    source =  json.generate "podman1.json" podman1_settings;
  };

I got the information for this by doing a podman network create and then transcribing the options into nix format. I did add dns_enabled = true, since that turns on DNS for the hostnames.

Then I just use that network with extraOptions = [ "--network=podman1" ];

  virtualisation.oci-containers.containers.mariadb = {
    image = "mariadb:10.6";
    cmd = [
      "--transaction-isolation=READ-COMMITTED"
      "--log-bin=binlog"
      "--binlog-format=ROW"
    ];

    volumes = [ 
      "/share0/dockers/mariadb:/var/lib/mysql"
    ];
    environment = {
      MARIADB_ROOT_PASSWORD_HASH = "*9B7863D894E4A77FCABC38CF9D3DD5F3D69374BC";
      MARIADB_PASSWORD_HASH = "*065F4CFE02DF4D7E93A3C5F89DAFAB25E69C6DB6";
      MARIADB_DATABASE="nextcloud";
      MARIADB_USER="nextcloud";
    };
    extraOptions = [ "--network=podman1" ];
  };
  virtualisation.oci-containers.containers.redis = {
    image = "redis:latest";
    volumes = [ 
      "/share0/dockers/redis:/data"
    ];
    extraOptions = [ "--network=podman1" ];
  };
  users.users.nextcloud = {
    # mkpasswd
    uid = 33;
    hashedPassword = "*";
    description = "Nextcloud container";
    group = "nextcloud";
  };

  users.groups.nextcloud = {
    gid = 33;
  };

  virtualisation.oci-containers.containers.nextcloud = {
    image = "nextcloud:latest";
    ports = [
      "8080:80"
    ];
    volumes = [ 
      "/share0/dockers/nextcloud:/var/www/html"
      "/share0/family:/var/family"
      "/share0/archive:/var/archive"
    ];
    dependsOn = [ "mariadb" "redis" ];
    environment = {
      MYSQL_PASSWORD=""; # removed for post
      MYSQL_DATABASE="nextcloud";
      MYSQL_USER="nextcloud";
      MYSQL_HOST="mariadb";
      REDIS_HOST="redis";
    }; 
    extraOptions = [ 
      "--network=podman1"
      # "--gidmap=33:84400006"
    ];
  };

I should mention that I also have defaultNetwork.settings.dns_enabled = true; set.
This worked when I first set up my podman, but it doesn’t now for some reason. I originally only went with a second network because the default one was causing me problems with another DNS service. It took me a while to realize that this was because the other service was binding to 0.0.0.0:53, which conflicted on every network. Once I worked that out, I just never went back to using the defaultNetwork. I will admit that when I do a netstat, I do not see any DNS listening on the defaultNetwork now, even though this option is set.

At this point I just make a new network any time I have a group of containers that I want to talk.

This is probably all done better with pods, but I don’t know how to do those yet … and I don’t know how to do them in Nix.

Where does id come from, and what is the point of it? Is that the ID shown by podman network ls?
And why do you create a JSON file? And where is source used?

Let me post everything I have. I created a network called lavalink with podman network create lavalink

NETWORK ID    NAME        DRIVER
ec123baea935  lavalink    bridge
2f259bab93aa  podman      bridge

and this is my config

{ pkgs, ... }:
let
  projectPath = builtins.toPath "/home/redhawk/code/Bongo-Bot";
  json = pkgs.formats.json { };
  network = {
    name = "lavalink";
    id = "ec123baea935";
    driver = "bridge";
    network_interface = "lavalink";
    dns_enabled = true;
  };
in {
  virtualisation.oci-containers.containers = {
    bongo-bot = {
      image = "ghcr.io/redhawk18/bongo-bot:latest";
      dependsOn = [ "lavalink" ];
      extraOptions = [''
        --network=lavalink
      ''];
      environmentFiles = [ "${projectPath}/.env" ];
      volumes = [ "${projectPath}/bongo.sqlite:/app/bongo.sqlite" ];
    };

    lavalink = {
      image = "ghcr.io/lavalink-devs/lavalink:3-alpine";
      extraOptions = [''
        --network=lavalink
      ''];
      ports = [ "2333:2333" ];
      volumes =
        [ "${projectPath}/application.yml:/opt/Lavalink/application.yml" ];
    };

  };
}

The problem is the lavalink container cannot find the network to start.
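One thing I’d double-check here (my own guess, not something the error says outright): extraOptions is a list where each element becomes a single argument on the podman run command line, so a multiline ''…'' string is passed as one argument with embedded newlines, and the --network flag may never actually reach podman. A sketch with one flag per list element:

```nix
{
  # Each list element is passed to the runtime as its own argument.
  virtualisation.oci-containers.containers.lavalink.extraOptions = [
    "--network=lavalink"
  ];
  virtualisation.oci-containers.containers.bongo-bot.extraOptions = [
    "--network=lavalink"
  ];
}
```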

When you use podman network create, it creates a JSON file inside /etc/containers/networks. I used the file it creates to get the id (and the other settings) to put into my configuration.nix, so that it would be documented as part of the build, recreated on rebuilds (and not lost on reboot), and easy to modify. This nix code just recreates that same file in /etc/containers/networks.

source is environment.etc."/containers/networks/podman1.json".source,
which defines the contents of that file.

I don’t think your let network binding is actually doing anything (though I am far from a Nix expert). Since your network is already showing up in podman network ls, you might want to just check the lavalink file at /etc/containers/networks/ and see if dns_enabled is turned on in it.

If you can get it to work with the file, I still advise moving the contents of that file into your configuration.nix. NixOS doesn’t like to keep manually modified /etc files after a reboot.

I don’t have a /containers folder at all?

 └$ ls /
bin  boot  boundman  dev  etc  home  lost+found  nix  opt  proc  root  run  srv  sys  tmp  usr  var

Try:
ls /etc 🙂
It should be there.

This is currently my network

{ pkgs, ... }:
let
  json = pkgs.formats.json { };
  network = {
      name = "lavalink";
      id = "ec123baea9350a6a4447d389856a83d0ff9f130cee268b4476ca03c7578f1ccb";
      driver = "bridge";
      network_interface = "lavalink";
      created = "2023-09-12T13:00:57.33983809-04:00";
      subnets = [
            {
                subnet = "10.89.0.0/24";
                gateway = "10.89.0.1";
            }
      ];
      ipv6_enabled = false;
      internal = false;
      dns_enabled = true;
      ipam_options = {
            driver = "host-local";
      };

  };
in {
  environment.etc."/containers/networks/lavalink.json" = json.generate "lavalink.json" network;
}

However, I get this error, which I’ve never seen before

error: The option `environment.etc."/containers/networks/lavalink.json".__ignoreNulls' does not exist. Definition values:
       - In `/etc/nixos/servers/bongo-bot.nix': true
(use '--show-trace' to show detailed location information)

Also, I don’t seem to have a network in this folder

─redhawk🎉Paisley-Park─ /etc/containers/networks 
 └$ ls
podman.json
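Regarding the __ignoreNulls error: environment.etc.<name> expects an attribute set (with source, mode, and so on), not a derivation assigned directly, which is why the earlier podman1 example assigned to .source. A sketch of the corrected assignment, assuming the json and network let bindings from the config above; note the attribute name is relative to /etc, so no leading slash is needed:

```nix
{
  # Assign the generated file to .source, not to the option itself.
  environment.etc."containers/networks/lavalink.json".source =
    json.generate "lavalink.json" network;
}
```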