Run nginx with SSL in nixos-container

I would like to serve multiple domains from one server, e.g. a.example.com and b.example.com, and I would like to run a separate nginx instance in a separate container for each of them.

My current config is:

{ config, pkgs, ... }:
{
  networking.hostName = "<myhostname>";
  networking.nat = {
    enable = true;
    externalInterface = "enp3s0";
    internalInterfaces = [ "ve-+" ];
  };
  networking.firewall = {
    allowedTCPPorts = [ 80 443 ];
  };

  containers.nextcloud = {
    config = {
      networking.firewall.allowedTCPPorts = [ 80 443 ];
    
      security.acme.acceptTerms = true;
      security.acme.defaults.email = "mail@example.com";
    
      services.nginx = {
        enable = true;
        recommendedGzipSettings = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
    
        virtualHosts = {
          "a.example.com" = {
            root = "/var/www";
            forceSSL = true;
            enableACME = true;
          };
        };
      };
    };
    autoStart = true;
    privateNetwork = true;
    hostAddress = "10.250.0.2";
    localAddress = "10.0.0.2";
    forwardPorts = [{
      protocol = "tcp";
      hostPort = 80;
      containerPort = 80;
    } {
      protocol = "tcp";
      hostPort = 443;
      containerPort = 443;
    }];
  };

  containers.somethingelse = {
    config = {
      networking.firewall.allowedTCPPorts = [ 80 443 ];
    
      security.acme.acceptTerms = true;
      security.acme.defaults.email = "mail@example.com";
    
      services.nginx = {
        enable = true;
        recommendedGzipSettings = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
    
        virtualHosts = {
          "b.example.com" = {
            root = "/var/www";
            forceSSL = true;
            enableACME = true;
          };
        };
      };
    };
    autoStart = true;
    privateNetwork = true;
    hostAddress = "10.250.0.3";
    localAddress = "10.0.0.3";
    forwardPorts = [{
      protocol = "tcp";
      hostPort = 80;
      containerPort = 80;
    } {
      protocol = "tcp";
      hostPort = 443;
      containerPort = 443;
    }];
  };
}

The server at a.example.com is reachable from the internet and works (all names have been changed, obviously). When I try to reach https://b.example.com, I get an SSL certificate error in my browser saying that the certificate refers to a.example.com, and if I continue past the warning, I am served the content of a.example.com.

I probably misunderstand how the port forwarding works. Is it possible to forward one port twice? Probably not. But if I remove the port forwarding, neither web server can be reached.

Is my whole approach even valid? Or am I misunderstanding something fundamental?

Any help is very much appreciated. I also volunteer to write the respective entry in the nix wiki or the nix manual to help future users.


You can’t have two services listening on the same port; that’s fundamentally not how ports work. If you want to achieve this, you need a reverse proxy that sends requests to the correct service depending on the requested domain.

People typically use nginx for this: you could set up a virtual host for each of your containers via services.nginx.virtualHosts, then use services.nginx.virtualHosts.<name>.locations.<name>.proxyPass to pass the request to the correct service.

At that point you would probably make the internal services speak plain HTTP and have the reverse-proxy instance manage your certificates. There are also settings for that, e.g. services.nginx.virtualHosts.<name>.sslCertificateKey.
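As a rough, untested sketch of what that could look like on the host (reusing the container addresses 10.0.0.2 and 10.0.0.3 from your config; the containers would then serve plain HTTP and drop their forceSSL/enableACME settings):

security.acme.acceptTerms = true;
security.acme.defaults.email = "mail@example.com";

services.nginx = {
  enable = true;

  virtualHosts = {
    # TLS terminates here; the ACME certificates live on the host.
    "a.example.com" = {
      forceSSL = true;
      enableACME = true;
      # Plain HTTP to the first container's private address.
      locations."/".proxyPass = "http://10.0.0.2:80";
    };
    "b.example.com" = {
      forceSSL = true;
      enableACME = true;
      # Plain HTTP to the second container's private address.
      locations."/".proxyPass = "http://10.0.0.3:80";
    };
  };
};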

It’s not per se tautologous to have nginx send requests to nginx, especially when containers are involved, but it’s worth thinking about why you want to do this. You almost certainly don’t want to use virtual hosts for the internal services, though.

Why are you using a different web server for each site? Are these static web pages?

Thanks a lot for your quick replies!

If you want to achieve this, you need a reverse proxy that sends requests to the correct service depending on the requested domain.

I tried this but ended up having certificate errors.

At that point you would probably make the internal services speak plain HTTP and have the reverse-proxy instance manage your certificates.

I already tried this, but it did not work. I do not remember the specific reason anymore, but if this is the proper approach, I will try it again and post my problems below.

It’s not per se tautologous to have nginx send requests to nginx, especially when containers are involved, but it’s worth thinking about why you want to do this. You almost certainly don’t want to use virtual hosts for the internal services, though.

Why are you using a different web server for each site? Are these static web pages?

Could you expand on this? I plan to run different services in the containers (like a Nextcloud instance and a GitLab instance) and tweak the permissions accordingly for some additional security.

I do this too: https://gitea.tlater.net/tlaternet/tlaternet-server/src/branch/master/configuration/default.nix

But since then I have learned about systemd’s excellent sandboxing features, which are starting to be adopted by NixOS services. They’re a better-integrated replacement for containers for, well, sandboxing.

Check if the services you want to deploy already use this, and then maybe reconsider whether you need to use containers.

With that said, even services deployed without any sandboxing at all would require a reverse proxy in this case. In fact, the default Nextcloud module configures an nginx virtualHost for you.
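To illustrate, inside the container it is roughly just this (a sketch; the adminpassFile path is a placeholder):

services.nextcloud = {
  enable = true;
  hostName = "a.example.com";
  # Placeholder path to a file containing the initial admin password.
  config.adminpassFile = "/var/lib/nextcloud/adminpass";
};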

Configuring an nginx instance separate from what the NixOS module configures is unnecessary, though; you can just bind the nextcloud container’s port 80 to e.g. 3080, and then use an nginx that listens on your real port 80 to proxyPass to 3080 for the nextcloud subdomain (see the sketch below). Amusingly, because Nextcloud itself isn’t actually a web server, this will be an nginx → nginx situation, but for other web services it won’t.
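Something along these lines (an untested sketch, reusing the container name from the original post):

# Bind the container's internal port 80 to port 3080 on the host.
containers.nextcloud.forwardPorts = [{
  protocol = "tcp";
  hostPort = 3080;
  containerPort = 80;
}];

# The host's nginx terminates TLS and forwards to the container.
services.nginx.virtualHosts."a.example.com" = {
  forceSSL = true;
  enableACME = true;
  locations."/".proxyPass = "http://127.0.0.1:3080";
};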

This.

Between setting up different users for every service, systemd sandboxing, and the awesome power of Nix for providing software, you have everything you need, to the point where you don’t really need to containerize.

A single instance of nginx to serve it all up. Nice and simple.


Thanks for pointing in this direction! Do you have an example of how to set this up within NixOS?

Firstly, for a lot of services it’s already set up if you just use the default NixOS options (i.e. .enable them and change whatever you like through the public interface). This might be of interest: Systemd sandboxing audit (though it seems to be down right now?).

You can also look at the sources and see if you spot something like this to verify whether a service has it: https://github.com/NixOS/nixpkgs/blob/3b5295131d4504fa422c45c08c808bf9da656476/nixos/modules/services/web-servers/nginx/default.nix#L878

Which is also a good example :) Basically, inside serviceConfig you can set any systemd service setting you like, including all of the settings from systemd.exec. The nginx service is a pretty good example, but I read through the systemd.exec page whenever I set one of these up.
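For instance, a sketch for a hypothetical service (the name myservice is a placeholder; all of the options are standard systemd.exec settings):

systemd.services.myservice.serviceConfig = {
  # Give the service its own /tmp and a private /dev without real devices.
  PrivateTmp = true;
  PrivateDevices = true;
  # Make the file system hierarchy read-only for this service and hide home directories.
  ProtectSystem = "strict";
  ProtectHome = true;
  # Forbid gaining new privileges, e.g. via setuid binaries.
  NoNewPrivileges = true;
};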

For nextcloud specifically, there’s not much you’ll need to think about, since, as I mentioned, it uses nginx (and whichever SQL database you go with), which, as you can see, is already hardened: https://github.com/NixOS/nixpkgs/blob/45b92369d6fafcf9e462789e98fbc735f23b5f64/nixos/modules/services/web-apps/nextcloud.nix#L974

If you wanted to be paranoid, you could consider hardening the cron and plugin update services downstream (or just sending a PR upstream): https://github.com/NixOS/nixpkgs/blob/45b92369d6fafcf9e462789e98fbc735f23b5f64/nixos/modules/services/web-apps/nextcloud.nix#L904

Or use DynamicUser for this :) I guess it’s not actually a security improvement, though.

This post contains opinions from the future, and should be treated as a ‘learning resource’.

A note from the year 2038 (where I am from).

IPv6 (or some other addressing scheme) allows each endpoint service to have its own address and well-known ‘port’.

Thus reducing the complexity of protocols. Name stacking and SSL stacking were a fudge for the shortage of usable IPv4 addresses on the internet. In 2038, we don’t have these multiplexing technologies; they are no longer needed.

However, as with most things, a shortage means a market! IPv4 is no exception.

https://auctions.ipv4.global/

And even if you do get your own provider-independent address space, you still need to run Border Gateway Protocol version 4 to make it work. It’s not easy to do, and it’s expensive!

Complexity is the bane of reliability, security and speed.

So until the internet is somehow uplifted so that we can make protocols simpler, we are stuck with these ‘middleware’ shenanigans: HTTPS host headers, proxies, NAT, and middle companies like Cloudflare.

I like what the kind souls at Cloudflare do, but I don’t like the fact that it has to exist at all. This is an architecture problem… however, architecture, especially in software, can be modified (software does not exist like physical architecture; that is the premise of a software-defined network).

If you disagree with me, consider the words of Jon Postel (96% sure): ‘The internet should be end to end; my networking stack should connect directly with yours, preserving the end-to-end nature of the internet’ (I paraphrase).

We owe a lot to this soul…

Yes, but this is more about the limitations of DNS than IPv4. You need a reverse proxy to turn one domain name into multiple subdomains, and to disambiguate where the requests should go.

The only change in IPv6 land is that you have the option to send requests to a different IP address instead of a different port - you will still need something to decide which IP to send the requests to.

I think even when IPv6 becomes commonplace I will prefer to use reverse proxies for this, since it’s easier and takes less time than faffing about with my domain name registrar (and can be done through Nix).

Do you remember a time before reverse proxies? Can you tell me how you did HA then?

I advocate for fewer moving parts, not more. Reverse proxies are bottlenecks and another thing to misconfigure.

In 2038 we don’t use them; the network layer provides these services, not layer 7. ;)

Layer 7 can never be trusted to ‘do the right thing’. I could tell you stories about reverse proxies that would make your eyes water. :)

You’ve got to help me get back to the future.

Wow, I was not aware that the NixOS folks have already done this work for me :) I see that it does not make much sense to run my services in separate (privileged) containers.

Thanks a lot for your help!


I’ve come across this thread as I need to do the same thing. But one thing that I really can’t find resources for is basically:

When do I actually need to use NixOS containers, then? When are normal services not enough? What namespaces/features does systemd-nspawn support at a low level that systemd services don’t?

These questions may be equivalent, but it would be nice to hear an expert opinion on any of them. I am assuming security requirements for production-ready servers.