Disnix Pi Cluster

Hey all,

So I’ve got a few Raspberry Pis imaged with the 18.09 SD image from Hydra.

I’ve found that NixOps won’t quite work for provisioning a servers-running-services style of architecture (if you’ve found otherwise, please let me know). It wants GRUB to be enabled, which conflicts with the ARM template.

So after a lot of HDMI port swapping and repetitive configging (I swear I can recite the template ARM configuration.nix from memory now), I now have a cluster with dysnomia and disnix installed (and SSH is now properly set up, so no more HDMI swapping).

Thing is, I have yet to get a configuration working…

Anyone (@sander perhaps??) know how to properly specify the infrastructure model (and/or others) for such a situation?

I’m on mobile at the moment, but I’ll post my current configs tomorrow.

As a note tho, I do have system = "aarch64-linux"; set for each node defined in the infrastructure model, and I’m using --build-on-targets.
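For context, the deployment command I’m running is along these lines (a sketch, assuming the standard disnix-env flags and the model filenames I’ll post below):

```shell
# Sketch of the deployment invocation. --build-on-targets makes the Pis
# build their own aarch64 closures instead of the coordinator machine
# (which in my case is x86 and can't build aarch64 natively).
disnix-env -s services.nix \
           -i infrastructure.nix \
           -d distribution.nix \
           --build-on-targets
```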

I’ve gotten Disnix working on x86, and NixOps on ARM, but never tried Disnix on ARM. I’m interested to see how this works out. Post your configs when you have a chance.

Ooh- I’d love to talk NixOps configs at some point so I can manage the OS-level particulars from a management server. =]

As for the Disnix setup, forgive my newbishness as this is my first foray into the tool…

Here’s the /etc/nixos/configuration.nix for each device (the only difference among them is networking.hostName):

{ config, pkgs, lib, ... }:

{
  boot.loader.grub.enable = false;
  boot.loader.generic-extlinux-compatible.enable = true;
  boot.kernelPackages = pkgs.linuxPackages_latest;
  boot.kernelParams = ["cma=32M"];

  fileSystems = {
    "/boot" = {
      device = "/dev/disk/by-label/NIXOS_BOOT";
      fsType = "vfat";
    };
    "/" = {
      device = "/dev/disk/by-label/NIXOS_SD";
      fsType = "ext4";
    };
  };

  swapDevices = [ { device = "/swapfile"; size = 1024; } ];

  services.openssh.enable = true;

  networking.hostName = "nixberrypi1";
  networking.firewall.allowedTCPPorts = [ 22 ];

  environment.systemPackages = with pkgs; [ dysnomia disnix ];

  users.users.mwilson = {
    isNormalUser = true;
    extraGroups = ["wheel"];
    openssh.authorizedKeys.keys = [
      "ssh-rsa ..."
    ];
    packages = with pkgs; [
      git
    ];
  };

  security.sudo.wheelNeedsPassword = false;
}
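(One thing I’m not sure about: whether putting dysnomia and disnix in environment.systemPackages is enough, or whether each target also needs the Disnix service enabled so the coordinator has something to talk to. From skimming the NixOS module, I suspect the config also wants something like this — treat the group name as an assumption from the Disnix docs:)

```nix
  # Sketch: enable the disnix-service on each target so the coordinator
  # can deploy to it (assuming the NixOS disnix module).
  services.disnix.enable = true;

  # The deploying user supposedly needs to be in the "disnix" group to
  # access the service; group name is an assumption from the Disnix manual.
  users.users.mwilson.extraGroups = [ "wheel" "disnix" ];
```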

For the disnix models, I tried two projects: one is an attempt to use the NixOps example at a different architectural level (microservices instead of servers-running-services), while the other is just an attempt to reproduce the StaffTracker example on some Raspberry Pis.

I’ll post the valgrind documentation example first. Here’s the infrastructure.nix:

let
  pi3bPlus =
    { ip, coreCount ? 4 }:
    { properties = {
        hostname = ip;
      };
      system     = "aarch64-linux";
      numOfCores = coreCount;
      containers = { # Disnix complained when I left this out the first time
        nginx-static-content = {
          nginxPort = 3000; # not sure where this gets used =/
        };
      };
    };

  sn       = "192.168.86."; # sn == subnet
in
{
  nixpi0 = pi3bPlus { ip = "${sn}74" ; };
  nixpi1 = pi3bPlus { ip = "${sn}189"; };
  nixpi2 = pi3bPlus { ip = "${sn}81" ; };
  nixpi3 = pi3bPlus { ip = "${sn}190"; };
}

Here’s the services.nix:

{distribution, invDistribution, system, pkgs}:

let customPkgs = import ../top-level/all-packages.nix { inherit system pkgs; };
in
rec {
### Static web content

  staticserver = {
    name = "staticserver";
    pkg = customPkgs.staticserver;
    type = "nginx-static-content";
  };
}

The distribution.nix:

{infrastructure}:

{
  staticserver = with infrastructure; [ nixpi0 nixpi1 nixpi2 nixpi3 ];
}

all-packages.nix:

{system, pkgs}:

let
  callPackage = pkgs.lib.callPackageWith (pkgs // self);

  self = {

  ### Static web content

    staticserver = callPackage ../pkgs/staticserver/valgrindDocs { };
  };
in
self

aaaand here’s the default.nix for the staticserver (the piece that I’m sure is the most wrong…):

{ stdenv, nginx, valgrind }: # I don't think nginx is needed for the build here

stdenv.mkDerivation {
  name = "valgrindDocs";

  buildInputs = [ nginx ];  # this is more of a runtime dependency than a build dependency
                            # I only put it here because I'm rushing through this and likely misunderstanding

  # Generate config file
  # Not sure if this is the place to do this, but it seems okay considering the example
  buildPhase = ''
    cat > nginx.conf <<EOF
    daemon off;
    worker_processes 2;
    events {
      use epoll;
      worker_connections 128;
    }
    error_log logs/error.log info;
    http {
      server_tokens off;
      charset utf-8;
      access_log logs/access.log combined;
      server {
        server_name localhost;
        listen 127.0.0.1:3000;
        error_page 500 502 503 504 /50x.html;
        location / {
          root ${valgrind.doc}/share/doc/valgrind/html; # manually starting an nginx server with this config works locally
        }
      }
    }
    EOF
  '';

  installPhase = ''
    mkdir -p $out/config
    cp -av * $out/config # I just want $out/config/nginx.conf to exist so that nginx will pick up on the correct doc-root when activated
  '';
  # I have no idea how this gets activated tho - still going through the documentation/examples
}

The end goal is to get this finished and load-balanced, then do a write-up/talk on this particular example at various levels of infrastructure control: a person managing a desktop, vs. NixOps managing nodes, vs. Disnix managing microservices (and hopefully, eventually, vs. Dydisnix managing a self-healing microservices infrastructure).

I don’t see where your service is defined. Are you trying to define a new service? (Disnix overloads the term “service”.) Specifically, where is “nginx-static-content” defined? You can deploy the package easily with environment.systemPackages, but activating it requires some specific code. The “wrapper” type might be what you are looking for.
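For reference, a wrapper-type service ships a script that Dysnomia invokes with the requested activity as its argument. A minimal sketch of what that could look like for your nginx setup — the store path, pidfile location, and script details here are illustrative assumptions, not something from your config:

```shell
#!/bin/sh -e
# Sketch of a Dysnomia "wrapper" script: Dysnomia calls it with the
# activity (activate/deactivate) as the first argument. Paths below
# are illustrative placeholders.
case "$1" in
  activate)
    nginx -c /nix/store/...-valgrindDocs/config/nginx.conf &
    echo $! > /var/run/valgrind-docs-nginx.pid
    ;;
  deactivate)
    kill "$(cat /var/run/valgrind-docs-nginx.pid)"
    rm -f /var/run/valgrind-docs-nginx.pid
    ;;
esac
```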

Note: I’ve had discussions with @sander about having an automatic systemd wrapper such that any NixOS systemd module could be converted into a Disnix-compatible container. This doesn’t exist yet.

See a list of available modules here.

Ooh- there’s definitely some confusion there.

I think in going through the Disnix documentation and examples, I’ve only implemented half the puzzle. I definitely haven’t gone into any Dysnomia configuration yet (I wanted to get the Disnix portion up first…), but from what I understand, those ‘containers’ in the infrastructure.nix are Dysnomia containers, so that makes sense (and also implies that Disnix and Dysnomia are… inseparable? Not that that’s a problem, just something I hadn’t put together yet).

I think by virtue of what I’ve written so far, I am defining a new service, but now that I look through the available Dysnomia modules, that may not be necessary; apache-webapplication looks very promising for the static web content portion.

@sander has a loadbalancer in the examples that I’ll heavily pull from as well. =]

I love the idea of porting extant NixOS modules to Disnix via a systemd Dysnomia module (that would lower the enterprise barrier to entry for Disnix as well). Once we (me, a few coworkers, and anyone else who wants to join, I guess) have updated the Azure SDK and ensured we can provision VPNs for at least Azure/AWS/GCE in NixOps, I’m thinking about hacking in some cost-analysis features (think Turbonomic) which Dydisnix could hopefully piggyback off of for self-healing, cloud-agnostic infrastructures at optimally low cost. =] But that’s the BHAG.