Successful Pleroma configuration example

Dear community,

I’m trying to deploy a Pleroma server on my NixOS system. I’ve tried several approaches, such as writing my own module that describes the installation process, and using a ready-made solution from here: NinjaTrappeur/pleroma-otp-nixos

But all of those attempts failed, either due to gaps in my knowledge or for various technical reasons.

So I would be very grateful if someone who has succeeded in running a Pleroma server on NixOS would share their configuration files with me, because I seem to be out of ideas :frowning:

using a ready-made solution from here:
NinjaTrappeur/pleroma-otp-nixos

Hey, I’m the author of this module. I currently use it for my own
instance. I actually started to rework it this week to upstream it to Nixpkgs.

Could you expand a bit on what is failing exactly?

Hello there. Yep, I was unable to get Pleroma working. I examined the module in your self-hosted Git repository and created a configuration snippet based on its options and on the example you provided (with plenty of modifications):

{ pkgs, ... }:

let
  pleromaModuleSrc = builtins.fetchTarball {
    url = ""; # URL redacted
    sha256 = "1rzq1nwapxfdq10b3xk5pmf3c22ig026bg5xb3cjay57yljp1b6s";
  };
in
{
  imports = [
    pleromaModuleSrc
    ./pleroma_config.nix # declaratively generated config.exs (not posted here)
  ];

  security.acme = {
    certs = {
      "" = { # domain redacted
        webroot = "/var/www/";
        email = ""; # redacted
        group = "nginx";
      };
    };
  };

  environment.etc."setup.psql".text = ''
    CREATE USER pleroma WITH ENCRYPTED PASSWORD 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
    CREATE DATABASE pleroma OWNER pleroma;
    \c pleroma;
    -- Extensions made by ecto.migrate that need superuser access
    -- (extension statements redacted)
  '';

  services = {
    pleroma = {
      enable = true;
      # (remaining pleroma options redacted)
    };

    postgresql = {
      enable = true;
      package = pkgs.postgresql_12;
      initialScript = "/etc/setup.psql";
    };

    nginx = {
      enable = true;
      virtualHosts."" = { # domain redacted
        sslCertificate = "/var/lib/acme/"; # path redacted
        sslCertificateKey = "/var/lib/acme/"; # path redacted
        forceSSL = true;
        root = "/var/www/";
        locations."/" = {
          proxyPass = "http://localhost:4000";
          extraConfig = ''
            add_header X-XSS-Protection "1; mode=block";
            add_header X-Permitted-Cross-Domain-Policies none;
            add_header X-Frame-Options DENY;
            add_header X-Content-Type-Options nosniff;
            add_header Referrer-Policy same-origin;
            add_header X-Download-Options noopen;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;

            client_max_body_size 16m;
          '';
        };
      };
    };
  };
}

(The config I posted above is its final version, the state in which Pleroma refuses to work. At first I didn’t have declarative setup.psql creation; I had to generate it with the commands you advise in the readme. The same goes for config.exs: I’m now creating it declaratively inside pleroma_config.nix, as you can see in the imports section. I won’t post its contents here, for privacy reasons.)

The first trouble I bumped into was an error that prevented Pleroma from connecting to the newly created PostgreSQL database. I solved it pretty quickly once I remembered that you have to interact with PostgreSQL as the postgres user. So I had to add this snippet:

users.users.pleroma = {
  extraGroups = [ "postgres" "wheel" ];
};

After that, I bumped into yet another error, which is still preventing me from running Pleroma:

systemctl status pleroma.service gives me this:

pleroma.service - Pleroma social network
     Loaded: loaded (/nix/store/210y3s3qwhzkhsq1rxnh5cgvjdszs63y-unit-pleroma.service/pleroma.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sat 2020-11-07 11:12:45 UTC; 5min ago
    Process: 26977 ExecStartPre=/nix/store/99d52b50xpapd35f8m355g3pir149jsq-pleromaStartPre/bin/pleromaStartPre (code=exited, status=0/SUCCESS)
    Process: 27103 ExecStart=/nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/bin/pleroma daemon (code=exited, status=0/SUCCESS)
    Process: 27202 ExecStop=/nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/bin/pleroma stop (code=exited, status=1/FAILURE)
   Main PID: 27200 (code=exited, status=0/SUCCESS)
         IP: 24.4K in, 3.6K out
        CPU: 20.719s

Nov 07 11:12:20 ilchub systemd[1]: Starting Pleroma social network...
Nov 07 11:12:20 ilchub pleromaStartPre[26986]: /nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/erts- /nix/store/gjkaxkg9rlklm52q92bvjr1ghkg8qjwj-ncurses-6.2/lib/ no version information available (required by /nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/erts-
Nov 07 11:12:25 ilchub pleromaStartPre[26986]: 11:12:25.380 [info] Already up
Nov 07 11:12:25 ilchub systemd[1]: Started Pleroma social network.
Nov 07 11:12:25 ilchub pleroma[27202]: /nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/erts- /nix/store/gjkaxkg9rlklm52q92bvjr1ghkg8qjwj-ncurses-6.2/lib/ no version information available (required by /nix/store/94fr23qvf3v3axiixvldahlp8srwf4w2-pleroma-otp-2.1.2/erts-
Nov 07 11:12:26 ilchub pleroma[27202]: --rpc-eval : RPC failed with reason :nodedown
Nov 07 11:12:26 ilchub systemd[1]: pleroma.service: Control process exited, code=exited, status=1/FAILURE
Nov 07 11:12:45 ilchub systemd[1]: pleroma.service: Failed with result 'exit-code'.
Nov 07 11:12:45 ilchub systemd[1]: pleroma.service: Consumed 20.719s CPU time, received 24.4K IP traffic, sent 3.6K IP traffic.

I started searching for this error (no version information available) and found a clue on the Arch Linux forum. It said that Pleroma uses ncurses 5 (while NixOS ships with ncurses 6). So I forked your repo to my own repository on Bitbucket and slightly modified default.nix to build ncurses from source with ABI version 5:

buildInputs = with pkgs; [
  (ncurses.override { abiVersion = "5"; })
];

But after that, Pleroma refused to install at all, because the current version of Pleroma requires ncurses 6; the clue I found applied to an older version. That brings me back to the error: no version information available. That’s the point where I ran out of ideas :frowning:

I hope I provided enough information. If you need anything more, just let me know.


First of all, thanks for the detailed reply. It’ll help a lot with the module API design. Looks like I unintentionally provided you with a pretty big footgun :sweat_smile:

There are several things to unpack security-wise in your setup. First, you should keep secrets out of the Nix store: it’s world-readable. That is, you probably want to remove both your environment.etc."setup.psql".text and your environment.etc."pleroma/config.exs".text attributes. You can either manage those files manually, or go with a dedicated declarative solution able to side-load your secrets without going through the store. See morph or NixOps, for instance (there are many other solutions!).
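To illustrate that idea, here is a minimal sketch of pointing PostgreSQL at a secret file that never enters the store. The /var/lib/pleroma-secrets path is an assumption: you would deploy that file out of band, manually or via morph/NixOps.

```nix
# Sketch: reference a secret file that lives outside the Nix store.
# /var/lib/pleroma-secrets/setup.psql is a hypothetical path you create
# yourself (chmod 0600, owned by postgres); it is not a module option.
services.postgresql.initialScript = "/var/lib/pleroma-secrets/setup.psql";
# ...instead of environment.etc."setup.psql".text = ''...'', which copies
# the password into the world-readable /nix/store.
```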

Regarding the extra groups you added: this is also a security issue. Generally speaking, it’s better to keep any service-specific user out of the wheel group. Putting your user in the postgres group also indirectly grants it admin rights over all your PostgreSQL databases. Instead, you want to create a Pleroma-specific database user and log in to PostgreSQL as that user. According to the snippet you sent, you are already creating this service user, so you should use it in your Pleroma configuration. Here’s my own database-related conf from /etc/pleroma/config.exs:

config :pleroma, Pleroma.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "pleroma",
  password: "$password",
  database: "$pleroma_db_name",
  hostname: "localhost",
  pool_size: 10,
  prepare: :named,
  parameters: [
    plan_cache_mode: "force_custom_plan"
  ]

I found a better way to handle these secrets; I’ll send a follow-up message in this thread when it’s ready.
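On the NixOS side, the dedicated database user can also be created declaratively. A sketch, assuming the services.postgresql.ensureDatabases/ensureUsers options (the exact permissions syntax varies between NixOS releases):

```nix
# Create the pleroma database and a matching service user declaratively,
# instead of putting the pleroma system user in the postgres/wheel groups.
services.postgresql = {
  enable = true;
  ensureDatabases = [ "pleroma" ];
  ensureUsers = [{
    name = "pleroma";
    ensurePermissions = { "DATABASE pleroma" = "ALL PRIVILEGES"; };
  }];
};
```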

As for the module, you’re currently using the master branch instead of the release I point to in the readme file. I changed the API quite a lot 3 days ago and did not update the documentation. You should use the tarball URL from the readme.
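For illustration, pinning the module to a release tarball instead of master would look roughly like this. The URL is a placeholder, not the actual release link, and the sha256 is a dummy:

```nix
pleromaModuleSrc = builtins.fetchTarball {
  # Placeholder: substitute the release tarball URL from the readme.
  url = "https://example.org/pleroma-otp-nixos/archive/v1.0.tar.gz";
  # Dummy hash: Nix reports the correct sha256 on the first build attempt.
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```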

Thanks for the ncurses trick. That message is only a warning, though: I see it as well on startup, yet my instance runs just fine. It’s probably not the reason your Pleroma service is failing. The error seems to be Erlang-related. I did not manage to reproduce it on NixOS unstable; the VM test works fine. I’m a bit confused by that one.

I did not test this module on NixOS stable; that could be the source of the problem here.

It’s hard for me to figure out what’s wrong with your setup without the full configuration. Could you reproduce it through a VM test and share that with me? If you can’t, you can contact me via email at ninjatrappeur@alter(…).fr or via XMPP at ninjatrappeur@chat.alter(…).fr (you can find what’s redacted by the (…) by clicking on my profile); that’ll make it easier for us to figure out what’s wrong.

OK, I’ll try to reproduce it. In which form would you like to receive the reproduction environment? Should it be a VirtualBox VM exported to OVA, a KVM (libvirt) VM XML file with a disk image, or is there a special solution out there designed specifically for NixOS?

The canonical NixOS way would be a NixOS VM test. That being said, an OVA would be fine as well. Anything you’re comfortable with will do the trick.

I did manage to create a repro of the bug you’re facing through a NixOS VM test.

I’m not 100% sure what’s happening there, I’ll let you know when I fix it.

As promised, I reworked the module, added some doc and a NixOS VM test.

It’s PRd there:

The VM test triggered a couple of bugs. I think yours was related to the tzdata updater trying to write into the store. I still don’t understand why my live instance is not facing this bug…

The NixOS test is running successfully. It might be a good starting point for you.
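In case it helps, a NixOS VM test has roughly this shape. Everything below (file name, options, port) is illustrative, not the actual test from the PR:

```nix
# Minimal NixOS VM test sketch (2020-era Python test driver).
import <nixpkgs/nixos/tests/make-test-python.nix> ({ pkgs, ... }: {
  name = "pleroma-basic";

  machine = { ... }: {
    imports = [ ./module.nix ];      # hypothetical path to the Pleroma module
    services.pleroma.enable = true;  # plus whatever options the module requires
  };

  testScript = ''
    machine.wait_for_unit("pleroma.service")
    machine.wait_for_open_port(4000)
  '';
})
```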

Obviously, do not replicate the secret management from the test.

Your review is more than welcome!

Hello there. I’ve created a DigitalOcean droplet where I reproduced this bug, as you requested. Please generate an SSH keypair so I can grant you access to the machine. You can mail me the public key there: (and please tell me in advance from which mail address I should expect the SSH key)