Is there currently a way to set up minio in “Distributed Mode” via NixOS’ module? The dataDir option takes a list of paths as its value (for erasure code mode), but it is unclear to me how I would go about setting up minio in distributed mode via the module, if that is possible at all. Should I write a systemd service which runs after minio to achieve this, or is perhaps somebody else running minio in distributed mode on NixOS, i.e. did I miss something obvious?
I assume by distributed mode you mean running minio across multiple nodes.
It should be possible to add a remote entry like https://minio1.example.net to dataDir. But I have never tried this, to be honest.
Yes, exactly. “distributed mode” appears to be minio terminology.
This is what I also initially assumed, given the description of the dataDir option: “Use one path for regular operation and the minimum of 4 endpoints for Erasure Code mode.”
So I assumed that by “endpoints” this option would take either a path or a list of endpoints, but defining anything apart from a path, or a list of paths, throws an error (of course, since the option is declared as type = types.listOf types.path;).
For now I have worked around this by not using the module and writing a custom systemd service instead, but I just wanted to ask in case I missed something completely obvious…
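For reference, the workaround looks roughly like this (a minimal sketch, not my exact unit; hostnames, the data path and the credentials file are placeholders):

systemd.services.minio = {
  description = "MinIO distributed object storage";
  wants = [ "network-online.target" ];
  after = [ "network-online.target" ];
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    # Distributed mode needs identical root credentials on every node,
    # e.g. MINIO_ROOT_USER/MINIO_ROOT_PASSWORD set in this file.
    EnvironmentFile = "/etc/minio/credentials";
    ExecStart = "${pkgs.minio}/bin/minio server --address :9000 http://minio1:9000/data http://minio2:9000/data http://minio3:9000/data http://minio4:9000/data";
    # Assumes a minio user/group exist on the system.
    User = "minio";
    Group = "minio";
    Restart = "on-failure";
  };
};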
Furthermore, I read the following in the release notes for 21.05: “services.minio.dataDir changed type to a list of paths, required for specifying multiple data directories for using with erasure coding…”.
This would then be for running minio in erasure code mode on a single instance, I assume.
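I.e., for a single node with erasure coding across four local drives, something like this (paths are just illustrative):

services.minio = {
  enable = true;
  dataDir = [
    "/mnt/disk1/minio"
    "/mnt/disk2/minio"
    "/mnt/disk3/minio"
    "/mnt/disk4/minio"
  ];
};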
Also, and I am not sure about this, but I don’t think minio would accept listing arguments like this. Their documentation says:
“MinIO requires using expansion notation {x…y} to denote a sequential series of MinIO hosts when creating a server pool. MinIO therefore requires using sequentially-numbered hostnames to represent each minio server process in the deployment.”
As in, e.g., “minio server https://minio{1…4}.example.net:9000/mnt/disk{1…4}/minio”, etc.
I think the expansion notation is optional; you should also be able to list all nodes explicitly.
If types.path is the issue, we could change that in the module.
Yes, that is indeed correct. I just tried this here locally.
Well, either that, or I am not smart enough to set the list correctly.
When I set, e.g.:
dataDir = [ "http://smb1:9000/d1" "http://smb2:9000/d1" "http://smb3:9000/d1" "http://smb4:9000/d1" ];
I get:
error: A definition for option `services.minio.dataDir."[definition 1-entry 1]"' is not of type `path'.
Does it work if you change the type to types.str? If yes, could you submit a PR to change the type?
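For illustration, that would mean changing roughly this in the module (sketched from memory, not the verbatim source):

dataDir = mkOption {
  type = types.listOf types.path;
  default = [ "/var/lib/minio/data" ];
  description = "The list of data directories for storing minio data.";
};

i.e. swapping the element type for types.str, or for types.either types.path types.str if we want to keep accepting proper path values as well.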
I’ll try.
To be completely frank with you, this would be a first for me. But I would love to be able to give something back, even if it is (perhaps) only for a small change like this one.
Although I imagine it could take me a while, seeing that I have never done this before and might have a bit of reading to do. My thinking is roughly: get a local copy of nixpkgs, work on my changes there, and read up on the process of submitting a PR to nixpkgs. Is that about right?
And BTW, thank you for your help…
Short answer: basically yes, but there are errors when switching the configuration, i.e. the systemd tmpfiles creation for the config and data directories fails.
The value of dataDir is used by the map function in
tmpfiles.rules = [
  "d '${cfg.configDir}' - minio minio - -"
] ++ (map (x: "d '" + x + "' - minio minio - - ") cfg.dataDir);
for the systemd tmpfiles creation unit.
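Concretely, with an endpoint list like the one above, the map produces rules such as

d 'http://smb1:9000/d1' - minio minio - -

and systemd-tmpfiles rejects that, since the path field has to be an absolute path.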
I am unclear on how to implement this functionality correctly in Nix. For now, I have created another option which, if set, skips adding the data directories to the systemd tmpfiles rules. But surely this cannot be the way to go about it.
I didn’t think about the tmpfiles. I’m also not sure what would be the best way to implement this. We basically need to differentiate whether an entry is a path or not and only iterate through the paths. I also need to read up on whether this is possible.
Yes, this is exactly what I do: have a local copy of nixpkgs, modify it, and then rebuild based on this copy using nixos-rebuild switch -I nixpkgs=/path/to/nixpkgs.
Yes, exactly; that’s what I’m currently trying to do: instead of checking another option as a condition for whether to iterate through the list, I am seeing if I can use something like builtins.isPath or regex matching. But to be honest, not being a Nix programmer, what I’m trying here feels immensely wonky, as in, I have no idea whether this is even the correct approach.
I think if you change this to
tmpfiles.rules = [
  "d '${cfg.configDir}' - minio minio - -"
] ++ (map (x: "d '" + x + "' - minio minio - - ") (builtins.filter lib.types.path.check cfg.dataDir));
it should work, since lib.types.path.check only accepts values that look like absolute paths, so the remote http:// endpoints get filtered out. I haven’t tested it though, as I don’t have a minio setup right now.
I put up a PR with the changes; could you give it a try? nixos/minio: allow distributed nodes (NixOS/nixpkgs PR #241338)
I just tried your changes: it evaluates successfully, correctly appends either the list of endpoints or the path(s) to the minio service file depending on the configuration, and creates the correct entries in tmpfiles, as far as I can tell.
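For reference, the configuration I evaluated looks like this on each node (identical root credentials are still required on every node, which I omit here):

services.minio = {
  enable = true;
  dataDir = [
    "http://smb1:9000/d1"
    "http://smb2:9000/d1"
    "http://smb3:9000/d1"
    "http://smb4:9000/d1"
  ];
};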
Well, thanks a bunch for your help/work, Pascal.
If there’s anything else I can do for you (while I still have the environment with your changes up), please let me know; otherwise I am going to mark this thread as solved, if that is ok with you…