Hello `std` - A DevOps framework for the SDLC, with the power of Nix and Flakes

Now also with flake-parts integration:

https://flake.parts/options/std.html

More Integrations (see integration badges in the Readme):

  • nix-community/nixago
  • numtide/treefmt
  • numtide/devshell
  • nlewo/nix2container
  • fluidattacks/makes
  • astro/microvm.nix

And via the experimental NixOS adoption (see NixOS story in the Readme):

  • nix-community/disko
  • nix-community/home-manager
  • nix-community/nixos-generators
  • zhaofengli/colmena
  • planned: numtide/nixos-remote

… is getting out of its cocoon.

Highlights:

  • Keeps flake-based Nix projects organized
  • Provides a TUI to easily discover & explore a stdized repository
  • Nudges you to set up documentation right from the start and keep an Architecture Decision Record

To Dos:

Tagged today: v22-04-24

Originally inspired by: magrathea « tools - depot - monorepo for the virus lounge


15 Likes

I’m specifically keen on feedback w.r.t. docs improvements. :grinning_face_with_smiling_eyes:

Let’s just say the current docs are still very cost-efficient for their producer.

1 Like

I’m specifically keen on feedback w.r.t. docs improvements.

After reading the README and skimming the comments in the source, I am a bit lost and confused. In no particular order:

  • What problem does this solve? How does it solve the problem?
  • What are cells, organelles and clades? These concepts are used but not really defined.
  • So, there is a CLI tool (std). But how do I run it? And what does it do?
  • What does it mean that GitHub and Docker are targets? Aren’t GitHub and Docker very different things? Maybe Standard can help me build Docker containers? (Does it?) But what does it mean to target GitHub?
  • One of the example projects, The Hive, does not have a README, and instead tells me to read the source code. That’s really unhelpful when I am trying to understand Standard.
  • “Related code is grouped into Cells that can be composed together to form any functionality you can imagine.” This is a very vague sentence. And could this not also describe a flake? What is the difference between a cell and a flake?

Also, Standard is really not a great name when you want to use a search engine. And have you tried searching for the term std?

I hope this does not sound too negative. You have probably put a lot of thought and work into this. I am just confused, and genuinely do not understand what Standard is and does.

Edit: Added one more point above about cells and flakes.

12 Likes

My first impression after I read the initial announcement, has been “Wow, after FU and FU+, we now get FU++?”

And the impression remains, even today after skimming post and README again.

Can you please share if my impression is correct, and then what the improvements are, or in case my impression is incorrect, be more specific about what std actually is?

And I will not even complain about the searchability/findability of that name…

3 Likes

So this may solve some of the problems I have, but after reading the README I still have no idea what it is. A framework for flakes? To solve which problems? Cells? Clades? Organelles? What?

It can output… some helpers for Nix stuff. Or a built thing? Or… some release with some scripts? Am I right? What is this trying to do?

3 Likes

I’m very thankful for the feedback! This is precisely what I was hoping for.

I will definitely take it into account for the upcoming iterations and weave it into the docs:

  • clearer (more specific) problem statement
  • glossary (including targets and the relatively new actions)
  • better example projects (coming soon, some showing full capacity are private atm)
  • difference to flake

I might get on the last point quickly, here:

Flakes are great for simple projects, but in large polyglot projects you start to observe interesting but completely needless and complex variations in boilerplate. Maybe that’s the spell of a Turing-complete lazy functional language. :grinning:

The current flake output schema is deficient.

The handling of system is unnecessarily intrusive and complex, which hinders adoption.

nix, the CLI, is not particularly user-friendly, even after the refactoring. Too many bells and whistles.

Edit: I may add another problem. nix the CLI, NixOS the distribution, and Nix the language are indistinguishable for most users. nix CLI → std TUI. Problem solved. Case closed.

2 Likes

This still reads as “Flake Utils Plus Plus”, is this correct?

Why should I choose std over any of FU or FU+ then?

Why should I use a framework at all that does magic and indirection and probably opinionated stuff beyond my control, over some simple `packages.x86_64.foo = mkDerivation …;`?

I do not see how a flake framework can help with polyglot monorepos, as for polyglot projects the problems usually exist in the derivations, not in the flake.

5 Likes

This still reads as “Flake Utils Plus Plus”, is this correct?

I would say that interpretation is within reasonable range. It wouldn’t be my first association, but I think it’s a valid one, depending on what the last “Plus” might stand for. :grinning_face_with_smiling_eyes:

Why should I choose std over any of FU or FU+ then? (and also next question)

I recently described it in a presentation as “shared services” for the repository. If a particular context makes “shared services” for a repository an attractive proposition, then std is worth a try.
Like any opinionated framework, the value of the choices made for you must outweigh the value of making those choices yourself.

I do not see how a flake framework can help with polyglot monorepos, as for polyglot projects the problems usually exist in the derivations, not in the flake.

Personally, I think that derivation-wise, dream2nix is potentially the correct approach. std serves as the orderly lingua franca in between, as an interface to all sorts of language-native repository drivers.

I may be able to help here after actually recently converting a work project over to use a recent iteration of std. I think the first thing to note is that this is still very much a WIP, but the plan is to eventually reach a 1.0 and version the schema accordingly from there. One thing we are still lacking is a fully functional CLI interface, which would essentially replicate the functionality of the TUI in non-interactive settings such as CI.

The problem being solved here is largely one of organization. At IOG, for example, we have tons of projects making use of Nix, but the status quo has become one where each individual project deviated more and more into its own structure, to the point where those of us on the SRE team could become very confused when switching from project to project.

Much better to have a standard (hence the name) project structure where things like discoverability and documentation become trivial, and switching from project to project is less burdensome. Also, having documentation directly available in the TUI is a good UX for newcomers who just want to know what they can do with the project.

Already I am able to do things like build and upload OCI images of some of our deployment packages to our private registry to be deployed on cluster, or build a job description for a Nomad cluster and run it.

Soon we will have integration with our experimental in-house CI platform and I can trivially replicate CI runs locally. Actually, I can already replicate CI runs locally even if just using more “traditional” CI systems like GitHub actions.

I perceive another possible goal of the project as putting pressure on upstream Nix itself to hopefully improve UX further for all of us.

Perhaps as a one-off structure for small projects it isn’t horribly useful, but if you’ve got to manage several projects at once, it can become quite invaluable to have a sane and well-organized structure. And for onboarding newbies who may not even know much about Nix at all, it can also be useful; at least that’s the hope. Like I said, still in early stages, but already useful for my sanity personally.

8 Likes

This quote stuck, and I thought a little bit about it.

Maybe this is a truly emergent property of GitOps. In GitOps, a repository essentially becomes a declarative job spec for all sorts of things that you can do.

So yes, traditionally you would ask for the repo’s artifacts, the runnables/installables. nix, as a package manager combined with the traditional flake output schema, is somehow made-to-measure for that use case.

If that’s your business, I don’t think std will bring a ton of value.

But if your business is nix-flavored GitOps, then I’m confident that std is for you.

1 Like

Here is a decent, yet nascent, example of an interpretation of std: GitHub - IntersectMBO/cardano-world: Cardano world provides preprod and preview cardano networks, configuration documentation and miscellaneous automation.

Today I worked on a little doc overhaul.

In its current state, it claims to be absolutely perfect and without any flaws, ofc. :grinning_face_with_smiling_eyes:

https://divnix.github.io/std/

1 Like

I did not even understand this, and I think that all my projects are GitOps…

I feel like i am missing essential context to understand what this do :slight_smile:

std is mostly a mechanism for improving the UX of using Nix for people unfamiliar with it, while at the same time providing an opinionated API that can be understood at a glance from the project’s flake.nix. In addition, it attempts to limit the scope of each individual cell by not offering a full reference to self, instead expecting you to apply combinators over other cells in the repo or from external stdized projects, providing a clean and clear code boundary.

It is a fairly major deviation from the usual way of writing Nix code and may be a bit confusing at first for experienced Nixers because of this, myself included. Once you get used to the foundational concepts, though, it’s actually pretty nice for keeping the code boundaries clear and organized, as well as for defining a clear API that is easily grasped at a simple glance at the specified organelles.

The addition of yants also provides a nice way to ensure that every organelle is what it claims to be, type wise.

2 Likes

Yeah, I did not understand this at all either; I am probably missing something essential in the context ^^’

I’ll try to use more concrete terms…

A cell is like a module of code consisting of smaller sub-modules called “organelles”. An organelle is a well typed Nix value whose type is verified at eval time. There are many organelle types available, from “data” for things that would render to JSON or another data format, to “runnables” as they are called, basically something that can actually be executed. “functions” are reusable Nix functions and “devshells” are self explanatory.

Cells can reference other cells from within the same repo, or cells from external flake inputs which also implement a std api. Cells cannot reference the flake self indiscriminately, and this is by design to avoid crossing otherwise well defined boundaries between the various cells and their various organelles.

A cell is really just a directory containing organelles. An organelle can be either a “named.nix” file or a directory “named/default.nix”. Organelles are only exposed if they are listed in the API spec. This spec is where we define what types the organelles inside the cell should be. So in the linked example, if I make a directory cells/someCell, according to the API spec I can then make an organelle cells/someCell/lib.nix, which should contain an attribute set containing only Nix functions.
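As a rough sketch of what that wiring might look like in flake.nix — note that the helper names here (std.grow, cellsFrom, the organelle constructors) reflect std’s API around this era and may well differ between versions:

```nix
# flake.nix — hedged sketch of a std-ized project, not authoritative.
# std.grow, cellsFrom, organelles, std.functions, and std.devshells are
# taken from std's early API and may have been renamed since.
{
  inputs.std.url = "github:divnix/std";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";

  outputs = { std, ... } @ inputs:
    std.grow {
      inherit inputs;
      cellsFrom = ./cells;          # each subdirectory of ./cells is a cell
      organelles = [
        (std.functions "lib")       # cells/<cell>/lib.nix: only Nix functions
        (std.devshells "devshells") # cells/<cell>/devshells.nix: dev shells
      ];
    };
}
```

Only organelles declared in this list are exposed; a file in a cell directory that is not named after a declared organelle is simply not part of the API.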

Every organelle takes an attribute set as an argument with exactly two values, { inputs, cell }, and returns another attribute set of values whose types must match what was specified in the API. inputs is a reference to the flake inputs, excluding self. You can still access the sourceInfo of self from here, but that’s it, which is what I meant about enforcing code boundaries. One cannot just access self.packages.whatever; instead one must access a given package through its proper stdized API.

If the inputs implement std as well, their cells can be accessed without having to explicitly specify which system to use, like so: inputs.someFlake.someCell.someOrganelle; whichever system is currently being evaluated will be assumed.

cell is a reference to the current cell being defined in order to access other organelles from the same cell, and inputs.cells is a reference to any other cells that exist in the same repository.

In this way, it is possible to define and maintain clear code boundaries of well typed values, and a clear separation of concerns. You can still write Nix values however you want inside the organelle directories so long as they conform to the type spec in the end.
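Putting the pieces above together, a single organelle file might look like this — all concrete names here (someCell, greet, otherCell, someFlake) are made up for illustration:

```nix
# cells/someCell/lib.nix — hypothetical "functions" organelle.
# It receives exactly { inputs, cell } and must return an attribute set
# whose values match the type declared for "lib" in the API spec
# (here: only Nix functions).
{ inputs, cell }:
{
  greet = name: "Hello, ${name}!";

  # cell: other organelles of this same cell, e.g.
  #   cell.devshells.default
  # inputs.cells: other cells in the same repository, e.g.
  #   inputs.cells.otherCell.lib.someFunction
  # stdized external flakes resolve the current system implicitly:
  #   inputs.someFlake.someCell.someOrganelle
  # There is no general self: boundaries are crossed only through cells.
}
```

The deliberate absence of self is what keeps the boundaries enforceable: everything an organelle uses arrives through its two arguments.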

1 Like

I’ve tried to grasp std through the shared examples, but I feel they are rather internal or complicated.

Maybe adding a simple example for a typical GitOps workflow could work as documentation.

For instance: I would like to export my AWS key and then run the std TUI to deploy a simple cloud webserver (using terraform).

Good point! In fact, some sort of terraform clade is high on the list that would exhibit a set of standard actions such as:

  • print-to-file (similar to the data clade)
  • plan
  • apply
  • passthrough (to use all the other terraform commands)

There is an advanced pattern where we started to store encrypted tf state in a dedicated tf branch with unrelated history, but that’s probably a little bit too opinionated, even for std.

Similarly, a colmena clade is high on the list, but to make it beautiful I depend on upstream merging some PRs I’ve submitted. I’d like to wait on these before continuing to explore a colmena clade.

Another thing I’m considering is to actually consume the DAG between all std targets, which is already exposed and can be consumed from the metadata; the TUI/CLI just doesn’t yet know how to react to it.

Having a DAG feature would make it possible to have one runnable depend on another runnable that has been taught to load credentials into the execution pipeline’s environment. There are still design decisions to be made on this, though.

Although, I’d intuitively use a numtide/devshell to load env variables when I enter it.

All in all, this project still is evolving, although somewhat stabilizing and I have to admit that documentation wise, a long way lies ahead.

I think it is wise to battle-test it alongside practical use, so the various efforts that flow into its development need to be balanced through trade-offs somehow.

2 Likes

I’m now designing a greenfield terraform project and I’m curious about your git-encrypted-state way of doing things. Is it maybe a workflow based on this: https://github.com/plumber-cd/terraform-backend-git ? How do you manage locking?

1 Like

So a cell is a plugin and organelles are functions? And std makes sure everything is in expected places and glues everything together?