Low-effort and high-impact tasks

PS: My general assumption about “low-effort high-impact” is that any such task that exists and is known to someone would either already have been implemented, or is being held back because it is a breaking change / would make future changes harder (e.g. releasing flakes today). Or the only person who knows about it has no clue about the ecosystem, concludes it is “high-effort”, and keeps it to themselves. Therefore, I consider questions that leave the effort open, or that ask “what would you do if you had unlimited resources”, much more valuable, as those might reach the people who estimated the effort wrong.

Of course: the middle ground is covered by neither of these questions.

1 Like

This is not how effective standardization works in practice - it’s how you get “design by committee”-style standards that fit real-world applications poorly. It’s very rare for a problem domain to be fully understood right from the start, and often the nuances and peculiarities of a problem are only discovered in the process of solving it and gathering user feedback.

That means there are roughly two realistic options for standardizing things:

  1. Standardize upfront, discover it doesn’t work, have everything diverge in practice because the standard doesn’t actually solve people’s problems, and then invent a new standard that fixes those issues. In this process, those diverging from the broken standard will likely have to expend a lot of effort to defend their divergence, because the standard is viewed as “official”. This likely creates a lot of friction and negative feelings towards others in the process.
  2. Experiment first, to figure out what works in practice, then over time converge the implementations into a standard that covers all the actual needs that people have. This is a bit messy implementation-wise at the start, but eventually can meet all needs.

One way or another, you’re going to be standardizing after experimentation, that’s just fundamental to exploring a problem domain. The only difference is whether you also introduce an additional painful and high-conflict standardization process before experimentation, which is what happens when you try to standardize right away.

There’s a reason that pretty much every effective standards process in the wild has an experimentation period baked into the process, where implementations are allowed to diverge or become incompatible over time, up until the spec is finalized.

1 Like

There is a lot of stuff that is probably “low-effort” for people deeply involved in Nix, and that can be spotted by people who are not … but those people do not know how to do it. Also, low-effort does not mean that the people who think about it can spare any effort. My current time for any kind of FOSS work is… 2h per month. That limits what I can do a lot…

That does not mean I cannot contribute ideas.

4 Likes

If I had to choose one, it would be Empowering Nix Teams. Will write up a more expanded post soon!
It is a bit on the larger end of the scale, but it can be partitioned into a series of smaller items that tackle a few key areas where I think we can improve across our community and ecosystem.
The Idea? - Set up a defined model for teams in the Nix ecosystem. I propose creating a 3-tier model (Critical, Formal, and Community), where the tier a team is in is directly connected to how critical it is to the Nix ecosystem (removing the bus factor, for example).
The Value? - Nix can successfully continue scaling and running smoothly.

  • Both newcomers and veterans will have more visibility into the teams, work being done, and prioritizations. Enabling further contributions, collaboration and learning.
  • Support new Nix user growth by presenting a more reliable front when they consider moving to Nix!
  • Recognize the teams that are core to Nix, support them more formally, and uphold a certain standard to make sure they continue to operate in an optimized manner (for example, validating that the bus factor has been removed).
  • Empower non-core teams to collaborate and “do”.
  • Create a process where non-core teams can formalize, receive more support, and take on more accountability/responsibility.

How is this not huge?
We can start this in a gradual manner (and in some ways already are): kick off with one team, learn, expand to more teams, and iterate.

4 Likes

This is a very good idea.

Sometimes, it’s a case of picking a particular focus or use-case, and making a series of small coordinated changes towards a goal.

My suggestion for today: improve the output from a nixos-rebuild

  • Currently, there is a lot of noise. The verbosity of output is quite inconsistent between different actions (for example, lots of output for path substitutions).
  • Some things output essentially debug / trace level detail.
  • There are things that look like warnings, or are even explicitly worded as warnings, but they don’t actually seem to be a problem?

The result is a little overwhelming, potentially concerning for a new user, and largely pointless. Should one pay attention to all this output? Should one worry about those warnings? What is being missed once we all decide not to pay attention to them because things seem to work anyway? It’s hard to see what’s going on and where things are up to.

There are several causes:

  • all kinds of tools that are invoked as part of builds, and some generate more verbose output than others
  • there are various “doing this”, “doing that” progress prints emitted
  • the messages from the nix build itself (packages, and higher-level process stages) don’t stand out in any way
  • parallel threads mix output from all the above together

Each of these causes could use some low-effort improvements, and they can make a big impact especially when combined. Some possible suggestions, in no particular order:

  • Using some bold / colour / dim output (to an interactive terminal) for nix progress, inter- vs intra-package stages, and build-tool output
  • Review and cleanup of unnecessary messages and verbosity options
  • Being clearer on which messages need to be logged to build outputs (in case of investigation) vs displayed to the console (for informative interactive value)
  • Adding some message prefixes and common line formatting so there’s more chance of seeing where interleaved messages come from. This can convey progress too where it makes sense (such as done/in-progress/total derivation build counts)

Edit/addendum: I forgot to explicitly state that this is a theme of such tasks, and the potentially nice thing about it is that there are lots of separate parts in different tools, so different contributors can make tweaks to one tool or another according to their interests or skills.

7 Likes

This is already the case: each package being built has a ${drvname}> prefix at the beginning of the line, if I actually have build output enabled (which it is not by default).

There is a progress indicator that tells me how many drvs are currently building, are already built, and have to be built overall. The line also indicates the size and progress of paths being substituted, in terms of download and disk size.

1 Like

If we’re going to discuss output, we have to clarify whether we’re talking about the new CLI or the old one. They do things quite differently. I feel that the new CLI is decent enough in this regard, even when -L is used, though the old one is indeed a mess.

1 Like

There is no “new” or “old” nixos-rebuild CLI; there is just nixos-rebuild. So I am not sure what you want to distinguish here?

nixos-rebuild uses the new experimental interface when your config is a flake, and the old one otherwise.
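To make the distinction concrete, here is a rough sketch of the two invocation paths (myhost is a placeholder hostname; as far as I understand, nixos-rebuild also picks the flake path automatically when /etc/nixos/flake.nix exists):

    # Flake-based configuration: goes through the new, experimental nix CLI
    sudo nixos-rebuild switch --flake /etc/nixos#myhost

    # Channel-based, non-flake configuration: goes through the classic nix-build code path
    sudo nixos-rebuild switch --upgrade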

I was referring to what I see when running nixos-rebuild switch --upgrade, explicitly.

There’s definitely more of this kind of stuff when I enter a flake devshell via direnv.

If this is coming, or at least already partly there: great! I hope this means the tasks necessary to surface it better are indeed simpler!

Oh, I wasn’t aware that the progress indicator is not there if you do not use a flake configuration…

1 Like

The --upgrade tells me that you are probably not using flakes. Then, as @tejing pointed out, what you see is different from what we see.

1 Like

Not for system configs, there hasn’t (yet) been a sufficient need or motivation. This could be one…

1 Like

You can pipe both your non-flake and flake builds to nix-output-monitor for a great progress meter :slight_smile:

It shows a dependency graph and the average time for each step, the current time, downloaded packages, etc…
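For example, a minimal sketch of both ways to hook it up (assuming nix-output-monitor is installed and provides the nom command):

    # Non-flake build: pipe the build output into nom
    nix-build '<nixpkgs>' -A hello 2>&1 | nom

    # Flake build: use the wrapper command that nom ships
    nom build nixpkgs#hello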

4 Likes

Oh, it does work for flakes now? I need to take a closer look!

2 Likes

It does work with flakes :+1:t3:
It produces much nicer output than the plain flakes CLI does :smiley:

1 Like

Is it doable to release an alpha version of NixOS with the new features enabled by default?

What do you mean? Which features?

Any new feature that would indeed be available in the next release is already available by default when using nixos-unstable.

There you can use them if you need them.

Not so much projects/tasks, but areas to explore that I suspect have a lot of low-hanging fruit:

  • Better (any?) tracking of CVE and security vulnerability remediation. ckauhaus used to do the vulnerability roundup bugs every once in a while, but as far as I can tell that’s not happening anymore, and there was a ton of space to improve the process (tracking, figuring out how to handle false positives better, making things more real-time, etc.).

  • More automation to help maintainers keep their packages up to date and building successfully. ryantm’s bot is great, but as far as I can tell, when it fails to create an auto-update PR that still builds, it just ignores the problem without telling anyone! I’d much rather get an email/bug saying “there’s a new version but it needs manual work to update, have a look when you can” (in fact I’ve built myself some automation to do exactly that… a rough sketch of the idea follows after this list). Similarly, when one of the packages I maintain is broken on Hydra, I might not notice until the next time I update my systems, leading to things being broken for longer than they should be.

  • It might be useful to have something to identify “important packages lacking maintainers” (easy), as well as “important packages that have maintainers on paper, but whose maintainers don’t actively maintain them anymore” (harder). Someone with access to cache.nixos.org stats could probably figure out “important packages” based on download stats – it’s not a perfect heuristic, but likely good enough. I’d happily adopt some of these packages! (I do that regularly, but usually only after the packages in question have annoyed me with bugs / being outdated / …, and I’d rather we just skip the annoyance part :slight_smile: ).
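To illustrate the kind of “tell me when an update needs manual work” automation mentioned in the second point above, here is a rough, hypothetical sketch (mypackage, the upstream repo, the mail address, and the use of the flakes-enabled nix CLI are all assumptions; the real nixpkgs-update bot works differently):

    #!/usr/bin/env bash
    set -euo pipefail

    pkg=mypackage                 # placeholder nixpkgs attribute
    repo=example/mypackage        # placeholder upstream GitHub repo

    # Version currently packaged in nixpkgs (requires the flakes-enabled CLI).
    current=$(nix eval --raw "nixpkgs#${pkg}.version")

    # Latest upstream release tag, with a leading "v" stripped.
    latest=$(curl -s "https://api.github.com/repos/${repo}/releases/latest" \
             | jq -r .tag_name | sed 's/^v//')

    # Notify the maintainer if the two diverge; the update itself stays a manual step.
    if [ "$current" != "$latest" ]; then
      echo "${pkg}: ${current} -> ${latest}, may need a manual update" \
        | mail -s "nixpkgs update needed: ${pkg}" maintainer@example.org
    fi

A cron job or systemd timer running something like this per maintained package would cover the “notice new versions” half; noticing Hydra build failures would need a separate check against Hydra’s build status.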

3 Likes