Nixpkgs-update/r-ryantm logs


Thanks for the precise information.
Will try to give a hand. For example, in the case below, r-ryantm just needs to be told that the 2 in davfs2 is not part of the version number. There could be a davfs2 v0.4 as well as a v3.6, right?

2018-11-10T11:21:29 davfs2 1.5.3 -> 1.5.4
2018-11-10T11:21:37 Version in attr path davfs2 not compatible with 1.5.4
2018-11-10T11:21:37 FAIL
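For illustration, a heuristic along those lines might look like this (a Python sketch, not r-ryantm's actual check; the function name and rules are made up): only treat a multi-component trailing digit run as an embedded version, so a lone trailing digit like the 2 in davfs2 is assumed to be part of the name.

```python
import re

def version_compatible(attr_path: str, new_version: str) -> bool:
    # Only a *multi-component* trailing digit run (e.g. foo_1_5_3) is
    # treated as a version embedded in the attr path; a lone trailing
    # digit, as in davfs2, is assumed to belong to the package name.
    embedded = re.search(r"(\d+(?:[._]\d+)+)$", attr_path)
    if embedded is None:
        return True  # no embedded version, nothing to conflict with
    return embedded.group(1).replace("_", ".") == new_version
```

Under this sketch, davfs2 passes for any new version, while an attr path like foo_1_5_3 would still be rejected for 1.5.4.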

I am not that much into fixing packages by hand, but I would be glad to try to make r-ryantm smarter ;-).
I also like the idea of fixing download URLs, because it is a medium-term fix.


We could add a list of packages for which to skip the version/attrpath compatibility check. The general solution to this is to make all package names conform to an unambiguous convention like I propose in


Thanks for pointing this out! It’s fixed and it looks like there were a bunch of false positives on that version incompatibility check.


There is definitely too much in these logs. I wrote a quick script to split them at

There is a lot of work to do on prefetch failures (761 cases); there is a weird number of successes (217) and an expected number of failed builds (362).
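For what it's worth, a rough tally like that can be scripted (a Python sketch; the line patterns below are assumptions based on the log excerpts in this thread, not the actual log spec):

```python
from collections import Counter

def tally(log_lines):
    """Rough tally of outcomes in an r-ryantm log (sketch only).

    Assumed patterns: lines mentioning 'prefetch' indicate prefetch
    problems, lines ending in FAIL are failures, and lines containing
    'old -> new' are update attempts.
    """
    counts = Counter()
    for line in log_lines:
        if "prefetch" in line.lower():
            counts["prefetch"] += 1
        elif line.rstrip().endswith("FAIL"):
            counts["fail"] += 1
        elif " -> " in line:
            counts["update_attempt"] += 1
    return counts
```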

I think I will start with prefetch issues.


Here’s the logs from the last few days:

2018-11-17 log of nixpkgs-update/r-ryantm runs


Great, I was able to run your script inside nix-shell -p parallel by renaming the log file to r-ryantm.

Please note I slightly changed the log format toward the end of this log. I put the failure message on the same line as FAIL. I haven’t spent the time to think of a good format for the log messages, if you have some ideas I’d be happy to hear them.


@ryantm Are more recent logs available anywhere? I came to have a look to see if I could help at all this afternoon but couldn’t see where the logs from recent runs were.


@mpickering Thanks for helping out! The logs are on my computer :). I haven’t been uploading them lately because the current code hasn’t been able to make it reliably through a complete run. That said, old logs should still be a source of work that needs doing, because most of the errors will only be fixed by human intervention (and it should be easy to see if someone already fixed it).

Here’s the latest partial run log: 20181215_ups.log


@ryantm: maybe nixpkgs-update could publish the logs as well, either on a web server with directory listing or something else simple like git with LFS enabled. It would also help with parsing if packages were separated by two newlines and compile output never contained more than one consecutive newline.
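With that convention in place, splitting the log into per-package records would be trivial (a Python sketch of the proposed convention, not of any existing nixpkgs-update format):

```python
def split_records(log_text: str):
    """Split a log into per-package records, assuming the convention
    proposed above: records separated by a blank line (two newlines),
    with no blank lines inside a record's compile output."""
    return [block.strip() for block in log_text.split("\n\n") if block.strip()]
```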


Happy new year! Here’s the 20190104 log


20190106 log


Is there any chance to get something like dub updated? I guess the main problem there is that there are multiple derivations in one file, but simply splitting it up into different files isn’t going to work because the main derivation, which is now at the bottom, doesn’t even have a version or a fetcher.


The issue with dub is not that it has multiple derivations. The issue is that the main derivation does not have a src attribute, which it checks for to make sure the changes it makes to the file actually update the sources for the main derivation. If you moved the src definition to the let block, and inherited it into the main derivation without it being used for anything, it would probably work. But it’s not so great to have the updater imposing something weird like that, so we should try to think of a better way.
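To make the workaround concrete, here is a hypothetical restructuring along the lines described above (a Nix sketch, not the actual dub expression; the version and hash are placeholders):

```nix
{ stdenv, fetchFromGitHub }:

let
  version = "1.13.0";  # placeholder, not necessarily the current release
  # Moving src into the let block lets the updater find and rewrite it...
  src = fetchFromGitHub {
    owner = "dlang";
    repo = "dub";
    rev = "v${version}";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
in stdenv.mkDerivation {
  pname = "dub";
  # ...while inheriting it here satisfies the src-attribute check, even
  # if the build itself does not use it directly.
  inherit version src;
  # ... build phases elided ...
}
```

As noted, this imposes a somewhat artificial structure on the expression just to satisfy the updater, which is why a better approach is worth thinking about.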


Got a couple of questions:

  • how often is the fetch from Repology performed?
  • I created an alias in Repology from solc to solidity (because that’s what most distributions use). nixpkgs uses the solc name - will nixpkgs-update figure this out and detect the new version?

I ask this because there’s been a new solc release that I haven’t seen in nixpkgs yet.


I manually run it every couple weeks or so. I’d like to automate it but I’m a little scared to spam everyone with low quality PRs.

I expect this to work because it uses the name of the package from the nixpkgs JSON dump returned by the repology API.


I think it would be great if it could be run daily or even hourly - there’s no reason why humans should do the trivial work :).

Have you seen many low-quality PRs produced by it? And what constitutes a low-quality PR in this case?


It takes it 2 to 3 days to get through all the packages as is. This could be sped up drastically by maintaining a database of failed outPaths.

I’m just worried that the bot might not have enough checks in place and could submit some nonsense PRs. There is also the aspect that nixpkgs-update introduces extra work for maintainers, because it catches all the updates from hyper-actively updated packages, and I want to keep the overall quality of nixpkgs-update PRs very high so they are easy to merge and to trust that they will be good.

Another reason I like to manually monitor it is that I am usually developing nixpkgs-update and I don’t have much of a test-suite, so if something goes wrong, I like to catch it through manual monitoring. If I work on improving the test suite, I think I’ll feel much more comfortable making it automatically run continuously.


Thanks, all of that sounds very reasonable.

I wonder if it could automatically close (or if not privileged enough, label as stale somehow) PRs that are superseded by a newer one. I understand that this would be adding a lot of responsibility and logic into it though.

But I think this is an extremely valuable tool that shows the way we need to go to make nixpkgs scale better.