The future of the vulnerability roundups

Hi everyone,

As you may have noticed, I’m currently unable to continue the vulnerability roundups. With the current state of tooling, they require too much manual work, which I can no longer provide given my current private and professional situation.

How to proceed? To get the vulnerability roundups back in a sustainable manner, a significant amount of engineering would be needed. A big problem is vulnix’s matching algorithm, which produces far too many false positives (and quite a few false negatives, by the way).

But before we go into technical details, let me ask a general question: are the vulnerability roundups still necessary at all? There is currently a small group of folks doing most of the security fixes independently of the vulnerability roundups. On the other hand, the vulnerability roundups create lots of GitHub issues that need to be managed.

If the community decides to continue the roundups (and someone steps up as lead), perhaps the biggest step forward would be to introduce an explicit mapping between Nix packages and CPE identifiers. FreeBSD ports does this, for example, by recording the CPE identifier directly in each package’s metadata.

But let us not take the second step before the first one. I’d be glad to hear your comments on this topic. I would also be glad to continue the discussion in person at NixCon, which I’ll be attending.

Cheers, Christian


More serious vulnerability scanning for nixpkgs is something I’ve wanted for a long while.

I’ll also be at NixCon for any in-person discussion.
Thanks for the work so far :+1:


I don’t have concrete suggestions to offer but I wanted to say thanks for all the relentless work in maintaining the vulnerability roundups. I hope something less toilsome can be found, because having reliable vulnerability scanning of my systems is a very important feature for NixOS.


First up: thank you. You put a lot of effort into this, especially into the fallout in the form of issues created, maintained, and closed on GitHub.

Second: I don’t think the current workflow is maintainable. We need a better technical foundation, with CPE mapping being a good start, but also some form of patch detection to reduce false positives, like OpenEmbedded has implemented with its cve-check class.
On the UI side I’d rather have some dashboard. Plus, using the power of Nix: an overlay that sets the known-vulnerabilities meta attribute.

Back to the thread topic: thank you again, farewell roundups, but keep pestering people that they need to come up with a better replacement until yesterday!


Hi everyone, I’d like to give a summary of the state of the discussion here @ NixCon. Disclaimer: the discussion is still in progress, but I’d like to share what we have arrived at so far to get opinions from the broader community.

General thoughts

  • We do not necessarily need to maintain our own scanning engine. There are quite a few scanners around, e.g. Trivy or Grype. The focus should be on providing data as a software bill of materials (SBOM) in a compatible format (like CycloneDX or SPDX) and making the actual scanner pluggable.
  • We should definitely not try to maintain our own vulnerability database (like DSA) because we don’t have the manpower to do that. We should rather rely on scanners combining several existing data sources like NVD or OSV.
  • Identifying packages is hard. There are many formats around, each with their individual strengths and weaknesses: CPE, purl, SWID, OSV ecosystem/name, just to name a few. CPE seems to be the most widely used, but is particularly poorly suited for open source ecosystems.
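
To make the identifier discussion concrete: a purl has the shape `pkg:<type>/<name>@<version>`. A minimal sketch in Python — note that a `nix` purl type is a hypothetical assumption here, not an officially registered type:

```python
def nix_purl(pname: str, version: str) -> str:
    """Build a package-URL string for a hypothetical "nix" purl type.

    "nix" is NOT an officially registered purl type; this only
    illustrates the general purl shape pkg:<type>/<name>@<version>.
    """
    return f"pkg:nix/{pname}@{version}"

print(nix_purl("openssl", "3.0.12"))  # pkg:nix/openssl@3.0.12
```

Whatever format wins, the hard part remains the same: deciding which name and version a given Nix package should advertise so that external databases recognize it.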

What needs to be done to get scanning done

  1. A decent SBOM generator which scans some Nix artifact (see below) and produces an SBOM containing package identifiers in an acceptable quality.
    1.1 The SBOM generator should be able to scan a top-level expression (anything which produces a `result` symlink when built).
    1.2 The SBOM generator should be able to perform a static scan of nixpkgs (without instantiating anything) and emit one or many SBOMs for distribution-wide scanning.
    Requirements (1.1) and (1.2) might result in different operating modes of the same piece of software or different pieces of software.
  2. In cases where no fixed upstream version is available or updating is not a good option, we need to provide a list of applied patches so vulnerability scanners know about the fix. I don’t know which format is appropriate; some people suggest OVAL, but that looks ugly to me.
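
One building block for the SBOM generator (1.1) would be recovering (pname, version) pairs from the store paths in a closure. A rough Python sketch, using the common heuristic that the version starts at the first dash followed by a digit; the helper names and the `nix` purl type are illustrative assumptions, not an existing tool:

```python
import re

def parse_store_path(path: str) -> tuple:
    """Split a Nix store path into (pname, version).

    Heuristic: strip the base-32 hash prefix, then treat the first
    "-" that is followed by a digit as the name/version separator.
    """
    base = path.rsplit("/", 1)[-1]
    name = base.split("-", 1)[1]  # drop "<hash>-"
    m = re.search(r"-(?=[0-9])", name)
    if m:
        return name[:m.start()], name[m.start() + 1:]
    return name, ""

def to_cyclonedx_component(path: str) -> dict:
    """Render one closure entry as a minimal CycloneDX-style component."""
    pname, version = parse_store_path(path)
    return {
        "type": "application",
        "name": pname,
        "version": version,
        "purl": f"pkg:nix/{pname}@{version}",  # hypothetical "nix" purl type
    }

print(to_cyclonedx_component(
    "/nix/store/8wz6zdybmsgmzy5d4hkyzrndgn0s6y1b-openssl-3.0.12"))
```

In a real generator the list of store paths would come from something like `nix-store -q --requisites ./result`; the sketch only shows the naming step, which is where most of the identifier-quality problems live.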

SBOM and fix list should be generated automatically on Hydra for each relevant architecture and release.

What needs to be done to help maintainers fix stuff

Should we continue to create GitHub issues or not?

  • Pro: This is where developers look anyway.
  • Con: There is little overview of the overall state of the security effort. I also regularly hit GitHub rate limits.

An alternative would be a security dashboard like broken.sh (apparently no longer up). Perhaps it will be necessary to both generate tickets and provide a dashboard.

Interacting with the security infrastructure must be straightforward for maintainers. In the best case, no manual action should be necessary at all. Where manual action is necessary (e.g., recording applied fixes), it should be obvious and easy.

It should also be obvious and easy to fix package miscategorizations, e.g. when a package is associated with an incorrect purl.

In the past, vulnerabilities often got fixed along the way, e.g. during regular package updates. It would be nice to close obsolete issues automatically.

There is probably a lot more to this issue, but I hope this will get everyone started. Please feel free to share your ideas! :slight_smile:


I’ve sent an email to the OSV format authors, and I’ll be attending the OpenSSF Vuln Disclosure WG as discussed @ NixCon, to hopefully clarify some bits around purl and package identification.


Good idea. When will that meeting take place? You’ve told me, but I didn’t note it so I forgot it… :frowning:

I think it is important not to over-engineer anything, but to find out what kind of information the various scanners actually need.

+1
Aside from the scanners, there is https://dependencytrack.org/

Ideally, this would be an output of the derivation, but…

…this hints rather at a meta attribute.

like meta.appliedPatches plus scanning of the patch (file) names
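
The patch-name scanning suggested here could be as simple as a regex pass over the `patches` file names, roughly what Yocto’s cve-check does with CVE-named patches. A hedged Python sketch (the function name is made up):

```python
import re

# CVE identifiers: "CVE-", a 4-digit year, then 4 or more digits.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def cves_from_patch_names(patch_names):
    """Collect CVE identifiers mentioned in patch file names.

    A patch named e.g. "CVE-2023-38545.patch" would mark that
    CVE as fixed in the package's fix list.
    """
    found = set()
    for name in patch_names:
        for m in CVE_RE.findall(name):
            found.add(m.upper())
    return sorted(found)

print(cves_from_patch_names([
    "CVE-2023-38545.patch",
    "fix-build-on-darwin.patch",
    "cve-2021-3711-and-CVE-2021-3712.patch",
]))  # ['CVE-2021-3711', 'CVE-2021-3712', 'CVE-2023-38545']
```

Name-based scanning only catches patches that follow the naming convention; anything like a `meta.appliedPatches` attribute would still be needed for fixes that are backported without a CVE in the file name.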

Looking at Yocto again: it can produce an SBOM and .cve files for an image. That should be our benchmark.

Probably too much noise and housekeeping.

A dashboard similar to https://zh.fail or the nix-review-tools reports (automatically generated reports for Hydra jobsets, updated hourly) that leads directly back to the source would be very helpful.


Another measure we could take is to semi-automatically add meta.knownVulnerabilities. Updates to the package are then expected to remove the fixed vulns again.

Yes, but in the form of an overlay hosted outside of nixpkgs. Otherwise we would get a lot of PR noise and race conditions between merging the knownVulnerabilities PR and the actual fix.
Hydra or paranoid users can then add the overlay but the pain is kept away from regular users.

I realize this is an old discussion, but I didn’t find anything more recent on this topic.

Here’s one tool we wrote recently for something similar: sbomnix (https://github.com/tiiuae/sbomnix), a utility that generates SBOMs from Nix packages.

Did this ever lead anywhere?
Has there been any discussion as to what it would take to add Nix ecosystem support to OSV: https://osv.dev/list
