I’ve filed an issue in the Nix repository suggesting that an AI usage policy be adopted (Add AI usage policy to CONTRIBUTING.md · Issue #15340 · NixOS/nix), but I think this could also apply to nixpkgs. I wanted to solicit feedback here on Discourse, since an issue filed only against Nix won’t get much visibility, and this is something that should have broad community input.
I do too.
That said, policies like this are only effective if we assume that the contributor is benevolent. Those who are not can simply ignore the policy.
The question then becomes: how do you enforce it? How do you distinguish an AI-generated PR from one written by a human?
I think it is OK to assume that malevolent actors cruising for repos to boost their open-source cred (or whatever their motivations are) are probably not interested in contributing to Nix or nixpkgs, and would target higher-profile projects. I think that most of the people opening AI-assisted PRs against Nix or nixpkgs have some actual vested interest in what they’re submitting, and are thus probably not evil, but possibly naive.
vcunat (February 28, 2026, 5:45pm):
> Require a “human in the loop” (no fully automated PRs, must completely understand the generated output and be able to explain it)
Note that a huge fraction of nixpkgs commits intentionally come from… PRs created by a fully automated bot which is even way dumber than the AIs you’re mentioning. And it’s a good thing. Just an example.
I personally don’t think that contributors need a policy to close PRs and issues which don’t make sense. Even if they’re fully human-produced. It’s not like I’m really against having a policy about this, but I’m not so sure about the practical impact.
The bot is only effective because of scripts that people actually wrote, though, unlike AI. r-ryantm can only produce code that actually evals.
I would hope that it would be apparent (but perhaps not) that vetted, approved internal infrastructure would not be beholden to the “no fully automatic PRs” rule, and that this would instead apply to non-maintainer contributors. But it may be worth some verbiage in the policy clarifying this so people are dissuaded from trying to compare their automated slop PR to a piece of core infrastructure.
OpenSSF’s vulnerability disclosure working group is also working on an AI usage policy, albeit targeted more at vulnerabilities, but still relevant:
Opened 04:21 PM, 4 Feb 2026 UTC:
Open source projects are increasingly facing a wave of low-quality, AI-generated vulnerability reports and contributions—commonly referred to as "AI-slop." This issue aims to develop **best current practices for open source maintainers** to help them detect, manage, and mitigate the impact of AI-slop on their projects while still benefiting from legitimate AI-assisted security research.
## Problem Statement
The rise of AI tools has created a significant challenge for open source maintainers:
- **Volume & Quality Issues**: Projects are receiving high volumes of low-quality vulnerability reports that appear to be generated by AI with minimal or no human review, creating a "DDoS-like situation" for maintainers.
- **Maintainer Burden**: Validating these reports consumes significant volunteer time and resources. Halfway through 2025, [curl reported](https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/) that only ~5% of bug bounty submissions were genuine vulnerabilities, with around 20% appearing to be AI-generated slop.
- **Bug Bounty Impact**: Some projects have been forced to discontinue bug bounty programs entirely (e.g., [curl ended their bug bounty program in January 2026](https://github.com/curl/curl/pull/20312)), while others like [Node.js have implemented stricter signal requirements on HackerOne](https://nodejs.org/en/blog/announcements/hackerone-signal-requirement).
- **Detection Difficulty**: There is no reliable technical indicator for AI-generated content: detection is often based on "vibes" and maintainer intuition.
- **Burnout & Mental Health**: The constant stream of low-quality reports contributes to stress, frustration, and burnout - especially for unpaid volunteer maintainers. Node.js mentioned receiving over 30 AI-slop reports _during major holidays for the maintainers_ as a key reason for raising their H1 signal requirements.
- **Social Pressure**: Maintainers who reject AI-slop reports may face personal attacks and pushback.
## Goals
1. **Document the Problem**: Collect and (where possible) anonymize data on the scope and impact of AI-slop across the open source ecosystem.
2. **Develop Detection Guidance**: Provide recommendations on identifying potential AI-generated submissions, acknowledging that detection is imperfect.
3. **Create Policy Templates**: Develop example AI contribution policies that projects can adapt, inspired by existing efforts.
4. **Best Practices for Maintainers**: Provide actionable guidance maintainers can reference to reduce personal attacks and provide consistent responses.
5. **Balance Good vs. Bad AI Use**: Acknowledge that AI tools *can* find valid vulnerabilities. _The goal is to reduce slop, not ban AI entirely._
## Key Themes from Existing Public Discussions
### What Projects Are Doing
| Approach | Examples |
|----------|----------|
| Ending bug bounties | [curl/curl#20312](https://github.com/curl/curl/pull/20312) |
| Requiring higher HackerOne signal | [Node.js announcement](https://nodejs.org/en/blog/announcements/hackerone-signal-requirement) |
| AI contribution policies | [LLVM](https://llvm.org/docs/AIToolPolicy.html), [Selenium#17043](https://github.com/SeleniumHQ/selenium/pull/17043), [Django](https://github.com/django/django/commit/0f60102444d8a2cfb662a7b11b3911b52567ee54) |
| Requiring PoC videos | Various projects |
| Banning repeat offenders | Under discussion |
| Cataloging slop examples | [curl AI slop gist](https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d1cd) |
### Policy Elements from Existing Projects
Key principles emerging from LLVM, Selenium, and Django policies:
- **Human-in-the-loop accountability**: A human must review, understand, and be able to explain all AI-generated content
- **Disclosure requirements**: Substantial AI assistance should be disclosed (tool used, what was generated)
- **No autonomous agents**: AI tools should not autonomously open PRs or push commits
- **Quality bar unchanged**: AI-assisted contributions must still meet the same standards
- **Contributor remains responsible**: Copyright and quality responsibility remains with the human contributor
- **"Good first issues" protection**: AI tools should not be used for issues meant to help humans learn the project
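As an illustration, the principles above could be condensed into a short CONTRIBUTING.md section. This is a hypothetical sketch based only on the bullet points listed here, not official wording from LLVM, Selenium, or Django:

```markdown
## AI-Assisted Contributions

- A human must review, understand, and be able to explain every part of a
  submission, including any AI-generated portions.
- Disclose substantial AI assistance in the PR description (which tool was
  used and what it generated).
- Do not let AI agents open PRs, push commits, or file reports autonomously.
- AI-assisted contributions are held to the same quality bar as any other
  contribution.
- You, the contributor, remain responsible for the correctness, copyright,
  and licensing of everything you submit.
- Please leave issues labeled "good first issue" to humans learning the
  project.
```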
### Recommendations for Platforms
Platforms accepting vulnerability reports should consider:
- Implement systems to prevent automated or abusive reporting (CAPTCHAs, rate limits, etc.)
- Allow for public visibility of reports without labeling them as vulnerabilities
- Enable community feedback mechanisms for low-quality reporters
- Remove credit for abusive reporters
- Strongly encourage that only thoroughly reviewed, human-verified reports be submitted
- What else?
### Open Questions
- How do we survey the community on AI-slop impact?
- What tools can help flag probability of AI-generated content?
- How can we make project documentation "LLM-friendly" to reduce false positives (e.g., explicit threat models, scope definitions)?
- How do we help security researchers who find valid bugs but may not be qualified to create patches?
- How do we distinguish "yesterday's problem" (current slop) from "tomorrow's problem" (increasing AI coding assistance)?
## Related Efforts
- **OpenSSF AI/ML Working Group**: Has AI security on their roadmap—potential collaboration opportunity
- **DARPA/ARPAH AI Hacking Competition**: Tools being donated could help both researchers create better reports and projects analyze submissions
- **Cyber Reasoning SIG**: Working on leveraging DARPA tooling for finding vulnerabilities AND generating patches
- **FOSDEM 2026**: [OSS in Spite of AI talk](https://fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-of-ai/) and related [GVIP Summit session](https://www.gvip-project.org/summit01/agenda/#aislop)
## Proposed Deliverables
1. **Best Practices Document**: A guide maintainers can reference when setting policies and responding to AI-slop
2. **Policy Template(s)**: Adaptable templates for AI contribution/disclosure policies
3. **Community Survey**: Coordinated with AI/ML WG to gather data on impact
4. **Blog Post**: Q2 vulnerability coordination theme—solicit community involvement
## How to Contribute
We welcome input from:
- Open source maintainers who have experienced AI-slop
- Security researchers (including those using AI tools productively)
- Bug bounty platform representatives
- AI/ML security experts
Please share:
- Examples of AI-slop patterns you've observed
- Policies or approaches that have worked for your project
- Ideas for detection or mitigation strategies
- Data you can share (anonymized if needed) on the scope of the problem
## References
### Blog Posts & Articles
- [Death by a thousand slops](https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/) - Daniel Stenberg (curl)
- [New era of slop security reports for open source](https://sethmlarson.dev/slop-security-reports) - Seth Larson (Python)
- [The end of the curl bug bounty](https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/) - Daniel Stenberg
### Project Policies & Changes
- [LLVM AI Tool Use Policy](https://llvm.org/docs/AIToolPolicy.html)
- [curl bug bounty discontinuation PR](https://github.com/curl/curl/pull/20312)
- [Selenium AI-assisted contribution policy PR](https://github.com/SeleniumHQ/selenium/pull/17043)
- [Django AI disclosure requirement](https://github.com/django/django/commit/0f60102444d8a2cfb662a7b11b3911b52567ee54)
- [Node.js HackerOne Signal Requirement](https://nodejs.org/en/blog/announcements/hackerone-signal-requirement)
### Examples & Data
- [AI slop security reports submitted to curl (gist)](https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d1cd)
### Talks & Events
- [FOSDEM 2026: OSS in Spite of AI](https://fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-of-ai/)
- [GVIP Summit AI-Slop Session](https://www.gvip-project.org/summit01/agenda/#aislop)