Proposal to have an AI usage policy

I’ve filed an issue in the Nix repository suggesting that an AI usage policy be adopted: Add AI usage policy to CONTRIBUTING.md · Issue #15340 · NixOS/nix · GitHub, but I think this could also apply to nixpkgs. I wanted to solicit feedback on the idea here on Discourse, since just filing an issue in Nix won’t get much visibility, and this is something that should have a lot of community input.


I do too.

That said, policies like this are only effective if we assume that the contributor is acting in good faith. Those who aren’t can simply ignore the policy.

The question then becomes: how do you enforce it? How do you distinguish an AI-generated PR from one written by a human?


Prior discussion: How do we deal with AI-generated issues? · Issue #410741 · NixOS/nixpkgs · GitHub


I think it is OK to assume that malevolent actors cruising for repos to boost their open-source cred (or whatever their motivations are) are probably not interested in contributing to Nix or nixpkgs, and would target higher-profile projects instead. Most of the people opening AI-assisted PRs against Nix or nixpkgs likely have some actual vested interest in what they’re submitting, and are thus probably not malicious, but possibly naive.


Require a “human in the loop” (no fully automated PRs, must completely understand the generated output and be able to explain it)

Note that a huge fraction of nixpkgs commits intentionally come from… PRs created by a fully automated bot, one that is far less sophisticated than the AIs you’re mentioning. And that’s a good thing. Just one example.

I personally don’t think that contributors need a policy in order to close PRs and issues that don’t make sense, even fully human-produced ones. It’s not that I’m against having a policy about this; I’m just not so sure about its practical impact.


The bot is only effective because of scripts that people actually wrote, though, unlike AI. r-ryantm can only produce code that actually evaluates.


I would hope it would be apparent (but perhaps it isn’t) that vetted, approved internal infrastructure would not be subject to the “no fully automated PRs” rule, and that it would instead apply to non-maintainer contributors. But it may be worth adding some language to the policy clarifying this, so that people are dissuaded from comparing their automated slop PRs to a piece of core infrastructure.


OpenSSF’s vulnerability disclosure working group is also working on an AI usage policy. It is targeted more at vulnerability reports, but still relevant: