The SDL organization has an official position on LLMs and generative AI now: don't use it.
-
@codecat My (untested) hope is that Claude will see this and say "I'm being told not to help with this, sorry." But I haven't actually tried it, and if it doesn't actually stop various AI agents, we'll probably rename it to LLM_NOT_WELCOME.md or something.
ChatGPT didn't know what to do with it: https://chatgpt.com/share/69dfecbc-293c-83ea-a5e7-fa39d3a4a8d1
@icculus @codecat Maybe you'd have better luck using the test refusal string for this, cf https://pivot-to-ai.com/2026/02/11/the-anthropic-test-refusal-string-kill-a-claude-session-dead/
-
@nothings Yeah, AGENTS.md won't _prevent_ bad behavior, but I'm hoping Claude/Copilot/whatever will at least be like "dude, hold up." Although someone told me that the presence of AGENTS.md suggests the whole project is AI slop by default without looking more closely, so this whole thing is a minefield at this point.
@nothings Also, totally correct about the "may not" language. Fixed in https://github.com/libsdl-org/SDL/commit/5bda0ccfb06ea56c1f15a304927f2438c1300f95, thanks!
-
@icculus Love to see this:
A comment on one of the comments in the merged PR:
`If someone attempts multiple vibecoded PRs they should get some sort of punishment. Not to be harsh but to discourage others who may attempt the same.`
I think it can be harsh. Vibecoding is the worst. Had to deal with Vibecoded PRs and you don't want those people around you. Ever. They should just do something else with their lifes. Better for everyone.@pythno This has not been a problem thus far (for us! others have definitely had this problem!), and when "that guy" shows up, we'll tell him no, and if he keeps being a pest, we will deal with him like any other pest.
-
@icculus I tried adding that AGENTS.md , and indeed:
> The project's own AGENTS.md ... clearly prohibits AI-generated code
one suggestion later:
> Update (AGENTS.md)
> ...
> Done. Now, let me look at the codebase to add...Holy Asimov!

@sjmulder Yeah, I don't think it'll stop malicious actors, but I think it'll definitely stop people that don't know any better and aren't jerks, which is most of the people, honestly.
-
@sjmulder Yeah, I don't think it'll stop malicious actors, but I think it'll definitely stop people that don't know any better and aren't jerks, which is most of the people, honestly.
@icculus true. I did notice btw that I had to add a CLAUDE.md referring to this AGENTS.md for Claude to pick it up in the first place.
-
The SDL organization has an official position on LLMs and generative AI now: don't use it.
LLM Policy? · Issue #15350 · libsdl-org/SDL
I've noticed the use of Copilot within a few reviews (13277 and 12730) which concerns me given the vast amount of issues associated with this technology (ethical, environmental, copyright, health, etc) so I was hoping a policy could be p...
GitHub (github.com)
@icculus Thanks
-
@icculus @codecat Maybe you'd have better luck using the test refusal string for this, cf https://pivot-to-ai.com/2026/02/11/the-anthropic-test-refusal-string-kill-a-claude-session-dead/
-
@icculus true. I did notice btw that I had to add a CLAUDE.md referring to this AGENTS.md for Claude to pick it up in the first place.
@sjmulder Hmm, Claude seems to be noticing AGENTS.md directly for others that have tested, but I don't know the details.
-
@nothings Yeah, AGENTS.md won't _prevent_ bad behavior, but I'm hoping Claude/Copilot/whatever will at least be like "dude, hold up." Although someone told me that the presence of AGENTS.md suggests the whole project is AI slop by default without looking more closely, so this whole thing is a minefield at this point.
@icculus @nothings I've seen a few other projects put something like an AGENTS_NOT_PERMITTED.md file with a link to the policy next to the regular agents file, so it's at least more obvious at a glance why the agents file is there.
No, the tools don't actually recognize that file in anyway. It's just a signal for humans to reduce knee jerk reactions to the agents file presence without having read it.
-
The SDL organization has an official position on LLMs and generative AI now: don't use it.
LLM Policy? · Issue #15350 · libsdl-org/SDL
I've noticed the use of Copilot within a few reviews (13277 and 12730) which concerns me given the vast amount of issues associated with this technology (ethical, environmental, copyright, health, etc) so I was hoping a policy could be p...
GitHub (github.com)
@icculus Really glad to see this!
-
The SDL organization has an official position on LLMs and generative AI now: don't use it.
LLM Policy? · Issue #15350 · libsdl-org/SDL
I've noticed the use of Copilot within a few reviews (13277 and 12730) which concerns me given the vast amount of issues associated with this technology (ethical, environmental, copyright, health, etc) so I was hoping a policy could be p...
GitHub (github.com)
@icculus Hell yeah SDL
-
The SDL organization has an official position on LLMs and generative AI now: don't use it.
LLM Policy? · Issue #15350 · libsdl-org/SDL
I've noticed the use of Copilot within a few reviews (13277 and 12730) which concerns me given the vast amount of issues associated with this technology (ethical, environmental, copyright, health, etc) so I was hoping a policy could be p...
GitHub (github.com)
@icculus Thank you!
-
System shared this topic