Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
The most surprising for me is Anubis.
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
@civodul
AFAIK, for Hurd projects like GNU Mach, LLMs are "only" used to point out possible problems. Code should always be written by humans.
-
@civodul "what looks like uncritical adoption" is kind of irresponsible to say without perusing the very projects you mention by name at least
@hipsterelectron I agree that the categorization is a bit too extremist. But the list is a good starting point for doing one's own explorations.
-
The most surprising for me is Anubis.
@khinsen they haven't accepted LLM contributions which is a really significant distinction
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
Meanwhile, @civodul@toot.aquilenet.fr, at Oracle:
Contributions in the OpenJDK Community must not include content generated, in part or in full, by large language models, diffusion models, or similar deep-learning systems. Content, in this context, includes but is not limited to source code, text, and images in OpenJDK Git repositories, GitHub pull requests, e-mail messages, wiki pages, and JBS issues.
I want this so bad for Guix :blobsadfrown:
-
@hipsterelectron I agree that the categorization is a bit too extremist. But the list is a good starting point for doing one's own explorations.
@khinsen @civodul i'm glad to see they provide citations now. the first version of this i saw a few weeks ago didn't. i had to delete my initial reply which failed to examine it before responding and it seems like a good change. their labels are not remotely helpful and seem intended to obfuscate. i really do not respect the categorization they employ but do not contest that the projects they include are all worth listing (including the ones @civodul mentioned in OP). i just have a strong aversion to the failure to make distinctions, which i feel harms the ability of this list's users to extend the analysis beyond LLMs to e.g. surveillance and other harmful influences
-
Meanwhile, @civodul@toot.aquilenet.fr, at Oracle:
Contributions in the OpenJDK Community must not include content generated, in part or in full, by large language models, diffusion models, or similar deep-learning systems. Content, in this context, includes but is not limited to source code, text, and images in OpenJDK Git repositories, GitHub pull requests, e-mail messages, wiki pages, and JBS issues.
I want this so bad for Guix :blobsadfrown:
-
@khinsen @civodul i'm glad to see they provide citations now. the first version of this i saw a few weeks ago didn't. i had to delete my initial reply which failed to examine it before responding and it seems like a good change. their labels are not remotely helpful and seem intended to obfuscate. i really do not respect the categorization they employ but do not contest that the projects they include are all worth listing (including the ones @civodul mentioned in OP). i just have a strong aversion to the failure to make distinctions, which i feel harms the ability of this list's users to extend the analysis beyond LLMs to e.g. surveillance and other harmful influences
@khinsen @civodul come to think of it, maybe i could be my own change and make such a table for surveillance of different varieties. i'm sorry @civodul for my initial response since i fully believe you to be aware of and thoughtful about this. i was clearly being defensive and that's extremely unhelpful here. i will try very hard to avoid this and i admire your ability to accept hard truths
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
@civodul This list is so devastating. KOReader, Hugo, AntennaPod were great projects…
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
@civodul This list is poorly curated. FreeBSD was included with a link to a commit I authored (without LLM use) as "evidence", because a report submitted to the security team made use of an LLM. It currently links to https://github.com/freebsd/freebsd-src?tab=contributing-ov-file#quality-expectations as evidence of a permissive AI policy.
-
@civodul This list is poorly curated. FreeBSD was included with a link to a commit I authored (without LLM use) as "evidence", because a report submitted to the security team made use of an LLM. It currently links to https://github.com/freebsd/freebsd-src?tab=contributing-ov-file#quality-expectations as evidence of a permissive AI policy.
@emaste I guess they consider “permissive” anything that doesn’t explicitly forbid genAI-assisted contributions.
I don’t see a commit link for FreeBSD, but maybe that’s because you reported it before?
-
@emaste I guess they consider “permissive” anything that doesn’t explicitly forbid genAI-assisted contributions.
I don’t see a commit link for FreeBSD, but maybe that’s because you reported it before?
@civodul Yeah, I submitted a ticket about misleading information for FreeBSD and they subsequently removed the commit links.
-
@civodul > A policy that permits the use of AI/LLMs in any capacity or is declared to be vibecoded. Both vibecoding and opening the door for people to vibecode count as a permissive AI policy.
What a big huge dumb pile of bollocks this is
@Profpatsch Yeah well, it’s a questionable categorization; I guess their goal is to distinguish between projects that forbid, allow, or boast about their use of LLMs.
I dislike the pointing-fingers aspect of it, but I find the links to policies etc. quite valuable.
-
@civodul "what looks like uncritical adoption" is kind of irresponsible to say without perusing the very projects you mention by name at least
@hipsterelectron Yeah sorry, that was poorly worded! Rather, I guess we can conclude from this that there’s some acceptance of genAI-produced code, though with varying degrees and differing policies.
(The fact that many projects have policies in place suggests they are, indeed, critical, regardless of the take of their policy.)
-
@emaste I guess they consider “permissive” anything that doesn’t explicitly forbid genAI-assisted contributions.
I don’t see a commit link for FreeBSD, but maybe that’s because you reported it before?
@civodul@toot.aquilenet.fr @emaste@mastodon.social i think it was citing the text that was removed in this commit, which may imply AI-generated code is acceptable
-
Well-documented list of free software projects and their use of genAI:
https://codeberg.org/small-hack/open-slopware
It’s already a long list that shows what looks like uncritical adoption, both by high-profile projects (systemd, VLC, etc.) and by niche projects (GNU Mach is a prime example).
@civodul This list makes me sad.

-
@civodul@toot.aquilenet.fr @emaste@mastodon.social i think it was citing the text that was removed in this commit, which may imply AI-generated code is acceptable
-
@civodul > A policy that permits the use of AI/LLMs in any capacity or is declared to be vibecoded. Both vibecoding and opening the door for people to vibecode count as a permissive AI policy.
What a big huge dumb pile of bollocks this is
@Profpatsch@mastodon.xyz @civodul@toot.aquilenet.fr It could be a very useful project if only it were more nuanced. Right now the labels are probably only useful for people who take an extremely anti-AI stance (so plenty on Fedi I guess). The sources make everything better, though.
-
@Profpatsch Yeah well, it’s a questionable categorization; I guess their goal is to distinguish between projects that forbid, allow, or boast about their use of LLMs.
I dislike the pointing-fingers aspect of it, but I find the links to policies etc. quite valuable.
@civodul@toot.aquilenet.fr @Profpatsch@mastodon.xyz
> What a big huge dumb pile of bollocks this is
> ...Yeah well, it’s a questionable categorization
Why?
I believe the most credible evidence points to the likelihood that use of current generative AI leads to deskilling rather than upskilling. I also believe that automated testing cannot make up for what competent human software developers avoid doing in the first place.
To me this paints a picture. Projects that allow AI use are choosing short-term gains at the cost of long-term losses. I want nothing to do with software produced with that mindset, and I question the judgement of people who welcome it. Any list that helps me identify projects heading in this direction is a great help, just as uBlock Origin is a great help against adware.
At a "philosophical" level, I don't think a software project that involves closed, proprietary tools (AI or otherwise) as a key part of the development process has any business calling itself "free and open source". People who only care about getting the end result faster might disagree, but to me FOSS has always been a political project, and that project is compromised by deeply incorporating proprietary technology, in my opinion. The means matter more than the ends in this view.
With all that said, I struggle to see how this is "bollocks" or "a questionable categorization". I think it's as vital as the FOSS/not FOSS distinction, or the adware/not adware distinction.
-
@khinsen @civodul i'm glad to see they provide citations now. the first version of this i saw a few weeks ago didn't. i had to delete my initial reply which failed to examine it before responding and it seems like a good change. their labels are not remotely helpful and seem intended to obfuscate. i really do not respect the categorization they employ but do not contest that the projects they include are all worth listing (including the ones @civodul mentioned in OP). i just have a strong aversion to the failure to make distinctions, which i feel harms the ability of this list's users to extend the analysis beyond LLMs to e.g. surveillance and other harmful influences
@hipsterelectron @khinsen @civodul Yeah, judging from the cross-section of provided citations, a distinction between "considered the issues without an unambiguous conclusion", "said LLM use might be okay" and "oh no this is going to turn into a steaming pile of shit isn't it" might be useful