I'm late to the party, it would seem
-
Where have you seen LLM generated code being merged into the kernel?
open-slopware
open-slopware - Free/Open Source Software tainted by LLM developers/developed by genAI boosters, along with alternatives. Fork of the repo by @gen-ai-transparency after its deletion.
Codeberg.org (codeberg.org)
-
Could you link to an article or a write up on the stuff you're referencing so I can look into it more, please?
-
Did you read Linus' response on the mailing list? From that linked article:
Given his sole control over what gets merged, that's all that matters.
Also note he's discussing AI as a tool for reviewing patches, nothing about AI actually writing code.
Make of that what you will.
I have not yet seen any evidence of actual AI slop code being merged into the kernel.
-
Did you read Linus' response on the mailing list? From that linked article:
Given his sole control over what gets merged, that's all that matters.
Also note he's discussing AI as a tool for reviewing patches, nothing about AI actually writing code.
Make of that what you will.
I have not yet seen any evidence of actual AI slop code being merged into the kernel.
Linux, of all projects, should be opposed to this kind of thing as a whole. Sure, we could argue it's not as bad, but I'm not comforted by that statement. The fact that those in charge don't see or care about the obvious problems is shocking.
-
That's not the whole issue.
-
Linux, of all projects, should be opposed to this kind of thing as a whole. Sure, we could argue it's not as bad, but I'm not comforted by that statement. The fact that those in charge don't see or care about the obvious problems is shocking.
I'm not trying to say one way or the other, or take anyone's side here.
Just putting it into context: "AI-generated code in the Linux kernel" isn't what's currently happening.
Unless you have evidence otherwise.
Edit: KDE's response maybe clarifies what "AI" means to me in this discussion.
We agree and we agree with many of your objections. AI has become a synonym of tech irresponsibility, greed and exploitation, like crypto was before it. The difference is AI existed before the current craze and pursued legitimate goals. That is still happening in some areas of AI research and ignoring all uses of AI would be throwing the baby out with the bath water.
LLM providers like OpenAI are scum. But the general technology around "AI" isn't as bad as that.
-
@cloudskater a lot of Linux stuff can run on FreeBSD as well by the way ;3
That's good news for sure, but this is still a massive slap in the face, and even if AI code isn't being injected directly into the kernel, I can't just switch everything around on a dime. I'm sorry if I sound bitchy, but I'm so proud of how much I've learned, and I've gushed to people about Linux and open source and how amazing it all is and how it's free from this kind of bullshit. It's especially frustrating since I struggle to grasp all the ins and outs of what this means for Linux, and if it really is this bad, I'm amazed at the lack of outrage I've heard.
-
I'm not trying to say one way or the other, or take anyone's side here.
Just putting it into context: "AI-generated code in the Linux kernel" isn't what's currently happening.
Unless you have evidence otherwise.
Edit: KDE's response maybe clarifies what "AI" means to me in this discussion.
We agree and we agree with many of your objections. AI has become a synonym of tech irresponsibility, greed and exploitation, like crypto was before it. The difference is AI existed before the current craze and pursued legitimate goals. That is still happening in some areas of AI research and ignoring all uses of AI would be throwing the baby out with the bath water.
LLM providers like OpenAI are scum. But the general technology around "AI" isn't as bad as that.
Right. Sorry, I'll try to make the post clearer because I'm not trying to mislead anyone; I'm just so upset that this is even being entertained by Linux devs that it's boiling my blood.
-
That's not the whole issue.
It is for most people. You are welcome to create your own issues in life if you wish. But Linus Torvalds is infamous for his meticulous, detailed, thorough, and often expletive-laden code reviews, and if he is willing to review AI-generated or AI-assisted code, that's entirely up to him; there is no indication he is willing to lower his coding standards one iota. He trusts his maintainers not to bring him any shitty AI code, but he's giving them the freedom to make that choice themselves. If they abuse it, he will punish them. There's no doubt about that if you know anything about how the Linux kernel development process works.
-
Right. Sorry, I'll try to make the post clearer because I'm not trying to mislead anyone; I'm just so upset that this is even being entertained by Linux devs that it's boiling my blood.
Just edited my reply for context. Hopefully it explains at least my personal view.
LLMs provided by billionaire techbros? Burn that shit to the ground.
The scientific idea and application of AI, where it's helpful and relevant, I don't see a problem with.
The difference being that no vibe-coded, AI-generated bullshit ends up in the kernel. The use of this technology elsewhere can be completely fine, if it's treated correctly.
But I'm totally on your side with "OpenAI LLM vibe-coded slop should never, ever land in the kernel." And I trust Linus on that. Given his history with regular, 100% human maintainers, he wouldn't let that garbage slide.
"AI" as a term has become synonymous with OpenAI, Anthropic, Gemini. They're just LLM products sold by companies. They should never be near real critical production systems. But the wider scientific/technology side could be applied ethically, without using those LLM products.