I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.
-
@thesamesam @lanodan @ariadne gotcha, rules for thee but not for me
-
i guess my point here is that reactionary behavior does not really benefit anyone and just leads to bad decisions
@ariadne it's protestantism but swapping the god from the ethereal one to "reason". if you are bad you are tainted permanently and must be stoned; if they stopped using AI tools it would also not be enough because they are "tainted".
this pattern repeats over and over from people who unlearned one piece but didn't deprogram the religious dogmatic patterns, and you end up here.
is the Linux Foundation funding the destruction of jobs, removing human contributions, destroying the world with debt, any of that? of course not! but it's still dogma.
I don't have a good answer to this, just to remind people what the actual goals and actions of orgs are and hope they listen.
-
@colinstu but that's the thing. redox is not a project that we can shift our production computing to immediately.
@ariadne indeed it’s not. Yeah the argument right now (to move asap) is just a nonstarter. It’s going to take time (if ever) to de-AI codebases and projects. There isn’t going to be any simple fix or solution to it
For those who hold onto this, what do they use currently? Do they actually reap what they sow?
-
@ariadne I don't want to see the world eaten by AI but people use the tool and it drives results for them. There's nowhere much else to go.
It's like Stallman arguing for owning every piece of your machine - eventually, you have some closed source firmware blob. Purity vs reality.
-
@colinstu at least in my case, every time i've embraced LLM technology, i've come to regret it basically immediately.
case in point: grammarly copyediting feature
-
@thesamesam @bluca @lanodan personally, i don't even think i *care* about LLM-based reviews.
what i care about is LLM-based code generation because every time i've interacted with people using those tools to produce changesets, it's been fucking miserable
-
@thesamesam @bluca @lanodan i guess to me, it feels unnatural and jarring to argue with a chatbot in a code review.
but that is far less harmful than dealing with changesets where the author does not even fucking know what he is submitting and cannot defend his work.
*that* is true misery as a maintainer.
-
@thesamesam @lanodan @ariadne and I'm pointing out that the distinction is specious and a glaring case of double standards. Everyone who uses these tools does so in different ways, and you don't get to do moral grandstanding just because you arbitrarily drew a line in the sand where it's most convenient for you, and not a millimeter further. Doesn't work that way, sorry
-
@thesamesam @bluca @lanodan basically the problem is AI as force multiplier for charlatanism.
claude making it miserable for charlatans to get their PRs merged actually seems like a positive use of the technology...
-
@thesamesam @ariadne @bluca Kind of still feels bad given how overblown a lot of security vulnerabilities are (I guess ICANN and registries will get more money from website-logo vulns), plus imagine getting a big wave of low-impact security vulnerabilities.
But well that's roughly the same issues as with fuzzers, except it's combined with codegen this time.
-
@thesamesam @lanodan @bluca yes, but script kiddies also figured out how to use the fuzzers and submit slop to us with "can you tell me about your bug bounty program?"
-
@ariadne @thesamesam @lanodan of course and stuff like that gets shot into the sun with a rocket without mercy.
But you don't argue with chatbots in reviews - these days claudebot's output is about 90% signal, 10% noise. The noise you just dismiss, there's no arguing involved. But that 90% of signal has gotten really good in the past ~3 months, and there's no point denying it. This stuff was mostly crap until end of last year, but things change, and there's nothing wrong with changing views
-
@ariadne @thesamesam @bluca I think it's the kind of thing where I could end up replying "Here's my hourly rate for support requests"
-
I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.
okay. fine, I guess.
but if we are rejecting dependencies that use AI tooling, where do we go?
seriously. where do we go?
if the Linux kernel is using AI tools for codegen, then where do we go?
FreeBSD? I would put money on it that they use AI tools.
OpenBSD? NetBSD? HURD?
do we hard fork every dependency that is now tainted? do we even have the resources to do it?
FreeBSD and Illumos are the only ones reasonably close in the tech tree and I suspect both use AI tools too, as their development, like Linux, is driven by capital.
@ariadne Unfortunately we need (costly! 🤬) deep analysis of how deep the rot goes.
A good approach is multi-faceted:
- Avoiding introduction of new deps with LLM slop in them
- Holding back packages that are adopting slop when the existing package was essentially "done" and didn't need any heavy maintenance.
- Forking packages that are critical and where the LLM slop being introduced is threatening to create serious vulns or regressions.
- Watching closely in packages where the level of slop is contained so far.
The stuff in Linux (kernel) is 🤮 but probably not show-stopping for now, and not easily replaceable or pinnable. But other things can be.
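As one concrete sketch of what "holding back" a package can look like: on an APT-based distribution, a pin like the following keeps a package at its last known-good release (the package name `foo` and the version are hypothetical placeholders, not anything from this thread); other package managers have rough equivalents, e.g. versioned constraints in apk's world file or Portage's package.mask.

```
# /etc/apt/preferences.d/pin-foo  (hypothetical example)
# Hold the (hypothetical) package "foo" at the 1.2.x series.
# A Pin-Priority above 1000 both blocks upgrades past the pin
# and permits a downgrade back to it if a newer version slipped in.
Package: foo
Pin: version 1.2.*
Pin-Priority: 1001
```

The trade-off, as the list above implies, is that a held package stops receiving fixes too, which is why this only makes sense for packages that were essentially "done".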
-
> FreeBSD? I would put money on it that they use AI tools.
As of September they were working on a policy -- to ban it.
FreeBSD Project isn't ready to let AI commit code just yet: But it's OK to use it for docs and translations (www.theregister.com)
-
@bluca @thesamesam @lanodan oh yes, we have been experimenting with it at work for reviews.
it has indeed gotten pretty good.
but i hesitate to become dependent on it as a FOSS maintainer, because while the first hit is free, when the economic reality catches up... it will probably be quite expensive.