Man, every part of this blog post sucks.
-
@xgranade do you think that once someone uses an llm they should be banned from programming forever? Or, to draw a parallel like the article does: should I, at age 8, after copying some BASIC from a magazine and proudly saying “wow, I made a game!”, have been shamed and never let near an editor again?
What exactly sucks about this article? They literally say that these tools can lead to disastrous results…
@mnl I said what sucks. Don't be a reply guy about this.
As for your first question, I mean... yeah... if you use a massively unethical tool that is designed to displace open source labor, then that's something that should be taken into serious consideration when evaluating your future work. Labor solidarity is important, not a rhetorical game to win internet points.
-
@xgranade I read the article as “there are a lot of people enthusiastic about building software now, the tools are problematic, but we shouldn’t throw the baby out with the bathwater.” Computers and software at large have always been tools of labor displacement and oppression.
PS: I fully agree on the labor solidarity point. Which is partly why I welcome all the people who find a way to gain some agency over computers, through llms or not.
-
@mnl "through LLMs or not" is not labor solidarity, it's scabbing.
-
@xgranade ok so if, say, a CNC operator comes to me and wants help with the CNC machine search engine they wrote with ChatGPT, what should I do? “Stick to your lane, buddy”? I personally told them “that’s amazing, if you ever need some help, feel free to contact me”, because I want them to be able to search for manuals on their computer without using, say, Google Drive.
-
@mnl Congrats on being a scab. I wouldn't brag about it, but hey.
-
@xgranade I’m genuinely curious what you would tell that person… that they’re a scab? Note that I offered help, that’s it… if not offering help is solidarity then I don’t understand what you are getting at.
-
Look at what happened with Claude Code. We learned via the source code leak that the whole thing is a Rube Goldberg machine of shoddy regexes and Markdown snippets telling Claude to lie, and yet proverbial moments later, Anthropic announces a new product that *totally works this time I swear* and all of a sudden discourse about AI tools "working" is completely reset.
Getting stuck in that discourse loop opens you up to being perpetually distracted from the far more important ethical problems.
@xgranade They linted the regexes this time.
-
@xgranade "And now before giving you the details of the battle, I bring you a warning: Every one of you listening to my voice, tell the world, tell this to everybody wherever they are. Watch the skies. Everywhere. Keep looking. Keep watching the skies." -- The Thing from Another World
-
@xgranade The Potemkin Village effect is deleterious to everyone's mental health. When someone has to pretend they don't know a thing, when that thing is wholly morally and ethically wrong, there are tics, tells. Anthropic clearly lied about this and then pulled the "Copyright law for me, but not for thee" crap with the DMCA takedowns, after speciously claiming, like all the other cloud AI companies, that they need everyone's data to train on. Checkmate. No credibility, sorry. I was keeping an open mind.
-
Eternal November — this new influx of users may be better than the last one
Software Freedom Conservancy (sfconservancy.org)
This is what happens to discourse when the focus is on whether AI tools "work," necessarily a complex and shifting topic that gives bad-faith actors lots of room to sow confusion, and not on the ethical catastrophe caused by adopting or allowing AI products into OSS development processes.
@xgranade It is really wishy-washy stuff, isn't it? Chains of provenance have to be established: how models were trained, who operates them, how they are hosted. Claude can't cite its own sources when asked to do so; externalized counterparty risk. TANSTAAFL.
-
@xgranade I love how this says nothing about the fact that letting LLM code into copyleft codebases makes value leak out of the project: https://www.quippd.com/writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html
We're just supposed to say "it works, don't it?" and look the other way while contributors get robbed.
-
@pathunstrom Yes, this. I would also argue that slop is far worse than spam, if only from a pure ethical perspective. Put bluntly, spam isn't inherently fascist tech, but slop is, and I wish that blog post even *mentioned* the ethical problems with letting fascist tech into OSS.
@xgranade @pathunstrom Spam, the meat product:
1. Full of fat, cholesterol, salt and sodium nitrite
2. Made in a factory that has indulged in union-busting, exploited immigrant labor, and given people autoimmune diseases from inhaling aerosolized pig brains (https://www.motherjones.com/politics/2011/06/hormel-spam-pig-brains-disease/)
3. Foisted on the people because of racism (Spam dominated the Hawaiian diet in WWII because the US barred people of Japanese descent from fishing)
Maybe the canned meat product is an even better analogy than the unwanted email....
-
@xgranade oh, you know I agree there! It's just easier to convince boosters that spam is bad than llms are bad, and thus easier to wedge into "so if thing primarily produces spam, and spam bad, we should be heavily skeptical of thing."
@pathunstrom No, of course, sorry for preaching to the choir. I'm just still mad at the original blog post for acting like the problems with AI are at worst the problems caused by spam.
But I absolutely take your point and agree.
-
@xgranade this is like when equifax got hacked for not doing security basics, then got away with lying to congress about it, then offered "free credit monitoring" to the affected people. if equifax can get away with it like that, and then later facebook with the whole cambridge analytica deal, anthropic has an IKEA-style how-to manual on how to do it next
-
@xgranade This is the pettiest of complaints but _right off the bat_ the author gets their folk history badly wrong.
I have a lot of complaints about the Eternal September framing in general, but forgetting that it was Eternal because it was the _general public_ getting access (through AOL, principally) and only September because it reminded old-timers of the annual new student rush, is appalling.
Grr argh.
-