@hanshuebner @matt "Capitalism is already producing bad things so we should just accelerate that"

RE: https://toot.cafe/@baldur/116239014761650611
@hanshuebner @matt I think this article really addresses the disagreement at hand here:
-
It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.
How about this: some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh, and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.
But sure, no, it's really because we mourn the loss of our hobby.
@plexus Luckily it doesn't make coding several orders of magnitude more energy intensive. Check out this guy's blog: https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
He's a heavy Claude Code user (multiple sessions in parallel), got fed up that all estimates are about the "median user" or "median query", and benchmarked his own use. He worked out that the energy cost of his CC use is about the same as working on a high-end desktop PC instead of a laptop. It's not even one order of magnitude more.
If you want to convince people you're not just mourning the "loss" of a hobby that's more accessible now than it's ever been, don't try to justify it with climate misinfo.
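A back-of-envelope version of the desktop-vs-laptop comparison in the linked post might look like the sketch below. The wattages and hours are illustrative assumptions for the sake of the arithmetic, not the measured values from the blog post.

```python
# Rough sketch of the "desktop instead of laptop" energy comparison.
# All numbers below are assumptions, not measurements.

laptop_watts = 30        # assumed laptop draw under a coding workload
desktop_watts = 150      # assumed high-end desktop draw
hours_per_day = 8

extra_kwh = (desktop_watts - laptop_watts) * hours_per_day / 1000
ratio = desktop_watts / laptop_watts

print(f"extra energy per day: {extra_kwh:.2f} kWh")   # → 0.96 kWh
print(f"desktop / laptop ratio: {ratio:.1f}x")        # → 5.0x
```

Under these assumed numbers the difference is a handful of kilowatt-hours per day and a single-digit multiplier — within one order of magnitude, which is the shape of the claim being made.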
-
@matt @hanshuebner I don't disagree with you, but the viewpoint you're putting forth is related to what the OP said this is not about. There is both a fundamental information-theoretic reason and abundant empirical evidence that LLM-extruded code is of low quality.
OP's point was that this is not primarily about concern for the craft (like handmade furniture vs factories) but about the extremely harmful externalities of using LLMs to extrude code, a big one of which is quality and the safety consequences that follow from it.
@dalias Point taken. And I do care about the harms. Perhaps, in trying to explain why I think LLM-extruded code is a bad idea in principle, I got too close to obsessing over the craft for its own sake. Unlike the people who mourn the death of the craft, I want to *fight* the rise of LLM-extruded code, not resign myself to it.
-
@dalias @matt I live in capitalism as a software developer. I don't get to choose what tools I use, I'm getting paid to do the work. I can change my profession, or I can pick up what I need to know in order to sustain myself. This is me personally.
Then: LLMs create code that is comparable to human-written code in that frame of reference. There is better code, but there is also much worse.
Finally: LLMs create shitty prose, shitty images and shitty music. I hate all of that.
-
@hanshuebner @dalias If LLMs create shitty prose, images, and music, why is code the exception? Simply because that's the area that we work in and we're afraid of losing our jobs? (I admit I'm not immune to that fear.)
-
@plexus Translators are hearing this all the time too (with a side helping of "you just hate technology", which I'm assuming devs don't get!). No, we just want the job done right.
If we'd realised earlier that clients would accept any old shit provided it looked like roughly the right language, we'd all have made a lot more money.
-
@hanshuebner @dalias The details still matter though. The same lack of attention to detail that makes LLM prose, images, and music shitty, will come back to bite us, or the people affected by our work, sooner or later, in the form of defects. So I'd rather give each detail the attention it deserves, by writing the code myself, than roll the dice and find out later that some detail in that mass of LLM-extruded code was wrong -- possibly subtly wrong, in a way that's easy to miss in review.
-
@hanshuebner @can @plexus Actually, you are doing great putting my exact feelings into words. Thanks for that!
-
@matt @dalias You are absolutely right, but here's the thing: code review also does not prevent subtle bugs from creeping into the code base when humans write the code. Review is just one of the tools that ensure software quality.
This is to say that code written by LLMs and by humans suffers from similar issues, requires similar care and review, and can fail in similar ways. There is more LLM code, though, and there are new challenges, because scaling with LLMs works differently than with humans.
-
@hanshuebner @dalias Isn't it obvious, though, that the risks are higher when you have an LLM generate code statistically from a natural-language prompt, as opposed to writing the code and paying attention to every detail yourself?
-
@matt @dalias Statistically, you will have more bugs because you have more software. But you can also easily create tests, refactor, and make requirements executable.
Making good software with LLM support is hard work and takes time. If you look at the stuff that people make with three prompts and then post to LinkedIn, you know what I mean.
A good program requires attention to detail, no matter what the tool does for you.
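An "executable requirement" here just means a requirement stated as a test that any implementation — human- or LLM-written — has to pass. A minimal sketch, where `apply_discount` is a hypothetical function invented for the example:

```python
# Sketch of an executable requirement: "a discount never makes a
# price negative", written as a test rather than prose.
# `apply_discount` is a made-up example function, defined inline.

def apply_discount(price: float, percent: float) -> float:
    """Reduce price by percent, clamping the result at zero."""
    return max(price * (1 - percent / 100), 0.0)

def test_discount_never_negative():
    # The requirement, checked over a few representative inputs.
    for price in (0.0, 9.99, 100.0):
        for percent in (0, 50, 100, 150):
            assert apply_discount(price, percent) >= 0.0

test_discount_never_negative()
print("requirement holds")
```

The point is that the test stays the same regardless of who or what writes the implementation; it encodes the detail you care about once, in checkable form.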
-
@hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.
But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.
(1/2)
-
@hanshuebner @dalias So then why do it with an LLM as opposed to the hard work of writing the code directly? Is it just to appease capital's irrational demands?
-
Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but does provide one useful insight.
If one treats an LLM as a person, then the fundamental issue is:
They are a bullshit artist with a huge library. They do not have competence at anything except bullshitting, at which they are superb.
I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.
-
@plexus Because AI did not create a programming language, because AI did not create a compiler, because AI did not create a linker, AI cannot create software.
-
@hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
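One cheap guardrail of the kind being discussed is differential testing: checking a generated function against a trusted reference on many random inputs. A sketch, where `llm_sort` is a stand-in for code an LLM might have produced (here just a hand-written insertion sort):

```python
# Sketch of a differential-testing guardrail: compare a suspect
# implementation against a trusted oracle on random inputs.
# `llm_sort` is hypothetical stand-in code, not real LLM output.
import random

def llm_sort(xs):
    # pretend this came from an LLM; it's a plain insertion sort
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

random.seed(0)  # deterministic so failures are reproducible
for _ in range(200):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert llm_sort(xs) == sorted(xs), f"mismatch on {xs}"
print("guardrail passed")
```

This catches gross errors cheaply, but it is exactly the kind of extra scaffolding the post is talking about: effort spent constraining the tool rather than writing the code, and it still won't catch every subtle defect.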
-
@hanshuebner @flooper @plexus I work for a living and try to avoid dishonesty while doing so.
Since I understand that LLMs are fundamentally and inherently dishonest, that doesn't leave much wiggle room for me.