I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
@glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?
-
@bluewinds @janeishly I don't know that I trust that subjective feeling of disgust either, even though it's definitely how I feel — a kind of aesthetic revulsion, which might be indicative of something real or might be another weird side-effect of these tools that interacts with a certain neurotype in a certain way. Definitely worth the precaution of turning it off though, and it does seem more aligned with the evidence we have at the moment.
@glyph @janeishly Oh, I'm a "certain neurotype," for sure.
I can report with objective certainty though that it was a net drain for my company - because I'm the most senior developer at the company, and making me unhappy with my job cost them several days' worth of lost productivity.
Was it "because the technology sucks" or was it "because BlueWinds hates it irrationally"? Either way, it cost the company thousands in wages (of me not doing anything via demotivation and revulsion at the thought of reviewing PRs).
-
@mason because medical practitioners were hard to convince of the impact. And they still don't do it as much as you think.
Science vs human belief, the belief usually wins
-
@jacob I've changed it as best I can, to really focus in on "LLM use" rather than "LLM users," and on the distinction between subjective experience and objective phenomena.
@glyph @jacob FWIW, in close relationships, I often end up in difficult situations because I communicate my opinions with such assuredness that the listener/receiver gets the sense that I am mocking or devaluing their opposing point of view.
(This dynamic has existed in my marriage for over 10 years, and it still creates friction, even though we are both aware of it!)
I don't like having to include qualifiers or disclaimers in things I say, as I think it is implicit that if I believe a certain thing--there is a reason I believe it--and if I am in discussion with you, and we disagree, I want to understand what evidence there exists to prove my understanding wrong. Is it a subjective experience? Is it evidence based on how something is recollected, or on some other 3rd party authority? Etc...
1/2
-
@glyph @jacob I don't think disagreements are bad. They are useful in guiding us toward new understanding... toward empathy... toward community.
But they can also be divisive... leading us into silos... and creating permanent rifts.
These days, I try to be very cognizant of how I come across, and sometimes insert the necessary disclaimers (e.g., From what I have observed... Based on my experience/recollection... My feelings about this might be wrong, but....) along with the "checking in" that Jacob alluded to earlier (e.g., Why do you feel that way?... Is it fair to say that you think X?...)
It's not foolproof... There are still failures in my personal relationships, and I even have a large abyss with a family member due to political differences...
But I do find that the blast radius is less severe when I'm cognizant of that, and reconciliation is easier if things go too far.
2/2
-
2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good
@glyph I heard nobody ever got fired for buying IBM.
-
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
I think you're misinterpreting what @pluralistic@mamot.fr means by "normal." He says: "Its uses and abuses are normal. That doesn't make it good, but it does make it unexceptional."
Radium paint was normal. It was also terrible. Poisoning workers and covering it up is not unprecedented, even if you do it with radiation. It's not some new weapon we have no ways of dealing with, just old, tired abuses not getting prosecuted and shut down as they must be.
What he means by "critic psychosis" is that every time you shout "AI is an incredibly powerful technology that can control people's brains and is more powerful than any brain control ever before!" it really starts to sound like you're promoting AI. Hyperfocusing on the dangers makes AI sound more badass than pathetic.
You're talking to these people as if they're not trying to ruin you in every way, as if they have a shred of human decency and don't actually want to cause as much profitable chaos and mayhem as possible. It's like warning the Boogaloo Boys that their actions might cause civil war, as if that wasn't already what they're trying to do.
Also, the difference with Radium paint is it only maims and kills people, so rich fucks aren't interested. It reduces the amount and the utility of available slaves for their pleasure. Calling forth the danger of the mythical brain-blasting AI, on the other hand, is music to their ears.
-
Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.
But I'm still waiting.
@glyph this thread needs to be an essay, and then a research hypothesis.
I very much feel like I’m watching the last 35 years of my ever-enshittifying social network exposure, sped up 10x and replayed.
In 1991 I remember having the flash of insight - without the life experience to really go into it deeply then - that the way nascent social network tech constrained and shaped interaction was going to force a mass cognitive adaptation for which we were not ready.
-
In 2021, we were still suffering the consequences of that, and still not sufficiently adapted to have avoided whatever the fuck is now driving our geopolitical dystopia engine.
And then suddenly our devolved capacity for social cognition had to deal with the fact that dealing with any humans at any distance far enough away that you couldn’t *lick* them came with no assurance that there even was a human there.
-
@delta_vee @kirakira @glyph Leaded gasoline.
@bluewinds @delta_vee @kirakira @glyph I don't think the analogies are good because asbestos is a fantastic insulator, lead is a really helpful additive for petrol and makes fantastic pigments and is really convenient for piping... and the hidden side-effects are the problem. Whereas LLMs _don't_ deliver that primary benefit
LLMs are more like... cheap laminate flooring, produced with wood pulp harvested unsustainably from old-growth forests and made by grossly exploited factory workers overseas... superficially convenient when remodelling your kitchen and rapidly ubiquitous but also quite unsatisfying and a right faff to work around once it's established
-
@bluewinds @delta_vee @kirakira @glyph this post is brought to you by our kitchen floor
-
@jackeric @bluewinds @kirakira @glyph Cheap laminate floors aren't a cognitohazard though (unless you're in interior design)

-
@jackeric @bluewinds @delta_vee @kirakira heh. I am not sure I 100% agree with your framing but all the analogies fall short (after all I do not think we have GOOD evidence that LLMs do any of these things, just hints) and this is an interesting contribution to the pile. but I definitely was thinking "wow it sounds like jack is thinking about laminate flooring really hard" the whole time I was reading it
-
If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.
@glyph
Here's an industrial accident that's easy to miss: A hydraulic fluid line bursts while you're working on a machine, injecting toxic and/or hot liquid under your skin at high pressure.
https://en.wikipedia.org/wiki/High_pressure_injection_injury
"Although the initial wound often seems minor, the unseen, internal damage can be severe. With hydraulic fluids, paint, and detergents, these injuries are extremely serious as most hydraulic fluids and organic solvents are highly toxic."
-
1. YES THEY ARE.
They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.
With LLMs, we can look at the mission-critical AWS modules and ask after the fact, were they vibe-coded? AWS says yes https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/
Having read over Doctorow's rant-du-jour twice now, I do think when he said "they" were not vibe-coding mission-critical AWS modules, he was referring to the "they" in the previous paragraph, being developers he's spoken to, some of whom were friends he knows well.
So.... could be very differently skilled people from "some hack in a code assembly shop driving at a reckless pace because Amazon stock needs a bump".
It's all back to, though, defining "AI".
-
@dec23k okay definitely not clicking on that link, yeesh
-
@johannab yeah, I get that; what I am suggesting is that Cory is not auditing their work, he is depending on self-reports of their efficacy in using these tools. And those self-reports are highly dubious, and I've watched people be wrong over and over again as they attempted to assess their own LLM-augmented performance.
-
@johannab So yes, maybe his contacts are transcendentally better programmers than mine, and they've ascended to a plane of subjective self-assessment beyond mere mortals, but if they're anything like the (extremely skilled, extremely experienced) people I've watched fall into this trap, I'm highly skeptical
-
@johannab the AWS link was to showcase that even AWS itself can't prevent vibe-coding their mission-critical modules, and presumably a few skilled practitioners work there.
-
@glyph Fair, for sure.
I just realized when reading it over that there was a spot where a disconnect could arise over which "they" is being referred to in the essay narrative as written.
I feel like my immediate, 1-degree friends, acquaintances and colleagues include amongst them all the theoretical levels of self-awareness we could speak to, and indeed, *I* can't tell one from the other without more examination of context.