I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
1. YES THEY ARE.
They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.
With LLMs, we can look at the mission-critical AWS modules and ask, after the fact, were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/
@glyph While this is purely anecdotal, it's darkly comical that just yesterday, at work, a "chief architect" described their Claude Code setup as ... "giving a monkey a machine gun" ... with no irony or shame.
His point was very clearly that he wasn't sure he could trust his setup, but it was still certainly worth it for the perceived gains.
While I've not made many arguments pro/against LLM usage in general (based on how useful they are or aren't), this admission seemed really odd to me.
We're being asked to implement these tools in our workflows, but we're not given guidance on how to do so safely.
And I'm not against experimentation and learning new things--but I think that has its place within a certain context.
You want to give a monkey a machine gun? Well, find someplace safe to do so, and hope nobody gets hurt... but, like, why should I do the same thing?
-
@MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird"; most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that - the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.
That's an interesting example, because my understanding is that hearing voices is more common than people think, and often not accompanied by the symptom cluster that would lead to a psychosis diagnosis.
I think the problem is the underlying model for diagnostic criteria, which was already defective IMO even before AI complicated the picture.
Lexically, a single term blurs the nuances. For a broader, umbrella term, 'AI brainrot' seems more appropriate IMO.
-
@MrBerard @glyph my point being: a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc.
All read as potentially 'psychotic' to me - even in the tame case of "It's bad except this one little niche exception that I'll defend fiercely!"
Again, I am not disagreeing with this point, just with the practical utility of choosing to use the term based on it.
-
Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.
But I'm still waiting.
@glyph Very good analysis, thank you, I'll be passing this around

-
@happyborg oh no
-
@glyph I've been using "AI delusion" for these milder cases. As I understood AI psychosis it pertains only to those cases where people fully lose grasp of reality...
I've seen it used colloquially as "being wrong because of or about AI", but that always hit me like people calling someone "crazy" for doing something odd or impulsive—and that word use isn't really a good look imo.
@glyph Finished Doctorow's thread and... he spends so long arguing that he should be allowed to use an edgy analogy if it works well... but then it kinda really just doesn't work well in context?? He describes (granted, delusional, poorly analyzed) things that capitalism has been making people do forever, but now it's done with AI flavor, and he really wants to call that... psychosis? Like what.
-
Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.
Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.
They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.
@glyph “You must dismiss all experiences of LLM users”
This is where you lose me. There’s no universe in which I’m comfortable dismissing the lived experiences of people that categorically. The most important lesson I’ve learned from decades of activism is “believe people when they tell you about their experiences” — and I see no reason to change now. I’m not willing to give up my curiosity and empathy and I hope you aren’t either.
-
@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph From everything I've seen, there's some kind of metacognitive subversion and/or corrosion going on - it's the throughline I see from the METR dev study through the LSAT confidence one to the recent "cognitive surrender" paper. Any kind of sustained exposure just obliterates the normal self-regulation and self-evaluation.
-
@pythonbynight Cory's bit on the byzantine premium echoed your thread in some way. "All this money can't be for nothing, all these people can't be so irrational, there has to be something under that pile of crap."
-
@jacob@social.jacobian.org @glyph@mastodon.social I think I'm currently at a point in my journey where I try very hard to believe people when they talk about what they have experienced internally, and have become increasingly sceptical of people's ability to judge accurately what actually happened and what the results were (in both cases for pretty much the same reasons as Glyph, since I've noticed the gap between my #adhd internal experience and what actually happened in the real world).
So "using an LLM made me feel like a god-like developer!" I'll completely take as your experience. "My productivity went up by 15 times after I started using agents" (an actual claim I have seen) will leave me asking for hard evidence and possibly a scientific study.
It's awkward that we use 'experience' to cover both, and I had the same reaction you're expressing when I read that section, but assuming (from the context) that Glyph means the second type of experience, I think he has a strong argument, if not the clearest wording.
-
@delta_vee @kirakira @glyph Leaded gasoline.
-
@jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!
There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …
-
@jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!
There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …
1. The distance between an account of an experience of oppression and the actual event of the oppression is very short. "That man assaulted me" / "that cop beat me". The only way to think that people saying these things are not relaying true information is to believe that they are intentionally lying for personal gain, which just isn't true. (And that's not what I believe about LLM users.)
-
2. The filter of oppression *itself* means we only hear the accounts of people who are not only probably telling the truth in the first place but had to push through aggressive filtering to even get heard. If you hear one complaint of police violence or SA there's probably hundreds more where that came from. That also doesn't apply. The archetypical LLM user is not silenced by oppression, they're being massively amplified by the largest propaganda apparatus on earth.
-
@ddelemeny @glyph Yup, I read that and smirked ...
Again, "investing" in open source tooling that will speed up your CI/CD is almost a no-brainer for an organization. They spend zero dollars and reduce the costs/risks associated with the problem that the tool is designed to solve. But even then, there are security risks based on supply chain/dependencies that are often scrutinized to no end.
Investing in LLM tooling is supposedly "cheap" (due to subsidies), but the risks include vendor lock-in, security vulnerabilities, and weakening worker autonomy (among others). But there seems to be zero scrutiny in spite of that.
-
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
@glyph You're so extremely full of yourself that I didn't even finish reading your comment, and I no longer care about anything you have to say. Go touch grass.
-
@jacob Consider another type of "lived experience" — the racist who says "DEI took my job". It would be a mistake to think that this person is *lying* about their experience — they are clearly motivated to their racism by genuine animus, and maybe they did lose their job — but their indirect, abstract experience of the nebulous entity of "DEI" is not reliable, particularly not in terms of employment statistics. So we are more skeptical in that case, and we look at the numbers.
-
@ddelemeny @glyph Early on at my current job, I built a tool that I thought was very useful and mentioned that I would like to open source it...
I was ultimately shut down. In the interest of "intellectual property" and other sorts of red tape... And I didn't really feel like fighting it.
So, I couldn't share my tool with the commons, but there are absolutely no qualms about feeding my code to a company that WE PAY, so they can ingest it and charge others for benefitting off of it? ...
Sigh...
-
@jacob Perhaps "dismiss" wasn't the best word choice there, but that's why I included "even if the LLM user is yourself". I dismiss _my own_ experience of LLMs, _as evidence of their quantitative efficacy_. As evidence of their subjective experience, of course it is valid. If it didn't produce the intense subjective experience then there wouldn't be a problem!
There are two reasons that activism teaches us to believe people's lived experience, and neither apply here: …
@glyph You’ve reasoned yourself into a position where anyone who says anything contrary to you is either delusional or lying. You might be right — I don’t think you are but who knows maybe — but even so, that’s just not a position I’m willing to take about anything ever.