I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see through.
Because the interesting, novel thing about LLMs, the thing that makes them dangerous, is that they are, by design, epistemic disruptors.
They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.
I have ADHD. Which means I am experienced in this process of self-denial. I have time blindness. I run an app that tells me how long I've been looking at other apps, because if I trust my subjective perception, I will think I've been looking at YouTube for 10 minutes instead of 4 hours. Every day I need to deny my subjective feelings about how using software is going, in order to function in society.
-
I have ADHD. Which means I am experienced in this process of self-denial. I have time blindness. I run an app that tells me how long I've been looking at other apps, because if I trust my subjective perception, I will think I've been looking at YouTube for 10 minutes instead of 4 hours. Every day I need to deny my subjective feelings about how using software is going, in order to function in society.
This disability gives me a superpower. I'm Geordi with the visor, able to see what everybody else's regular eyes are missing. This is basically where the idea for https://blog.glyph.im/2025/08/futzing-fraction.html originally came from: since I already monitor my time use, I noticed that my time in LLM apps was WAY out of whack, consistently at "hyperfocus" levels of time use, without any of the subjective impression of engagement or pleasure. Just dull frustration and surprising amounts of wasted time.
-
This disability gives me a superpower. I'm Geordi with the visor, able to see what everybody else's regular eyes are missing. This is basically where the idea for https://blog.glyph.im/2025/08/futzing-fraction.html originally came from: since I already monitor my time use, I noticed that my time in LLM apps was WAY out of whack, consistently at "hyperfocus" levels of time use, without any of the subjective impression of engagement or pleasure. Just dull frustration and surprising amounts of wasted time.
The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
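For concreteness, here is a minimal sketch of what that kind of passive check could look like; the app names, the sampling scheme, and the `FutzingTracker` class are hypothetical illustrations for this thread, not something from the linked article or any particular tracking tool:

```python
# Hypothetical sketch: passively tally how much of a project's clock time is
# spent in LLM tools, so you can check the "futzing fraction" against a record
# instead of against your subjective impression of how it went.
# The app names and the FutzingTracker class are illustrative assumptions.
import time
from dataclasses import dataclass, field
from typing import Optional

LLM_APPS = {"ChatGPT", "Claude", "Copilot Chat"}  # assumed window/app names


@dataclass
class FutzingTracker:
    llm_seconds: float = 0.0
    total_seconds: float = 0.0
    _current_app: Optional[str] = None
    _last_tick: float = field(default_factory=time.monotonic)

    def tick(self, active_app: str) -> None:
        # Call whenever focus changes (or on a timer): the interval since the
        # last call is attributed to whatever app was focused during it.
        now = time.monotonic()
        elapsed = now - self._last_tick
        self._last_tick = now
        if self._current_app is not None:
            self.total_seconds += elapsed
            if self._current_app in LLM_APPS:
                self.llm_seconds += elapsed
        self._current_app = active_app

    @property
    def futzing_fraction(self) -> float:
        # Share of tracked time spent in LLM tools rather than on the work.
        return self.llm_seconds / self.total_seconds if self.total_seconds else 0.0


# Feed it focus samples from whatever window-tracking tool you already run.
tracker = FutzingTracker()
tracker.tick("Claude")       # focus lands on an LLM app
time.sleep(1)
tracker.tick("Terminal")     # a second later, focus moves to the actual work
print(f"futzing fraction so far: {tracker.futzing_fraction:.0%}")
```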
-
Two statements I believe are consistently correct:
(1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.)
(2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.
Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.
@glyph I should add: I am being careful to say “produces”, not “writes”. It is becoming clear that even if we grant that the pre-LLM bottleneck was developer code-authoring speed, in LLM-heavy workflows, the bottleneck is now “verify that this code is ready to deploy”. This is partly because there is so much more code coming in, but even more because far fewer people have any depth of understanding of the code being PR’ed. *All* the incentives lead to people saying “LGTM, it passes tests, ship it.”
-
@glyph I should add: I am being careful to say “produces”, not “writes”. It is becoming clear that even if we grant that the pre-LLM bottleneck was developer code-authoring speed, in LLM-heavy workflows, the bottleneck is now “verify that this code is ready to deploy”. This is partly because there is so much more code coming in, but even more because far fewer people have any depth of understanding of the code being PR’ed. *All* the incentives lead to people saying “LGTM, it passes tests, ship it.”
@dpnash I am, as always, open to seeing real evidence that this is not the case. However, everything I've seen and heard thus far tells me that it is.
Your point (1) could be factually disputed, although I think it would be hard to prove, but your point (2) is just… logically necessary, I think. I cannot imagine ramming the code through a human brain, at the rate an LLM produces it, thoroughly enough to actually understand it.
-
@dpnash I am, as always, open to seeing real evidence that this is not the case. However, everything I've seen and heard thus far tells me that it is.
Your point (1) could be factually disputed, although I think it would be hard to prove, but your point (2) is just… logically necessary, I think. I cannot imagine ramming the code through a human brain, at the rate an LLM produces it, thoroughly enough to actually understand it.
@dpnash I mean, heck, the whole concept of the very popular problem of "NIH" (Not Invented Here) is that code *already exists* and we *could* use it, but we don't use it *because writing it ourselves is an easier way to understand it*!
-
The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media shows that whatever this cognitive distortion is, it's widespread.
@glyph i have made the analogy before that the llm thing, in and out of tech, feels like the closest thing i could imagine to a metaphoric zombie apocalypse type scenario
-
The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.
But I'm still waiting.
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with
-
The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
@glyph my employer mandates AI tool usage and I have been developing software for 15+ years. I also feel quite strongly that rather than a productivity boost, what you actually get is sucked into a time vortex for hours; it *feels* productive, but actually you saved no time at all. In fact you probably spent more time, not less!
-
The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
@glyph
I think there was a study about programmer productivity with LLMs that found that it's ~20% lower while subjectively being reported as ~20% higher? I should have bookmarked it...
-
If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.
@glyph I've been using "AI delusion" for these milder cases. As I understood it, "AI psychosis" pertains only to those cases where people fully lose their grasp of reality...
I've seen it used colloquially as "being wrong because of or about AI", but that always hit me like people calling someone "crazy" for doing something odd or impulsive—and that word use isn't really a good look imo.
-
@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with
-
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph Glyph's Basilisk > Roko's
-
@glyph Honestly - speaking as someone with a psychotic disorder, but who is not a medical professional - "AI psychosis" seems pretty appropriate, from the behaviours I've seen it result in? Even in more mild cases of people babbling inane bullshit, but not like, so far off reality that they're at risk of physical harm (to themself or others)
AI psychosis is an appropriate term for the cases that stray into actual psychosis, for sure.
The point here is that foregrounding these cases glosses over all the more subtle cases of affecting the user's perception of reality, and these are far more dangerous, if only by their sheer numbers.
-
AI psychosis is an appropriate term for the cases that stray into actual psychosis, for sure.
The point here is that foregrounding these cases glosses over all the more subtle cases of affecting the user's perception of reality, and these are far more dangerous, if only by their sheer numbers.
... Cory being perhaps a case in point.
-
@glyph I'm sure the mechanism - how they got there - has more in common with emotional abuse and brainwashing/indoctrination techniques.
But the end result - that detachment from reality - is kind of the core of the psychosis experience, and trying to find ways to keep tethered and avoid drifting off into wonderland like that is a persistent part of my day-to-day life.
Which is part of *why* I avoid the chatbots like they're carrying the plague.
@miss_rodent to be clear I think that the existing cases that have been described are fairly accurately described as AI psychosis, a bunch of them fall into the clinical definition even. my point is that I think it falls on a continuum and there are a wide variety of less dramatic cognitive distortions which don’t look like psychosis but are caused by AI. case in point, the scientific american article, “having one’s views on an issue gently nudged by an autocomplete suggestion” isn’t “psychosis”
-
AI psychosis is an appropriate term for the cases that stray into actual psychosis, for sure.
The point here is that foregrounding these cases glosses over all the more subtle cases of affecting the user's perception of reality, and these are far more dangerous, if only by their sheer numbers.
@MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird", most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.