I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph Glyph's Basilisk > Roko's
-
@glyph Honestly - speaking as someone with a psychotic disorder, but who is not a medical professional - "AI psychosis" seems pretty appropriate, from the behaviours I've seen it result in? Even in the more mild cases of people babbling inane bullshit, but not being, like, so far off reality that they're at risk of physical harm (to themself or others).
AI psychosis is appropriate to the cases that stray into psychosis, for sure.
The point here is that foregrounding these cases glosses over all the more subtle cases of affecting the user's perception of reality, and these are far more dangerous, if only by their sheer numbers.
-
... Cory being perhaps a case in point.
-
@glyph I'm sure the mechanism - how they got there - has more in common with emotional abuse and brainwashing/indoctrination techniques.
But the end result - that detachment from reality - is kind of the core of the psychosis experience, and trying to find ways to keep tethered and avoid drifting off into wonderland like that is a persistent part of my day-to-day life.
Which is part of *why* I avoid the chatbots like they're carrying the plague.
-
@miss_rodent to be clear, I think that the existing cases that have been described are fairly accurately described as AI psychosis; a bunch of them even fall into the clinical definition. my point is that I think it falls on a continuum, and there are a wide variety of less dramatic cognitive distortions which don't look like psychosis but are caused by AI. case in point, the Scientific American article: "having one's views on an issue gently nudged by an autocomplete suggestion" isn't "psychosis"
-
@MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird"; most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.
-
@glyph
I think there was a study about programmer productivity with LLMs that found that it's ~20% lower while subjectively being reported as ~20% higher? I should have bookmarked it...
@sabik uh I think that’s the METR one? IIRC not the best methodology but it’s still a kinda interesting result and well worth pursuing further https://arxiv.org/abs/2507.09089
-
@glyph my employer mandates AI tool usage and I have been developing software for 15+ years. I also feel quite strongly that rather than a productivity boost what you actually get is sucked into a time vortex for hours and it *feels* productive but actually you saved no time at all. In fact you probably spent more time not less!
@svines some folks have found my post persuasive to their management and it has helped loosen or eliminate some mandates. it’s not advice to eliminate the mandate but just some rubrics for validating its effectiveness; not everyone is receptive but it might be worth a try?
-
@glyph I love the enthusiasm but I'm a cog in a fortune500 and this decision was made about many levels above my pay grade. I don't think I can convince my boss, their boss and their boss to commit career suicide in the current climate

-
@svines you obviously know your role and your relationship to your org better than I do :). but this COULD be pitched in a very non-career-suicidal way, i.e.: “hey boss I love the great-great-grandboss’s AI mandate but wouldn’t it be so cool if we had some actual DATA to show how productive it is making our team? I found this formula online…”
-
@MrBerard @glyph my point being; a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc.
All read as potentially 'psychotic' to me - even in the tame case of 'It's bad except this one little niche exception that I'll defend fiercely!'
-
Cory also correctly points out that "AI psychosis" is probably going to be gatekept by medical establishment scicomm types soon, because "psychosis" probably isn't the right word and already carries an unwarranted stigma. And indeed, I think the biggest problem with "psychosis" as a metaphor is going to be that the ways in which AI can warp our minds are mostly NOT going to be catastrophic psychosis, and are not going to have great analogs in existing medical literature.
@glyph LLMs seem to use many of the same techniques as mentalists, psychics, fortune-tellers and mediums in how they manipulate their victims - suggestion, cold reading, flattery, confidence - playing on the victim's confirmation bias and suggestibility. People are influenced, by the politeness and the well-structured text, into ignoring factual issues, and then, by having a conversation, they fix the glaring problems themselves and later attribute the fix to the model.
-
@MrBerard @glyph (poverty of speech, flat affect, disorganized speech/thought, delusions, reduced attention, brain fog, disorientation, confusion, etc. all being pretty common psychosis features - all coming in various degrees, many of which LLM folks seem to exhibit pretty commonly.)
-
The suggestion that the article makes is all about passive monitoring of the amount of time that your LLM projects *actually* take, so you can *know* if you're circling the drain of reprompting and "reasoning". Maybe some people really *are* experiencing this big surge in productivity that just hasn't shown up on anyone's balance sheet yet! But as far as I know, nobody bothers to *check*!
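[A minimal sketch of what that passive time-tracking could look like - all names here are hypothetical, it's just "log the wall-clock time of each LLM-assisted task next to your up-front estimate, so you can compare later":]

```python
import time
from contextlib import contextmanager

# Hypothetical sketch: record how long an "AI-assisted" task actually takes,
# so estimates ("the LLM will knock this out in 5 minutes") can be checked
# against wall-clock reality instead of vibes.
task_log = []

@contextmanager
def timed_task(name, estimated_minutes):
    start = time.monotonic()
    try:
        yield
    finally:
        actual_minutes = (time.monotonic() - start) / 60
        task_log.append({
            "task": name,
            "estimated_min": estimated_minutes,
            "actual_min": round(actual_minutes, 2),
        })

# Usage: wrap each prompting/re-prompting/review session in the timer.
with timed_task("add CLI flag via LLM", estimated_minutes=5):
    pass  # ...the actual prompt -> review -> fix loop goes here

print(task_log)
```

The point isn't the tooling; it's that the log accumulates passively, so after a few weeks you have estimated-vs-actual data rather than a subjective impression of speed.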
@glyph I like your breakdown in those articles.
I think that some of the more valuable stuff has been not when juniors prompt and don’t get value, but when seniors prompt, go do something else for a bit while the machine churns for a couple of minutes, and then come back to something that is pretty close to a good solution.
Think about a thing that might take you 15 minutes to kinda menially do (add some CLI bool flag that then needs to get passed down 3 layers in some spot, for example)
-
@glyph lowering of activation energy is how I see that. And while I agree that the futzing is way undercounted (and that, IMO, a lot of this falls over in longer sessions and is just not worth it)… a strong dev who knows exactly what the solution is supposed to look like can get paper cut-y stuff cleaned up. A lot.
The “whine on slack about a thing being busted” turns into a fix, and most of that you can just go get a cup of water or review something in the meantime. Cool party trick at least
-
@glyph totally to your point tho… the party trick might just be that. It feels fun to have progress happen when laundry is being folded but in the end I might end up churning anyways
-
@glyph
Thanks, that's the one!
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph while I am not aware of any study showing the poisonous character of LLMs, two items are already proven:
1. LLMs' detrimental effects on software development outweigh their benefits. Google's DORA report has now shown, multiple years in a row, that LLM use in SW dev decreases performance and outcomes in most teams.
2. Abuse for malicious intent is rampant, yielding scary propaganda, misinformation, and distraction campaigns, and intensifying the threat from social engineering attacks.
-
@glyph yeah true. I am in charge of setting OKRs for my team, so productivity etc. is part of that. Another guerrilla tactic I thought about was asking our legal team what their thoughts on AI-generated code are now that the US Supreme Court has refused to hear an appeal to "AI code can't be copyrighted" - that potentially means our company no longer has copyright protection, given how much vibe-coded stuff is around now.
-
@nils_berger have you got a link for that report?
-
@raphael Believe me, I understand the appeal of the hit of dopamine to get moving when one is stuck. I really want a tool that can do that for me, but I would like to know what other effects it has, and whether it's going to be a net detriment.