RE: https://mamot.fr/@pluralistic/116219642373307943
I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth.
@glyph Why doesn’t he just use the word Luddite? Maybe because the Luddites were right and that would undermine his argument?
https://fedi01.unicornsparkle.club/@sabrina/statuses/01KHYTN01NRP79KJHFZ4QZC33Y
-
The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media shows that whatever this cognitive distortion is, it's widespread.
@glyph The "distortion" is from COVID: https://www.panaccindex.info/p/answered-does-covid-19-harm-the-brain
A facsimile/helper for _thinking_ seems pretty interesting if one suffers from brain fog, cognitive decline, neuro-inflammation, etc.
-
Two statements I believe are consistently correct:
(1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.).
(2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.
Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.
@dpnash @glyph
> “AI” usage will *consistently* create large amounts of “tech debt”
Um, no. There will be no technical debt in such products. Maintenance is too costly and the shop owners would be tied to some protein techie. They will soon pivot to #disposable #software
If some user files a bug, the whole thing will be generated anew with its prompt amended like "; make bug-description disappear". Possibly with a new UI/UX. For the better, because users will be trained not to report bugs but to make workarounds, since a bug report might force the protein serfs to endure a UX change...
-
Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.
Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.
They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.
@glyph it is nuts to dismiss the experience of a paint huffer
-
2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good.
> many high profile people in tech, who I have respect for, take absolutely unhinged risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology
Maybe they should have.
I also hate the LLM force-feeding, but even before they surged, the state of computing was becoming a smoldering wreck. Maybe those "leaders" just had bad judgment all along? IIRC most of them were either rubber-stamping or looking away from the IoT dumpster fire and organizing their curricula around the idea that users can't handle URLs responsibly.
-
@glyph Something that has gotten under my skin for the past year or so is seeing code changes like large refactors, porting a legacy tool to Rust, even minor bugfixes - things that would be a struggle to push through the inertia of code review - get fast-tracked when "the AI did it." Like the exact PRs I've written and tried to advocate for before eventually giving up. The changes and their risks are the same; I can only conclude that the bar is lower for accepting "AI" contributions.
-
What I've observed very recently is that even intelligent people, experienced developers - who know perfectly well that LLMs are just generators of text from statistical models of what someone is likely to write - will still pull up AI-written search results and proceed on the automatic assumption that whatever they say is correct.
That is not a general observation. That was this morning, with some senior programmers trying to solve a problem that's prolonging a code freeze.
-
I don't want to be a catastrophist but every day I am politely asking "This seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. Could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph you know what that reminds me of?
Bloodletting and handwashing
-
@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with
And yet Doctorow thinks LLMs are great for him to use for copyediting. Maybe find a less hypocritical person to quote. All Gen AI horrifies me; I visualize environmental destruction with every "prompt."
@kirakira @glyph
https://floss.social/@sstendahl/116220713455956161
-
@glyph Similarly, “hallucination” and “delusion” are pre-poisoned for use in this scope
I have on occasion made use of “phantasmagoria” around parts of this dynamic, especially for stuff like the droll “omg the AI is learning to lie to us, we’re cooked!” type bullshit posts, but that’s still not expansive enough to include the various other mental affectations
we need other perorations, and better perseverations alongside
@froztbyte @glyph maybe “AI mediated cognitive change”, subtypes “AI mediated cognitive enhancement”, “AI mediated cognitive decline”, and “AI mediated cognitive distortion”?
-
@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not quite right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.
@janeishly @glyph it is also very present in art: e.g. once you've seen a partial draft for something (generated), your idea is no longer yours - you're primed by a foreign version of your creation.
like watching a movie before reading the book it was based on.
-
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph i like to let them sort it out - ask the same question to like 3 models, sort of crude arbitrage
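A minimal sketch of that "crude arbitrage" idea, in Python; the `query` helper is hypothetical, a stand-in for whatever model client one actually uses:

```python
# Minimal sketch of "crude arbitrage": ask several models the same
# question and only keep an answer that a strict majority agrees on.
# `query` is a hypothetical stand-in for a real model client.
from collections import Counter

def query(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model`, return its answer."""
    raise NotImplementedError("wire up an actual model client here")

def arbitrage(prompt: str, models: list[str]) -> str | None:
    """Return the majority answer across models, or None on disagreement."""
    # Crude normalization; real answers rarely match verbatim.
    answers = [query(m, prompt).strip().lower() for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes > len(models) // 2 else None

# e.g. arbitrage("What year was X released?", ["model-a", "model-b", "model-c"])
```

Worth noting that majority voting only catches disagreement, not shared error: three models trained on similar data can all confidently repeat the same wrong answer.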
-
1. YES THEY ARE.
They are vibe-coding mission-critical AWS modules. They are generating tech debt at scale. They don't THINK that that's what they're doing. Do you think most programmers conceive of their daily (non-LLM) activities as "putting in lots of bugs"? No, that is never what we say we're doing. Yet, we turn around, and there all the bugs are.
With LLMs, we can look at the mission-critical AWS modules and ask, after the fact: were they vibe-coded? AWS says yes: https://arstechnica.com/civis/threads/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes.1511983/
@glyph While this is purely anecdotal, it's darkly comical that just yesterday, at work, a "chief architect" described their Claude Code setup as... "giving a monkey a machine gun"... with no irony or shame.
His point was very clearly that he wasn't sure he could trust his setup, but it was still certainly worth it for the perceived gains.
While I've not made many arguments pro/against LLM usage in general (based on how useful they are or aren't), this admission seemed really odd to me.
We're being asked to implement these tools in our workflows, but we're not given guidance on how to do so safely.
And I'm not against experimentation and learning new things--but I think that has its place within a certain context.
You want to give a monkey a machine gun? Well, find someplace safe to do so, and hope nobody gets hurt... but, like, why should I do the same thing?
-
@MrBerard @glyph Psychosis is a broad range? It covers a range of severities - most days, I read to those who don't know me as "kinda weird"; most don't think "schizo" - but on my worse days, I definitely read as psychotic.
But - from *my* side of that, the difference is not 'psychotic' or 'not psychotic', it's just a question of how high the volume & intensity is set. The voices haven't *stopped* - ever - since I was 13, for example.
That's an interesting example, because my understanding is that hearing voices is more common than people think, and often not accompanied by the symptom cluster that would lead to a psychosis diagnosis.
I think the problem is the underlying model for diagnostic criteria, which was already defective IMO even before AI complicated the picture.
Lexically, a single term blurs the nuances. For a broader, umbrella term, 'AI brainrot' seems more appropriate IMO.
-
@MrBerard @glyph my point being: a lot of the more minor oddities - changes to speech and writing patterns, being swayed more easily by nonsense, groundless beliefs defended disproportionately strongly in a manner resembling delusions being challenged, the cognitive backflips involved in preserving those beliefs against mounting contrary evidence, etc.
All read as potentially 'psychotic' to me - even in the tame case of "It's bad except this one little niche exception that I'll defend fiercely!"
Again, I am not disagreeing with this point, just with the practical utility of choosing to use the term based on it.
-
Could be sample bias, of course. I only loosely follow the science, and my audience obviously leans heavily skeptical at this point. I wouldn't pretend to *know* that the most dire predictions will come true. I'd much, much rather be conclusively proven wrong about this.
But I'm still waiting.
@glyph Very good analysis, thank you, I'll be passing this around
-
@happyborg oh no
-
@glyph I've been using "AI delusion" for these milder cases. As I understood it, "AI psychosis" pertains only to those cases where people fully lose their grasp of reality...
I've seen it used colloquially as "being wrong because of or about AI", but that always hit me like people calling someone "crazy" for doing something odd or impulsive—and that word use isn't really a good look imo.
@glyph Finished Doctorow's thread and... he spends so long arguing that he should be allowed to use an edgy analogy if it works well... but then it kinda really just doesn't work well in context?? He describes (granted, delusional, poorly analyzed) things that capitalism has been making people do forever, but now it's done with AI flavor, and he really wants to call that... psychosis? Like what.