I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me.
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph my hypothesis on that is this: because they are literally encodings of lexical fields and semantic proximity, and because their response is the statistically likely continuation of the user's input, LLMs pick up on and amplify subtle tendencies and biases in the user. If you feed one input whose vocabulary and idioms are semantically linked to low self-esteem, the model is more likely to compute a reply with similar undertones, feeding that emotion. They amplify whatever emotion you put in, even accidentally.
(thread here: https://tech.lgbt/@nicuveo/116210599322080105 )
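A minimal sketch of one way to probe that hypothesis, assuming gpt2 as the generator and Hugging Face's default sentiment classifier as the probe; both model choices are illustrative assumptions, not anything specified in the thread:

from transformers import pipeline

# Illustrative sketch: does a base LM's continuation echo the emotional
# register of its prompt? Model choices here are assumptions for
# demonstration, not a rigorous study design.
generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

prompts = [
    "I keep messing everything up; I'm probably not cut out for this.",
    "Today went really well and I'm proud of what I got done.",
]

for prompt in prompts:
    samples = generator(prompt, max_new_tokens=40, do_sample=True,
                        num_return_sequences=3)
    # generated_text includes the prompt; strip it to score only the reply
    continuations = [s["generated_text"][len(prompt):] for s in samples]
    print("prompt:", sentiment(prompt)[0])
    for score in sentiment(continuations):
        # If the amplification hypothesis holds, continuations of the
        # negative prompt should skew NEGATIVE more often than
        # continuations of the positive one.
        print("  continuation:", score)
-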
The "critic psychosis" thing is tedious and wrong for the same reasons Cory's previous "purity culture" take was tedious and wrong, a transparent and honestly somewhat pathetic attempt at self-justification for his own AI tool use for writing assistance. Which is deeply ironic because it pairs very well with this Scientific American article, which points out that pedestrian "writing AI tools" influence their users in subtle but clearly disturbing ways. https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/
@glyph@mastodon.social Cory has outsized influence considering his role as AI ambassador. His writings for the past few years reek of AI Slop. Book after book of rehashes of the same topic. I stopped buying his books.
-
@glyph i don't know if it's the best analogy at the end of the day, but my brain keeps going to lead pipes and asbestos. if we're not sure it's safe, should we be in such a hurry to put it in everything?
-
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph This basilisk thing (great imagery) is very true in translation. Once you've seen the MT suggestion, with its wonky syntax and not-quite-right tone, it's very hard to dismiss it. The cognitive load is consequently enormous.
-
@glyph I'm honestly wondering just how much undiagnosed long COVID is playing into this.
I'm slowly recovering now, well, as much as I can, but at the time I was painfully aware that weird stuff was happening to my brain, because I got caught in the first wave in March 2020.
So I am wondering if the addictive effects of using these LLMs, along with existing cognitive damage, are a partial cause.
-
@nils_berger have you got a link for that report?
@glyph @nils_berger
this study argues that it encourages cognitive outsourcing on a new level, which over the long term could result in getting used to less cognitive activity, at least for certain tasks. Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
-
RE: https://mamot.fr/@pluralistic/116219642373307943
I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth.
@glyph it's difficult to understand why anyone with Cory's reputation would decide to die on such a ridiculous hill.
-
@nils_berger have you got a link for that report?
@glyph @nils_berger
i think most people are just referring to these blog posts:
DORA | Balancing AI tensions: Moving from AI adoption to effective SDLC use
DORA is a long running research program that seeks to understand the capabilities that drive software delivery and operations performance. DORA helps teams apply those capabilities, leading to better organizational performance.
(dora.dev)
-
@crazyjaneway @glyph We had a client use it to give themselves permission to spam out their new thing, after we'd explained (and their local IT guy had also explained) that if they did that on our servers, we'd lock their account.
Which we then did. The client said, "ChatGPT said I could do it". The sycophancy combined with overconfidence is utterly frightening.
I don't particularly like it when my friends use it in their communication with me either.
AI and that Guy at the bar
In tech we've always had evangelists, whether it's for FOSS, or Blockchain, or now AI. It's a natural thing to do. You have a tech you'r...
cobbles (dotart.blog)
-
@nils_berger have you got a link for that report?
This is the link to download it:
DORA | State of AI-assisted Software Development 2025
(dora.dev)
Not sure if there's a mirror
-
For me, this is the body horror money quote from that Scientific American article:
"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"
So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.
If you can see it, the basilisk has already won.
@glyph don't look at it!
Or even better, the Doctor Who version:
-
RE: https://mamot.fr/@pluralistic/116219642373307943
I wish I could recommend this piece more, because it makes a bunch of great points, but the "normal technology" case feels misleading to me. It's not _wrong_, exactly, but radium paint was also a "normal technology" according to this rubric, and I still very much don't want to get any on me and especially not in my mouth.
@glyph Why doesn’t he just use the word Luddite? Maybe because the Luddites were right and that would undermine his argument?
Phie Lux (@sabrina@fedi01.unicornsparkle.club)
Imagine if, at the start of the Industrial Revolution, we as a species had paused and asked ourselves what the ethical implications are and what the possible and present harms could be. Maybe we could have avoided the worst excesses of modern society like pollution, increasing inequality, overconsumption, climate change, fascism, and social atomization. If we are truly at the start of another such technological revolution, maybe we should learn from history and not dive head first into it. Especially when we know a lot of the ethical issues and real harms already. It seems plainly foolish to look at the harm we’ve done to ourselves with the last technological revolution and decide to just double down on it.
(fedi01.unicornsparkle.club)
-
The very fact that things like OpenClaw and Moltbook even *exist* is an indication, to me, that people are *not* making sober, considered judgements about how and where to use LLMs. The fact that they are popular at *all*, let alone popular enough to be featured in mainstream media, shows that whatever this cognitive distortion is, it's widespread.
@glyph The "distortion" is from COVID: https://www.panaccindex.info/p/answered-does-covid-19-harm-the-brain
A facsimile/helper for _thinking_ seems pretty interesting if one suffers from brain fog, cognitive decline, neuroinflammation, etc.
-
Two statements I believe are consistently correct:
(1) Generative “AI” produces code significantly faster than humans do only when nobody takes sufficient time to understand it (not just in a narrow syntactic sense; also in the context of organizational needs, longer-term plans, interaction with other applications, etc.)
(2) Code nobody understands well is “technical debt” *by definition*, because it takes a disproportionate amount of time and brain power to change or improve.
Conclusion: unless software developers are incredibly disciplined, and have a level of time and autonomy they generally do not have in a major tech company, generative “AI” usage will *consistently* create large amounts of “tech debt”.
@dpnash @glyph
> “AI” usage will *consistently* create large amounts of “tech debt”
Um, no. There will be no technical debt in such products. Maintenance is too costly, and the shop owners would be tied to some protein techie. They will soon pivot to #disposable #software.
If some user files a bug, the whole thing will be generated anew with its prompt amended like "; make bug-description disappear". Possibly with a new UI/UX. For the better, because users will be trained not to report bugs but to make workarounds, as a bug report might make the protein serfs endure a UX change...
-
Furthermore, it is not "nuts" to dismiss the experience of an LLM user. In fact, you must dismiss all experiences of LLM users, even if the LLM user is yourself. Fly by instruments, because the cognitive fog is too thick for your eyes to see.
Because the interesting, novel thing about LLMs, the thing that makes them dangerous and interesting, is that they are, by design, epistemic disruptors.
They can produce symboloids more rapidly than any thinking mind. Repetition influences cognition.
@glyph it is nuts to dismiss the experience of a paint huffer
-
2. If it is "nuts" to dismiss this experience, then it would be "nuts" to dismiss mine: I have seen many, many high profile people in tech, who I have respect for, take *absolutely unhinged* risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology. It reads like a kind of cognitive decline. It's scary. And many of these people are *leaders* who use their influence to steamroll objections to these tools because they're "obviously" so good
> many high profile people in tech, who I have respect for, take absolutely unhinged risks with LLM technology that they have never, in decades-long careers, taken with any other tool or technology
Maybe they should have.
I also hate the LLM force-feeding, but even before they surged, the state of computing was becoming a smoldering wreck. Maybe those "leaders" just had bad judgment all along? IIRC most of them were either rubber-stamping or looking away from the IoT dumpster fire and organizing their curricula around the idea that users can't handle URLs responsibly.
-
@glyph Something that has gotten under my skin for the past year or so is seeing code changes - large refactors, porting a legacy tool to Rust, even minor bugfixes, things that would normally be a struggle to push through the inertia of code review - get fast-tracked when "the AI did it." These are the exact PRs I've written and tried to advocate for before, and eventually gave up on. The changes and their risks are the same; I can only conclude that the bar is lower for accepting "AI" contributions.
-
What I've observed very recently is that even intelligent people, experienced developers - who know perfectly well that LLMs are just generators of text from statistical models of what someone is likely to write - will still pull up AI-written search results and proceed on the automatic assumption that whatever they say is correct.
That is not a general observation. That was this morning, with some senior programmers trying to solve a problem that's prolonging a code freeze.
-
I don't want to be a catastrophist but every day I am politely asking "this seems like it might be incredibly toxic brain poison. I don't think I want to use something that could be a brain poison. could you show me some data that indicates it's safe?" And this request is ignored. No study has come out showing it *IS* a brain poison, but there are definitely a few that show it might be, and nothing in the way of a *successful* safety test.
@glyph you know what that reminds me of?
Bloodletting and handwashing
-
@glyph i've used the term "neural asbestos" before and it feels a lot like that may be the type of thing we're dealing with
And yet Doctorow thinks LLMs are great for him to use for copyediting. Maybe find a less hypocritical person to quote. All Gen AI horrifies me; I visualize environmental destruction with every "prompt."
@kirakira @glyph
https://floss.social/@sstendahl/116220713455956161