Why AI writing is so generic, boring, and dangerous: Semantic ablation.
(We can measure semantic ablation through entropy decay. By running a text through successive AI "refinement" loops, the vocabulary diversity (type-token ratio) collapses.)
Semantic ablation: Why AI writing is boring and dangerous
opinion: The subtractive bias we're ignoring
(www.theregister.com)
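(For anyone who wants to poke at this themselves: below is a minimal sketch, not from the article, of how one might track type-token ratio and word-level Shannon entropy across successive "refinement" passes. The tokenizer and function names are my own simplifications, and raw type-token ratio is length-sensitive, so only compare drafts of roughly similar size.)

# Minimal sketch (not the article's code): track vocabulary diversity and
# word-level entropy across successive AI "refinement" passes.
import math
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    # Crude lowercase word tokenizer; stands in for a real one.
    return re.findall(r"[a-z']+", text.lower())

def type_token_ratio(text: str) -> float:
    # Distinct words / total words; tends to fall as wording gets more generic.
    toks = tokens(text)
    return len(set(toks)) / len(toks) if toks else 0.0

def shannon_entropy(text: str) -> float:
    # Entropy (bits per word) of the word-frequency distribution.
    toks = tokens(text)
    total = len(toks)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(toks).values())

def measure_ablation(drafts: list[str]) -> None:
    # drafts[0] is the original text; drafts[1:] are successive AI rewrites.
    for i, draft in enumerate(drafts):
        print(f"pass {i}: TTR={type_token_ratio(draft):.3f}, "
              f"entropy={shannon_entropy(draft):.2f} bits/word")

# Usage: measure_ablation([original, rewrite_1, rewrite_2, ...])
# If the article's claim holds, both numbers should drift downward.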
Really great article, thanks for posting.
I know people who use ChatGPT to improve professional work letters. ChatGPT doesn't "improve" writing by refining meaning and nuance; it is designed to dumb writing down, in successive loops, to the most middling 6th-grade level of comprehension, thereby "improving" its "reach". That is the specifically designed feature of AI "editing".
I recently read a ChatGPT-"improved" letter that may violate the Civil Rights Act by using a term that is a red flag for a lawsuit. I alerted the sender, who confessed to using ChatGPT because her boss uses it and raves about it.
An ungraceful letter that keeps you legal is safer than a "clean" "professional" GPT letter that gets you sued. Know the risks!
-
@cstross Blandness as a service.
-
@cstross This gives some formal insight into the feeling I have when reading AI extrusion, which is that my eyes slide right off it. It's like eating one of those varieties of apple that have been excessively bred for appearance rather than taste, the promise of substance so at odds with reality that my brain revolts.
-
@cstross that explains why everything ends up sounding like a middle management wannabe on LinkedIn

-
@cstross "beigification' is aspect of the recursive pollution problem
Recursive Pollution and Model Collapse Are Not the Same | BIML
Forever ago in 2020, we identified "looping" as one of the "raw data in the world" risks. See An Architectural Risk Anal
Berryville Institute of Machine Learning (berryvilleiml.com)
-
@cstross Well, that explains why every time I tried #AI to improve a text snippet, I was very disappointed.
It always wants to convince me to remove any uncommon sentence structures and replace them with generic ones, which often strips any personality from the text.
If you ask it to make a small addition to an existing text, it likes to rephrase everything in a more generic way and is unable to add a layer of subtext. Honestly, it's just useless for writing, in my opinion.
-
@cstross this is indeed a very neat explanation of why the best possible outcome of an LLM is still terrible.
-
@cstross As someone who, besides having a little knowledge of LLMs, was once complimented on his choice of words by a native-speaking professor I hung around with at a conference for a few days, I am not surprised by this fact about LLMs. The professor, however, then seemed somewhat surprised by his own remark and added, "but I had a few beers".
-
@cstross “semantic ablation” is a concise way to describe that feeling of “I just read all the words but I can’t tell what they are trying to convey” that I have gotten after reading certain generated snippets of text.
-
Or in other words: "Why AI writes in corporate speak".
Depends on what level of the corporation you're speaking to. Management and marketing hype is in some ways the opposite, with heavy use of "signaling" words that serve little informational purpose; rather, they are meant to leave an impression. Think "disruptive", or "leverage" used as a verb...
-
@cstross there are worse related things.
We come to see anything that the AI cannot and does not produce as invalid, and so reading these bullshit, taupe texts shrinks our creative range, our sense of the possible, and our willingness to forge our own path or follow someone else blazing their own.
Narrowing the range of semantics to an average is one thing.
Strangling our range of ideas is another.
-
@cstross It is impossible to replace the human experience with a machine. The moment is, by its nature, sacrosanct; it's only in this atmosphere of gaming real estate insanity, where life's nature is just another bitcoin to earn, that we have lost our way.
-
I can't help seeing in that elements of 1984, where Orwell describes the successive reduction of vocabulary with the intended goal of making rebellious thought impossible.
-
@cstross the new Newspeak
-
@cstross neat article, thanks.
I had a realization a while ago that LLM writing came at me with the same vibe I caught when I was briefly a teacher, and again in the workplace, when I dealt with people who had unacknowledged literacy challenges. Young folks who assembled written work by cribbing from others and rearranging words "by shape" to fulfill the requirements always managed to convey zero meaningful thought.
-
Of course it does. So the result becomes more and more readable for the deliberately uneducated masses. Style? Content? Facts? Who cares?
-
If you use an LLM to make “objective” decisions or treat it like a reliable partner, you’re almost inevitably stepping into a script that you did not consent to: the optimized, legible, rational agent who behaves in ways that are easy to narrate and evaluate. If you step outside of that script, you can only be framed as incoherent.
That style can masquerade as truth because humans are pattern-matchers: we often read smoothness as competence and friction as failure. But rupture, in the form of contradiction, uncertainty, "I don't know yet," or grief that doesn't resolve, is often the truthful shape of the thing itself.
AI is part of the apparatus that makes truth feel like an aesthetic choice instead of a rupture. That optimization function operates as capture because it encourages you to keep talking to the AI in its format, where pain becomes language and language becomes manageable.
The only solution is to refuse legibility.
It's already beginning, where people speak the same words as always, but they don't mean the same things anymore from person to person.
New information from feedback that doesn't fit another's collapsed constraints for abstraction... can only be perceived as a threat, because if you demand truth from a system whose objective is stability under stress, it will treat truth as destabilizing noise.
Reality is what makes a claim expensive. A model tries to make a claim cheap.
Systems that treat closure as safety will converge to smooth, repeatable outputs that erase the remainder. A useful intervention is one that increases the observer’s ability to detect and resist premature convergence by exposing the hidden cost of smoothness and reinstating a legitimate place for uncertainty, contradiction, and falsifiability. But the intervention only remains non-doctrinal if it produces discriminative practice, not portable slogans.