Why AI writing is so generic, boring, and dangerous: Semantic ablation.

43 Posts 37 Posters
  • cstross@wandering.shop wrote:

    Why AI writing is so generic, boring, and dangerous: Semantic ablation.

    (We can measure semantic ablation through entropy decay. By running a text through successive AI "refinement" loops, the vocabulary diversity (type-token ratio) collapses.)

    Semantic ablation: Why AI writing is boring and dangerous
    opinion: The subtractive bias we're ignoring
    (www.theregister.com)
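A minimal sketch of the measurement described in the parenthetical above: run a text through repeated "refinement" passes and record the type-token ratio after each one. The `rewrite` callable below is an assumed stand-in for whatever model call performs a single refinement pass, not a real API.

```python
# Sketch: type-token ratio (TTR) across successive AI "refinement" loops.
# A steadily falling curve is the vocabulary collapse ("semantic ablation")
# described above. `rewrite` is a hypothetical callable supplied by the
# caller; it stands in for one pass through whatever model is being tested.
import re
from typing import Callable, List

def type_token_ratio(text: str) -> float:
    """Distinct word types divided by total word tokens (a crude diversity measure)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def ablation_curve(text: str, rewrite: Callable[[str], str], loops: int = 10) -> List[float]:
    """TTR of the original text, then after each successive refinement loop."""
    curve = [type_token_ratio(text)]
    for _ in range(loops):
        text = rewrite(text)  # e.g. prompt a model with "Please improve this text: ..."
        curve.append(type_token_ratio(text))
    return curve
```

One caveat: raw TTR falls as texts get longer, so the comparison is only meaningful if each pass keeps the text at roughly constant length (or if a windowed / moving-average TTR is used instead).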

    greenviv@social.vivaldi.net wrote (#20):

    @cstross It is impossible to replace the human experience with a machine. The moment is, by its nature, sacrosanct; it's only in this atmosphere of gaming-real-estate insanity, where life's nature is just another bitcoin to earn, that we have lost our way.

  • ghostonthehalfshell@masto.ai wrote (#21), in reply to cstross@wandering.shop:

      @cstross

    I can’t help seeing in that elements of 1984, where Orwell describes the successive reduction in vocabulary with the intended goal of making rebellious thought impossible.

  • daveduchene@mstdn.ca wrote (#22), in reply to cstross@wandering.shop:

        @cstross the new Newspeak

  • johannab@cosocial.ca wrote (#23), in reply to cstross@wandering.shop:

          @cstross neat article, thanks.

    I had a realization a while ago that LLM writing came at me with the same vibe I caught when I was briefly a teacher, and again in the workplace, where I dealt with people who had unacknowledged literacy challenges. Young folks who assembled written work by cribbing from others and rearranging words “by shape” to fulfill the requirements always managed to convey zero meaningful thought.

  • _ryekdarkener_@mastodon.social wrote (#24), in reply to cstross@wandering.shop:

            @cstross

    Of course it does. So the result becomes more and more readable for the deliberately uneducated masses. Style? Content? Facts? Who cares?

  • netraven@hear-me.social wrote (#25), in reply to cstross@wandering.shop:

              @cstross

              If you use an LLM to make “objective” decisions or treat it like a reliable partner, you’re almost inevitably stepping into a script that you did not consent to: the optimized, legible, rational agent who behaves in ways that are easy to narrate and evaluate. If you step outside of that script, you can only be framed as incoherent.

    That style can masquerade as truth because humans are pattern-matchers: we often read smoothness as competence and friction as failure. But rupture, in the form of contradiction, uncertainty, “I don’t know yet,” or grief that doesn’t resolve, is often the truthful shape of the thing itself.

              AI is part of the apparatus that makes truth feel like an aesthetic choice instead of a rupture. That optimization function operates as capture because it encourages you to keep talking to the AI in its format, where pain becomes language and language becomes manageable.

              The only solution is to refuse legibility.

              It's already beginning, where people speak the same words as always, but they don't mean the same things anymore from person to person.

    New information from feedback that doesn't fit another's collapsed constraints for abstraction... can only be perceived as a threat, because if you demand truth from a system whose objective is stability under stress, it will treat truth as destabilizing noise.

              Reality is what makes a claim expensive. A model tries to make a claim cheap.

              Systems that treat closure as safety will converge to smooth, repeatable outputs that erase the remainder. A useful intervention is one that increases the observer’s ability to detect and resist premature convergence by exposing the hidden cost of smoothness and reinstating a legitimate place for uncertainty, contradiction, and falsifiability. But the intervention only remains non-doctrinal if it produces discriminative practice, not portable slogans.

  • stompyrobot@mastodon.gamedev.place wrote (#26), in reply to cstross@wandering.shop:

    @cstross by putting a measurable number on this feature, you have now made it possible to train it out!

  • apostateenglishman@mastodon.world wrote (#27), in reply to cstross@wandering.shop:

                  @cstross I've previously described LLM-generated text as reading like "a middle management memo that no-one bothers reading". 🤷🏻‍♂️

  • kitkat_blue@mastodon.social wrote (#28), in reply to cstross@wandering.shop:

                    @cstross

    Gen AI and LLMs are the tools of fascism.

                    How?

                    Through ENSTUPIFACATION.

  • atax1a@infosec.exchange wrote (#29), in reply to cstross@wandering.shop:

                      @cstross it's the textual equivalent of prions

  • sassinake@mastodon.social wrote (#30), in reply to cstross@wandering.shop:

                        @cstross

                        nicely described by Orwell as

    'Newspeak'

  • 0gust1@merveilles.town wrote (#31), in reply to cstross@wandering.shop:

    @cstross Neural networks are, by their mathematical nature, lossy information-compressing artefacts!

  • phil_stevens@mastodon.nz wrote (#32), in reply to cstross@wandering.shop:

                            @cstross No surprise that we see the textual equivalent of mad cow disease.

  • cstross@wandering.shop wrote:

                              @malice @JdeBP The Register is a news site: everything has to be flensed and filed down to fit in a standard format and voice. That piece is probably all that's left of an original that was three times the length.

    pkraus@berlin.social wrote (#33):

    @malice @JdeBP @cstross That's fair. However, repeatedly including certain types of sentence construction, appealing or not, makes it look dodgy. Or just trolling. 😉

  • cstross@wandering.shop wrote (#34), in reply to pkraus@berlin.social:

                                @pkraus @malice @JdeBP I've been reading The Reg since 1997 or thereabouts. Their house style has history behind it, not LLMs. (I suspect they'd cop to trolling from time to time, though.)

  • noodlemaz@mstdn.games wrote (#35), in reply to cstross@wandering.shop:

    @cstross ironically got a Google Cloud genAI and ML ad right in the middle of that.

  • jmj@hachyderm.io wrote (#36), in reply to cstross@wandering.shop:

    @cstross hmmm, that might also explain why AI seems more effective for code.
    For the most part you want a reversion to the mean in code. Novel solutions are only needed at the cutting edge, where you're trying to make the computer do something that's not been done before.

  • cstross@wandering.shop wrote (#37), in reply to jmj@hachyderm.io:

                                      @Jmj Yes. Also I suspect the semantic expressiveness of programming languages is far narrower than that of human languages: they're more precise, but it's much harder (though not impossible!) to write poetry in them. So there's less risk of losing something unique by generating output that tends to occupy the middle of the bell curve.

  • perigrin@ack.nerdfight.online wrote (#38), in reply to cstross@wandering.shop:

                                        @cstross @Jmj I mean I think one could make a coherent argument that programming *is* poetry: reduced syntax, enforced structure, heavy use of metaphor…

                                        It’s just most programming topics make Vogon poetry look exciting.
  • rowat_c@mastodon.social wrote (#39), in reply to cstross@wandering.shop:

                                          @cstross "Model collapse", Shumailov, Shumaylov & Papernot (2024), Nature : https://www.nature.com/articles/s41586-024-07566-y
