Don't anthropomorphize LLMs, language is important.

19 Posts 16 Posters 2 Views
This topic has been deleted. Only users with topic management privileges can see it.
gabrielesvelto@mas.to
#1

    Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.

andres4ny@social.ridetrans.it
#2

    @gabrielesvelto I mean, it doesn't help that the bots are doing this bullshit: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-12-silence-in-open-source-a-reflection.html

    This is clearly intended to trick humans.

assimilateborg@kind.social
#3

      @gabrielesvelto ➡️ I already don't 🚀 read text ✏️ which looks 👀 like this one.❗

kinou@lgbtqia.space
#4

@Andres4NY @gabrielesvelto

I might have missed a chapter, but my interpretation is that someone prompted their LLM to generate this text and then posted it, no? The way it was narrated, it's as if the LLM reacted to the prompt "PR closed" by creating a blog post. But to do that, you need a human operator, no?

bit@ohai.social
#5

          @gabrielesvelto Even describing their errors as hallucinations is the same attempt to humanize it.

gabrielesvelto@mas.to
#6

            @kinou @Andres4NY not necessarily, or at least not as a follow-up. The operator might have primed the bot to follow this course of action in the original prompt, and included all the necessary permissions to let it publish the generated post automatically.

gabrielesvelto@mas.to
#7

              @bit absolutely, and it gives people the impression that they have failure modes, which they don't. Their output is text which they cannot verify, so whether the text is factually right or wrong is irrelevant. Both are valid and completely expected outputs.

crovanian@mastodon.social
#8

                @gabrielesvelto “This Document Contains Machine Generated Text” but it’s a pair of knuckle dusters with typewriter caps.
                The document is yo binch as

gbargoud@masto.nyc
#9

@kinou @Andres4NY @gabrielesvelto

Not necessarily. It just needs access to an API for publishing blog posts, and some training data that got it to auto-complete "I got my PR rejected because it was garbage" with "and then wrote a blog post about it".

A lot of people have provided that training data.

giacomo@snac.tesio.it
#10
                    @gabrielesvelto@mas.to

Even talking about "text", in the context of #LLM, is a subtle anthropomorphization.

Text is a sequence of symbols used by human minds to express information that they want to synchronize, a little, with other human minds (aka communicate).

Such synchronization is always partial and imperfect, since each mind has different experiences and information that will integrate the new message, but it's good enough to allow humanity to collaborate and to build culture and science.

A statistically programmed piece of software has no mind, so even when it's optimized to produce output that can fool a human and pass the #Turing test, such output holds no meaning, since no human experience or thought is expressed there.

It's just the partial decompression of a lossy compression of a huge amount of text. And as if that weren't enough to show the lack of any meaning, the decompression process includes random input that is there to provide the illusion of autonomy.

So instead of "the AI replied" I'd suggest "the bot computed this output", and instead of "this work is AI-assisted" I'd suggest "this is statistically computed output".
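The "random input" in the decompression step can be made concrete with a toy sketch (everything here — the token list, the scores, the function names — is hypothetical illustration, not any real model's code): the same fixed scores over candidate tokens, pushed through a temperature-scaled softmax and weighted random sampling, produce different "replies" under different seeds. All the variation comes from the injected randomness, not from any decision.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Higher temperature flattens it, i.e. adds more randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, rng, temperature=1.0):
    """Pick one token at random, weighted by the softmax distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy "model output": fixed scores for a handful of candidate tokens.
tokens = ["yes", "no", "maybe", "unclear"]
logits = [2.0, 1.5, 0.5, 0.1]

# The scores never change between runs; only the random seed does,
# yet the sampled "reply" can differ — the illusion of autonomy.
for seed in range(3):
    rng = random.Random(seed)
    print(sample_token(tokens, logits, rng))
```

The sampling step is the whole trick: deterministic arithmetic plus a die roll, with no room anywhere for an expressed thought.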
andres4ny@social.ridetrans.it
#11

                      @gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed, and how much is automated. Like, if a bot is trained on aggressive attempts to get patches merged, then that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.

irfelixr@discuss.systems
#12

                        @gabrielesvelto
                        Yes 💯

nini@oldbytes.space
#13

@gabrielesvelto "This is digital noise your brain perceives as words, like a pareidolic blob or a shadow cast on a wall. Do not interpret it as anything other than dirt smears on the window of reality that remind you of information."

mark@mastodon.fixermark.com
#14

                            @gabrielesvelto We can try, but you're admonishing a species that talks to potted plants and holds one-sided conversations with washing machines.

                            It's gonna be a steep hill, is what I'm saying.

cb@social.lol
#15

                              @gabrielesvelto The other day my wife showed me a video of ChatGPT communicating with a male voice. At first, I referred to "him" and immediately corrected that to "it."

jrdepriest@infosec.exchange
#16

                                @mark @gabrielesvelto

                                At least potted plants are living things.

                                And nobody tries to say a washing machine will magically birth AGI (as far as I know).

                                It's not the "talking to things" part that's madness. It's the belief that a machine that can match tokens and spit out some text that resembles a valid reply is a sign of true intelligence.

When I punch 5 * 5 into a calculator and hit =, I shouldn't ascribe the glowing 25 to any machine intelligence. It should be the same for LLM-powered genAI, but that "natural language" throws us off. Our brains aren't used to dealing with (often) coherent language generated by an unthinking statistical engine doing math on giant matrices.
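That "math on giant matrices" can be shown at calculator scale with a deliberately tiny sketch (a hypothetical toy — the vocabulary, vector, and weights are made up, and real models are vastly larger): a context vector multiplied by a weight matrix yields one score per candidate token, and the "reply" is simply whichever index scores highest. Nothing in the arithmetic is different in kind from pressing = on a calculator.

```python
# Toy next-token step: hidden state (3-dim) times weight matrix (3x4)
# gives one score per candidate token. No step involves meaning.
tokens = ["the", "cat", "sat", "mat"]

hidden = [0.2, -0.1, 0.7]   # toy "context" vector
W = [                        # one column of weights per candidate token
    [0.1, 0.4, -0.3, 0.0],
    [0.0, 0.2, 0.5, -0.1],
    [0.3, -0.2, 0.8, 0.2],
]

# Matrix-vector product: a score for each candidate token.
scores = [sum(hidden[i] * W[i][j] for i in range(3)) for j in range(4)]

# The "reply" is just the index of the biggest number.
next_token = tokens[scores.index(max(scores))]
print(next_token)  # → sat
```

Scale this up by billions of parameters and add a sampling step, and you have the whole mechanism behind the seemingly thoughtful reply.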

orangefloss@mastodon.social
#17

@gabrielesvelto Couldn't agree more with this ethic. The psychological impacts of users (i.e. society) believing that LLMs are people, and of LLMs filling roles that actual humans should, will probably unfold over years and decades. All because regulators circa 2024/5/6 believed it was overreach to demand that LLMs not use anthropomorphic language and narrative style. Prompt: "what do you think?" Reply: "There is no 'I'. This is a machine-generated response, not a conscious self." Sounds better to me.

rupert@mastodon.nz
#18

                                    @gabrielesvelto I'm trying to get people to use the neologism "apokrisoid" for an answer-shaped object. The LLM does not and cannot produce actual answers.
                                    #apokrisoid

opalideas@mindly.social
#19

@gabrielesvelto Exactly. But the media (and hence the public) like to use short forms, whether accurate or not. I do a presentation to folks about AI (The Good, the Bad and the Ugly), after which everybody keeps referring to "AI", not machine language. !!!!!
