For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Uncategorized · 10 Posts · 10 Posters
#1 · thomasfuchs@hachyderm.io

For the 1,000th time: "AI" does not have agency and cannot think and cannot act.

Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".

They literally do one thing and one thing only: string tokens together based on the statistical proximity of tokens in a data corpus.

If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even go touch grass.
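[Editor's note: for concreteness, here is a toy sketch of what "stringing tokens together based on statistical proximity" means. The corpus and the bigram model are made up for illustration; production LLMs use neural networks over far larger contexts, but the generation loop — repeatedly sampling a statistically likely next token — has the same shape.]

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus; tokens are just whitespace-separated words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each token, how often each other token immediately follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """String tokens together by repeatedly sampling a likely successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        counts = follows[out[-1]]
        if not counts:          # token never appeared mid-corpus: stop
            break
        tokens, weights = zip(*counts.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

There is no plan, goal, or understanding anywhere in that loop; the output is fluent-looking only because the statistics came from fluent text.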


#2 · madengineering@mastodon.cloud

@thomasfuchs Lately they've taken up the distinctly stupid idea of letting the chatbot effectively type commands directly into your shell, executed as if you typed them yourself, while just telling it not to type certain commands. Which it doesn't understand, so it does them anyway.
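[Editor's note: a minimal sketch of the alternative implied here — enforcing the restriction in code, outside the model, before anything reaches the shell, instead of asking the model nicely in a prompt. `DENYLIST` and `is_allowed` are hypothetical names for the example; a real agent harness would need much stricter policy than a denylist.]

```python
import shlex

# Programs the harness refuses to run, no matter what the model proposes.
DENYLIST = {"rm", "dd", "mkfs", "shutdown"}

def is_allowed(command: str) -> bool:
    """Reject any proposed command whose program is on the denylist."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False          # unparseable input: refuse by default
    return bool(tokens) and tokens[0] not in DENYLIST

print(is_allowed("ls -la"))      # True
print(is_allowed("rm -rf /"))    # False
```

Note that even this is trivially bypassed (e.g. `bash -c "rm -rf /"` passes the check), which is why serious harnesses sandbox execution rather than filter command strings.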


#3 · sinvega@mas.to

@thomasfuchs I really, really wish people would stop with "hallucinated" when "fabricated" is both right there and more accurate


#4 · cora@hachyderm.io

@thomasfuchs Frankly I think it’s more plausible to describe the thought process of many humans in terms of token assemblage than the other way around.


#5 · michaelgemar@cosocial.ca

@thomasfuchs @WeirdWriter I really think that regulations should insist that LLM software be configured not to refer to “itself” with personal pronouns, imply it has emotional states, or use any of the other rhetorical tricks it has been programmed with to appear “human”.


#6 · tambourineman@mastodon.cloud

@thomasfuchs We don't know what makes someone wake up in the morning and decide to climb a mountain or quit their job.
It may be a completely different process, or there might be something to this pattern-matching statistical thing.
Do ants have agency? Do ant colonies?

We definitely must regulate the shit out of these big tech companies.
But saying that X does not do Y, when both are poorly understood and poorly defined, is not the way, IMO.


#7 · nimro@hachyderm.io

@sinvega This paper makes a compelling case for using the academic term “bullshit”: https://arxiv.org/abs/2507.07484


#8 · frog_reborn@mstdn.social

@thomasfuchs

The first two don't really make sense to me. A virus can "evade safeguards" and a meteorite can "destroy things", so there doesn't have to be much agency involved in the first place.

The third seems like a more fitting criticism, but in all three cases I'm also not sure how else one would phrase it.


#9 · eric_neue@indieweb.social

@thomasfuchs I wish we could educate the public that LLMs would be more accurately described as “simulated intelligence”, but I can’t figure out how to explain the difference to normies at all.


#10 · slotos@toot.community

@thomasfuchs You don’t need agency to evade safeguards, destroy things, or ignore instructions. `rm` can do it.

This is literally the mistake the people you criticize are making: imbuing intent where there’s none.

The underlying tech has been adept at finding ways to circumvent feedback loops since before the bubble. That behavior is constrained to the training phase, but since verification of commercial models is mathematically infeasible, these avoidance patterns ship directly to users.
