The constant mental vigilance in a generative world is exhausting.

Uncategorized · 12 Posts · 8 Posters · 18 Views
  • mttaggart@infosec.exchange #1

    The constant mental vigilance in a generative world is exhausting.

    "I asked Claude to do $thing and it did this!"

    No it didn't. No you didn't. Probably none of that happened.

    And somehow, being unwilling to admit the thing is just making stuff up is what's annoying and unnecessary, not the damn model.

  • jrdepriest@infosec.exchange #2

    @mttaggart

    I can't get people to understand that the "hallucination" problem is unsolvable, because "hallucination" is how it works. That's all it does: next tokens, predicted from the whole previous series of tokens that represent "the conversation" being had between prompts and responses, combined with the hidden prompts that give the thing its flavor. The fact that it is "right" isn't part of it.

    That's why they never say "I don't know". They don't know anything. They are literally making it up every single time. It's why they are so expensive and why they are ruining the environment. There is no recall, no memory, no "knowing". As I've seen it said elsewhere, "there is no 'there' there".

    It's worse than the Chinese Room thought experiment, because at least that produces correct responses. This creates the illusion of a correct response. We are killing the earth and building an inescapable surveillance state around technology that will never get any better than it is right now.

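    To make the mechanism concrete: below is a minimal, toy-scale sketch of the autoregressive loop described above. A bigram count table stands in for a trained transformer; everything here is illustrative, not any vendor's implementation. The loop only ever samples a plausible next token given the tokens so far; whether the continuation is true never enters into it.

    ```python
    import random
    from collections import defaultdict

    # Toy stand-in for a trained model: bigram counts over a tiny corpus.
    # A real LLM replaces this table with a transformer that scores every
    # vocabulary token, but the generation loop has the same shape.
    corpus = "the model predicts the next token given the previous tokens".split()
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(context):
        """Sample a next token conditioned on the context.

        Nothing here checks whether the continuation is *true*; the only
        signal is which continuations were frequent in the training data.
        """
        dist = counts[context[-1]]
        if not dist:  # unseen context: the model still has to emit something
            return random.choice(corpus)
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    # Generation is just appending samples to the running context, which is
    # why there is no point at which "I don't know" can arise on its own.
    context = ["the"]
    for _ in range(8):
        context.append(sample_next(context))
    print(" ".join(context))
    ```
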
  • mttaggart@infosec.exchange #3

      @jrdepriest The one that gets me is the "reasoning" models. They're just making up more text to fluff the context! No thought is happening, nor can it! It's maddening.

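    A hedged sketch of what that looks like mechanically (the `sample` function below is a hypothetical, canned stand-in for the next-token loop above, just to keep the example runnable): the "thoughts" are ordinary generated text, appended to the context before the visible answer is sampled.

    ```python
    def sample(prompt: str) -> str:
        """Hypothetical stand-in for an autoregressive sampler; a real model
        would extend `prompt` token by token. Canned output keeps it runnable."""
        return {"Thoughts:": " the user asked about X, therefore...",
                "Answer:": " X."}[prompt.rsplit("\n", 1)[-1]]

    def reasoning_model(question: str) -> str:
        # Pass 1: emit "thinking" tokens -- mechanically, just more sampled text.
        thoughts = sample(f"Question: {question}\nThoughts:")
        # Pass 2: sample the visible answer, now conditioned on the question
        # plus the model's own generated filler. There is no separate
        # reasoning engine; the "thoughts" only change what the answer is
        # conditioned on.
        return sample(f"Question: {question}\nThoughts:{thoughts}\nAnswer:")

    print(reasoning_model("What is X?"))
    ```
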
  • fartnuggets@jorts.horse #4

    @jrdepriest @mttaggart it's intelligence theatre: the appearance of words that resemble intellect. I can't describe how much it frustrates me that so many are happy to accept the illusion.

  • fireye@peoplemaking.games #5

    @jrdepriest @mttaggart hallucination is an awful term for it; it implies a form of perception that is being undermined, when no such perception exists. A philosophy professor of mine refers to model output as "bullshit", in that it does not distinguish between truth and falsehood, only seeking to accurately reproduce language patterns.

  • delta_vee@cosocial.ca #6

    @mttaggart The required hypervigilance is exhausting and beyond human capacity to maintain, and so few will admit they can't do it. (Then there are the ones who take *pride* in refusing vigilance, and I consider them some kind of mad.)

  • jrdepriest@infosec.exchange #7

              @mttaggart

              It's the same "model" your know-it-all uncle uses every Thanksgiving: bloviation.

  • dannotdaniel@hellions.cloud #8

    @mttaggart @jrdepriest it's just working backwards to "explain" its bullshit answer, or so I heard.

    If so, that's a straight-up con.

  • crankylinuxuser@infosec.exchange #9

                  @mttaggart

    That's what you and most AI haters miss.

    LLMs are right like 90% or so of the time. Even 80B models are right a big majority of the time. This is why normies and managerial staff are like "this is amazing". They don't know what they don't know.

    It's that last 10% that breaks all sorts of stuff and people. And you have to be a specialist in that 10% to know when it's bullshitting/hallucinating/lying to you.

  • crankylinuxuser@infosec.exchange #10

                    @fireye @jrdepriest @mttaggart

    In reality, it's high-dimensional vector calculus over the trained material and your words in your context.

    All these LLM workings are just the vector calculus that Leibniz devised in the 1700s; we only had the compute to do it in 2012.

    It's not thinking, yet. It's not intelligence. It's a stochastic parrot trained on TBs of data, following Leibniz's dream of word-calculus.

    We still don't know how consciousness works, or how to create a thinking machine.
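
    Strictly speaking, the core operation is linear algebra over high-dimensional vectors rather than calculus, but the point stands. A minimal sketch (toy dimensions, random numbers standing in for learned weights) of the scaled dot-product attention at the heart of a transformer layer:

    ```python
    import numpy as np

    # Toy setup: one 4-dimensional vector per context token. Real models use
    # thousands of dimensions and learned query/key/value projections.
    rng = np.random.default_rng(0)
    context_vectors = rng.normal(size=(5, 4))  # 5 tokens already in context
    query = rng.normal(size=4)                 # vector for the newest token

    # Scaled dot-product attention: similarity scores -> softmax weights ->
    # weighted average of the context vectors. Linear algebra end to end;
    # no step in this pipeline consults a notion of truth.
    scores = context_vectors @ query / np.sqrt(4)
    weights = np.exp(scores) / np.exp(scores).sum()
    attended = weights @ context_vectors  # what gets passed up the stack
    print(weights.round(3), attended.round(3))
    ```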

  • ahltorp@mastodon.nu #11

    @crankylinuxuser @mttaggart It's not "right like 90% or so of the time". The process of being "right" happens in your head. You do the interpretation of the data streaming from the model. The bots are nothing without a human at the other end, as a crutch, using a unidirectional, blatant exploitation of Grice's Cooperative Principle.

  • ahltorp@mastodon.nu #12

                        @crankylinuxuser @mttaggart This is of course most apparent in the original Eliza, since looking at the rules for Eliza gives us an immediate and very comprehensible peek behind the curtain. The exploitation of the Cooperative Principle is so obvious there that you cannot really deny it.

                        Modern chatbots just have a better way of hiding it, especially since the apparatus behind the curtain is unfathomably huge.
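
    For anyone who has not taken that peek: a minimal sketch of the kind of rule Eliza ran on (the two rules here are made up for illustration; Weizenbaum's original DOCTOR script was a longer list of exactly this sort of keyword pattern, with rankings and fallbacks). The human supplies all of the apparent understanding.

    ```python
    import re

    # Eliza-style rules: regex pattern -> canned response template.
    RULES = [
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    ]
    FALLBACK = "Please go on."

    # Swap first/second person so the echoed fragment reads like a reply.
    SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

    def eliza(utterance: str) -> str:
        for pattern, template in RULES:
            m = pattern.search(utterance)
            if m:
                fragment = " ".join(SWAPS.get(w.lower(), w)
                                    for w in m.group(1).split())
                return template.format(fragment)
        return FALLBACK  # no rule matched; the Cooperative Principle does the rest

    print(eliza("I am worried about my project"))
    # -> "How long have you been worried about your project?"
    ```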
