Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Cyborg)
  • No Skin
Collapse
Brand Logo

CIRCLE WITH A DOT

  1. Home
  2. Uncategorized
  3. So, something that's been bugging the shit out of me?

So, something that's been bugging the shit out of me?

Scheduled Pinned Locked Moved Uncategorized
74 Posts 32 Posters 2 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • munin@infosec.exchangeM munin@infosec.exchange

    So, something that's been bugging the shit out of me?

    These fucking assholes who let LLMs run rampant and delete prod?

    They query the LLM for "why" it did that.

    This is delusional behavior.

    LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

    LLMs do not have the ability to have motivation. It is a machine.

    LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

    which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

    It cannot have a why;
    It cannot have a self to have motivations;
    And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

    Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

    Touch some grass and get a fucking therapist.

    grumpusnation@mastodon.socialG This user is from outside of this forum
    grumpusnation@mastodon.socialG This user is from outside of this forum
    grumpusnation@mastodon.social
    wrote last edited by
    #7

    @munin <me staring at the cat> So, why'd you bite me, bro?
    cat: …
    me: I mean, that's really bleeding there…
    cat: …
    me: Like, I gotta get a bandage and everything…?
    cat: mrp?

    1 Reply Last reply
    0
    • munin@infosec.exchangeM munin@infosec.exchange

      So, something that's been bugging the shit out of me?

      These fucking assholes who let LLMs run rampant and delete prod?

      They query the LLM for "why" it did that.

      This is delusional behavior.

      LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

      LLMs do not have the ability to have motivation. It is a machine.

      LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

      which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

      It cannot have a why;
      It cannot have a self to have motivations;
      And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

      Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

      Touch some grass and get a fucking therapist.

      beeoproblem@mastodon.gamedev.placeB This user is from outside of this forum
      beeoproblem@mastodon.gamedev.placeB This user is from outside of this forum
      beeoproblem@mastodon.gamedev.place
      wrote last edited by
      #8

      @munin I bet there are real logs to be grabbed from LLMs but they would look more like some kind of horrifically tangled node graph than anything readable by a normal human. I strongly doubt any AI company would actually allow users access to those kind of real debug logs from their models since that would be a goldmine for anyone looking to clone them.

      munin@infosec.exchangeM 1 Reply Last reply
      0
      • beeoproblem@mastodon.gamedev.placeB beeoproblem@mastodon.gamedev.place

        @munin I bet there are real logs to be grabbed from LLMs but they would look more like some kind of horrifically tangled node graph than anything readable by a normal human. I strongly doubt any AI company would actually allow users access to those kind of real debug logs from their models since that would be a goldmine for anyone looking to clone them.

        munin@infosec.exchangeM This user is from outside of this forum
        munin@infosec.exchangeM This user is from outside of this forum
        munin@infosec.exchange
        wrote last edited by
        #9

        @beeoproblem

        I'm not talking about anything from the LLM provider's side.

        I am talking about the end user here, who has failed to set up their environment properly with logging and monitoring implemented to understand what the fuck is going on in their own production environment.

        beeoproblem@mastodon.gamedev.placeB 1 Reply Last reply
        0
        • munin@infosec.exchangeM munin@infosec.exchange

          @jackryder

          It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.

          J This user is from outside of this forum
          J This user is from outside of this forum
          jackryder@infosec.exchange
          wrote last edited by
          #10

          @munin And absolutely are incentivized by doing it.

          1 Reply Last reply
          0
          • munin@infosec.exchangeM munin@infosec.exchange

            @beeoproblem

            I'm not talking about anything from the LLM provider's side.

            I am talking about the end user here, who has failed to set up their environment properly with logging and monitoring implemented to understand what the fuck is going on in their own production environment.

            beeoproblem@mastodon.gamedev.placeB This user is from outside of this forum
            beeoproblem@mastodon.gamedev.placeB This user is from outside of this forum
            beeoproblem@mastodon.gamedev.place
            wrote last edited by
            #11

            @munin fair. My point was somewhat tangential. The LLM providers most likely don't want to provide useful logs to the end user in the first place because of their paranoia about being reverse-engineered. End users are behind the 8 ball from the start.

            IMO "Agentic" tools should be hurled into the sun. Not asked nicely "pweeze tell me why did you delete all the things?"

            munin@infosec.exchangeM 1 Reply Last reply
            0
            • beeoproblem@mastodon.gamedev.placeB beeoproblem@mastodon.gamedev.place

              @munin fair. My point was somewhat tangential. The LLM providers most likely don't want to provide useful logs to the end user in the first place because of their paranoia about being reverse-engineered. End users are behind the 8 ball from the start.

              IMO "Agentic" tools should be hurled into the sun. Not asked nicely "pweeze tell me why did you delete all the things?"

              munin@infosec.exchangeM This user is from outside of this forum
              munin@infosec.exchangeM This user is from outside of this forum
              munin@infosec.exchange
              wrote last edited by
              #12

              @beeoproblem

              they don't provide those because they don't have them.

              look at the 'claude' leak: it is all vibed slop.

              there are no logs. there is no human understanding of these systems. there is no intent or competence. it is all defective horseshit.

              1 Reply Last reply
              0
              • munin@infosec.exchangeM munin@infosec.exchange

                So, something that's been bugging the shit out of me?

                These fucking assholes who let LLMs run rampant and delete prod?

                They query the LLM for "why" it did that.

                This is delusional behavior.

                LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                LLMs do not have the ability to have motivation. It is a machine.

                LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                It cannot have a why;
                It cannot have a self to have motivations;
                And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                Touch some grass and get a fucking therapist.

                sinvega@mas.toS This user is from outside of this forum
                sinvega@mas.toS This user is from outside of this forum
                sinvega@mas.to
                wrote last edited by
                #13

                @munin see also: "hallucinated". fucking STOP IT. It did not hallucinate anything it is not even remotely capable of thinking or imagining or understanding anything at all, ever, and never will be. It just regurgitates shit that looks statistically similar to the words you put in

                I'm so tired of having to revise my expectations of people downwards AGAIN. I did not know a hole this deep was possible

                theeclecticdyslexic@mstdn.socialT mkj@social.mkj.earthM 2 Replies Last reply
                0
                • sinvega@mas.toS sinvega@mas.to

                  @munin see also: "hallucinated". fucking STOP IT. It did not hallucinate anything it is not even remotely capable of thinking or imagining or understanding anything at all, ever, and never will be. It just regurgitates shit that looks statistically similar to the words you put in

                  I'm so tired of having to revise my expectations of people downwards AGAIN. I did not know a hole this deep was possible

                  theeclecticdyslexic@mstdn.socialT This user is from outside of this forum
                  theeclecticdyslexic@mstdn.socialT This user is from outside of this forum
                  theeclecticdyslexic@mstdn.social
                  wrote last edited by
                  #14

                  @sinvega

                  I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, mearely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.

                  Any other interpretation is anthropomorphism all the way down.

                  It might get things correct, but this is the same way random pictures of clocks may show the right time.

                  @munin

                  rupert@mastodon.nzR sabik@rants.auS 2 Replies Last reply
                  0
                  • theeclecticdyslexic@mstdn.socialT theeclecticdyslexic@mstdn.social

                    @sinvega

                    I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, mearely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.

                    Any other interpretation is anthropomorphism all the way down.

                    It might get things correct, but this is the same way random pictures of clocks may show the right time.

                    @munin

                    rupert@mastodon.nzR This user is from outside of this forum
                    rupert@mastodon.nzR This user is from outside of this forum
                    rupert@mastodon.nz
                    wrote last edited by
                    #15

                    @theeclecticdyslexic @sinvega @munin I like talking *about* bullshit, especially when you tell people you mean in the sense coined by Harry Frankfurt in his 1986 paper, and they look at you like this is an example, isn't it?

                    1 Reply Last reply
                    0
                    • munin@infosec.exchangeM munin@infosec.exchange

                      So, something that's been bugging the shit out of me?

                      These fucking assholes who let LLMs run rampant and delete prod?

                      They query the LLM for "why" it did that.

                      This is delusional behavior.

                      LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                      LLMs do not have the ability to have motivation. It is a machine.

                      LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                      which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                      It cannot have a why;
                      It cannot have a self to have motivations;
                      And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                      Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                      Touch some grass and get a fucking therapist.

                      arclight@oldbytes.spaceA This user is from outside of this forum
                      arclight@oldbytes.spaceA This user is from outside of this forum
                      arclight@oldbytes.space
                      wrote last edited by
                      #16

                      @munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds or it fails or or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply deeply delusional, like believing socks are sentient because you made a sock puppet once.

                      crowbriarhexe@tech.lgbtC pikesley@mastodon.me.ukP smartmanapps@dotnet.socialS rubinjoni@mastodon.socialR 4 Replies Last reply
                      0
                      • munin@infosec.exchangeM munin@infosec.exchange

                        So, something that's been bugging the shit out of me?

                        These fucking assholes who let LLMs run rampant and delete prod?

                        They query the LLM for "why" it did that.

                        This is delusional behavior.

                        LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                        LLMs do not have the ability to have motivation. It is a machine.

                        LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                        which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                        It cannot have a why;
                        It cannot have a self to have motivations;
                        And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                        Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                        Touch some grass and get a fucking therapist.

                        varpie@peculiar.floristV This user is from outside of this forum
                        varpie@peculiar.floristV This user is from outside of this forum
                        varpie@peculiar.florist
                        wrote last edited by
                        #17

                        @munin Well... As you mentioned, each generation "reads the prompt and any cache, if they exist, from prior session", and since they were trained on "explaining" their previous outputs to sound like a relevant discussion, asking why a model gave a specific output isn't as stupid as you make it out to be, as it can give some input on the "thinking" part of the previous output that is usually not directly visible. That can then be used to tweak prompts and add some guardrails (even though there is a fairly long list of examples of guardrails not being fully effective). Of course, the first problem is giving access to prod to an unreliable system...

                        1 Reply Last reply
                        0
                        • arclight@oldbytes.spaceA arclight@oldbytes.space

                          @munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds or it fails or or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply deeply delusional, like believing socks are sentient because you made a sock puppet once.

                          crowbriarhexe@tech.lgbtC This user is from outside of this forum
                          crowbriarhexe@tech.lgbtC This user is from outside of this forum
                          crowbriarhexe@tech.lgbt
                          wrote last edited by
                          #18

                          @arclight @munin Don’t drag Tubey into this 😭

                          arclight@oldbytes.spaceA 1 Reply Last reply
                          0
                          • munin@infosec.exchangeM munin@infosec.exchange

                            So, something that's been bugging the shit out of me?

                            These fucking assholes who let LLMs run rampant and delete prod?

                            They query the LLM for "why" it did that.

                            This is delusional behavior.

                            LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                            LLMs do not have the ability to have motivation. It is a machine.

                            LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                            which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                            It cannot have a why;
                            It cannot have a self to have motivations;
                            And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                            Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                            Touch some grass and get a fucking therapist.

                            clarablackink@writing.exchangeC This user is from outside of this forum
                            clarablackink@writing.exchangeC This user is from outside of this forum
                            clarablackink@writing.exchange
                            wrote last edited by
                            #19

                            @munin It seems to tie in which the trend that started awhile back (2016-2018) where people would respond to other people online by telling them to "go google it".

                            It came from a place of frustration with reply guys but I think there was a cultural effect of people being told to fuck off and stop asking actual people questions.

                            Curiosity deflected is often a social wound.

                            Communities that handled dumb questions well always felt like places of refuge. Things shifted and it opened the door.

                            clarablackink@writing.exchangeC 1 Reply Last reply
                            0
                            • clarablackink@writing.exchangeC clarablackink@writing.exchange

                              @munin It seems to tie in which the trend that started awhile back (2016-2018) where people would respond to other people online by telling them to "go google it".

                              It came from a place of frustration with reply guys but I think there was a cultural effect of people being told to fuck off and stop asking actual people questions.

                              Curiosity deflected is often a social wound.

                              Communities that handled dumb questions well always felt like places of refuge. Things shifted and it opened the door.

                              clarablackink@writing.exchangeC This user is from outside of this forum
                              clarablackink@writing.exchangeC This user is from outside of this forum
                              clarablackink@writing.exchange
                              wrote last edited by
                              #20

                              @munin Also, not arguing against anything you've said.

                              Its been sad seeing different online places slowly give over the human side of the community to various resources that paved the way towards LLM dependence.

                              I've been online since the 90s and folks have always had shitty moments but the outsourcing of community knowledge to google did seem to prime folks who are more vulnerable in ways that LLMs are calibrated to cater to.

                              1 Reply Last reply
                              0
                              • arclight@oldbytes.spaceA arclight@oldbytes.space

                                @munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds or it fails or or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply deeply delusional, like believing socks are sentient because you made a sock puppet once.

                                pikesley@mastodon.me.ukP This user is from outside of this forum
                                pikesley@mastodon.me.ukP This user is from outside of this forum
                                pikesley@mastodon.me.uk
                                wrote last edited by
                                #21

                                @arclight @munin

                                "I wrote 'I am a conscious being' on a piece of paper and put it in a photocopier. What happened next will shock you"

                                1 Reply Last reply
                                0
                                • munin@infosec.exchangeM munin@infosec.exchange

                                  So, something that's been bugging the shit out of me?

                                  These fucking assholes who let LLMs run rampant and delete prod?

                                  They query the LLM for "why" it did that.

                                  This is delusional behavior.

                                  LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                                  LLMs do not have the ability to have motivation. It is a machine.

                                  LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                                  which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                                  It cannot have a why;
                                  It cannot have a self to have motivations;
                                  And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                                  Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                                  Touch some grass and get a fucking therapist.

                                  juergen_hubert@mementomori.socialJ This user is from outside of this forum
                                  juergen_hubert@mementomori.socialJ This user is from outside of this forum
                                  juergen_hubert@mementomori.social
                                  wrote last edited by
                                  #22

                                  @munin

                                  It's a cargo cult.

                                  1 Reply Last reply
                                  0
                                  • crowbriarhexe@tech.lgbtC crowbriarhexe@tech.lgbt

                                    @arclight @munin Don’t drag Tubey into this 😭

                                    arclight@oldbytes.spaceA This user is from outside of this forum
                                    arclight@oldbytes.spaceA This user is from outside of this forum
                                    arclight@oldbytes.space
                                    wrote last edited by
                                    #23

                                    @crowbriarhexe @munin I had a sock puppet character that loved Brazilian steakhouse ("Fogo de Chão! MEAT ON SWORDS!") and was a huge proponent of self-betterment through community college. But that was me - the sock was just a vessel, a conduit.

                                    1 Reply Last reply
                                    0
                                    • R relay@relay.publicsquare.global shared this topic
                                      R relay@relay.mycrowd.ca shared this topic
                                    • theeclecticdyslexic@mstdn.socialT theeclecticdyslexic@mstdn.social

                                      @sinvega

                                      I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, mearely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.

                                      Any other interpretation is anthropomorphism all the way down.

                                      It might get things correct, but this is the same way random pictures of clocks may show the right time.

                                      @munin

                                      sabik@rants.auS This user is from outside of this forum
                                      sabik@rants.auS This user is from outside of this forum
                                      sabik@rants.au
                                      wrote last edited by
                                      #24

                                      @theeclecticdyslexic @sinvega @munin
                                      Another candidate is "confabulation"

                                      1 Reply Last reply
                                      0
                                      • munin@infosec.exchangeM munin@infosec.exchange

                                        So, something that's been bugging the shit out of me?

                                        These fucking assholes who let LLMs run rampant and delete prod?

                                        They query the LLM for "why" it did that.

                                        This is delusional behavior.

                                        LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                                        LLMs do not have the ability to have motivation. It is a machine.

                                        LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                                        which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                                        It cannot have a why;
                                        It cannot have a self to have motivations;
                                        And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                                        Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                                        Touch some grass and get a fucking therapist.

                                        technomagik@99finches.comT This user is from outside of this forum
                                        technomagik@99finches.comT This user is from outside of this forum
                                        technomagik@99finches.com
                                        wrote last edited by
                                        #25
                                        @munin If I do s/LLM/CEO/ (vimspeak for replace LLM with CEO), and the CEO has a Pavlovian programmed response to ignore their own humanity and empathy in service of personal profit, is the result really any different?

                                        There are days I think we'd all be better off with AI CEOs and union memberships, because as the union steward I could at least examine the model weight activation of why the AI did what it did.

                                        In human CEOs, I can only infer what that motivation might have been, where in an open model with transparent training data, I can mathematically determine what caused a particular token to be generated.

                                        Of course, in the current "state of the art" with proprietary models and training data sets, only the nation-state funded hackers really have the means and the motive to go inspecting why an LLM does what it does. Maybe that's not that much different than what the conspiracy theorists have been saying about the ruling class.
                                        1 Reply Last reply
                                        0
                                        • munin@infosec.exchangeM munin@infosec.exchange

                                          So, something that's been bugging the shit out of me?

                                          These fucking assholes who let LLMs run rampant and delete prod?

                                          They query the LLM for "why" it did that.

                                          This is delusional behavior.

                                          LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                                          LLMs do not have the ability to have motivation. It is a machine.

                                          LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                                          which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                                          It cannot have a why;
                                          It cannot have a self to have motivations;
                                          And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                                          Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                                          Touch some grass and get a fucking therapist.

                                          ysegrim@furry.engineerY This user is from outside of this forum
                                          ysegrim@furry.engineerY This user is from outside of this forum
                                          ysegrim@furry.engineer
                                          wrote last edited by
                                          #26

                                          @munin tbh, these already were my thoughts when people started reporting they had "tricked" a chatbot into "spilling its instructions".
                                          No. You successfully prompted the LLM into generating something that looks like instructions. They may be related to the actual system prompts and/or fine tuning data. But there's absolutely no guarantee.

                                          1 Reply Last reply
                                          0
                                          Reply
                                          • Reply as topic
                                          Log in to reply
                                          • Oldest to Newest
                                          • Newest to Oldest
                                          • Most Votes


                                          • Login

                                          • Login or register to search.
                                          • First post
                                            Last post
                                          0
                                          • Categories
                                          • Recent
                                          • Tags
                                          • Popular
                                          • World
                                          • Users
                                          • Groups