So, something that's been bugging the shit out of me?

arclight@oldbytes.space wrote:

@munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds, or it fails, or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply, deeply delusional, like believing socks are sentient because you made a sock puppet once.

crowbriarhexe@tech.lgbt
#18

    @arclight @munin Don’t drag Tubey into this 😭

munin@infosec.exchange wrote:

      So, something that's been bugging the shit out of me?

      These fucking assholes who let LLMs run rampant and delete prod?

      They query the LLM for "why" it did that.

      This is delusional behavior.

      LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

LLMs do not have the ability to have motivation. They are machines.

LLMs, further, function by instantiating a new runtime, for each query, that reads the prompt and any cache, if one exists, from prior sessions:

      which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

      It cannot have a why;
      It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but is a new instance reading from its predecessor's notes.

      Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

      Touch some grass and get a fucking therapist.
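To make the statelessness point concrete, here is a minimal sketch in Python of how chat "memory" actually works at the API level. `fake_llm_complete` is an invented stand-in for a real completion call, not any vendor's API; the point is that every call starts cold and sees only the transcript it is handed.

```python
# Hypothetical illustration: no real LLM API is called here.
from dataclasses import dataclass, field

def fake_llm_complete(messages: list[dict]) -> str:
    # Stand-in for a chat-completion call. No state survives between
    # invocations; the model's entire "memory" is this argument.
    return f"(sampled continuation of a {len(messages)}-message transcript)"

@dataclass
class Conversation:
    messages: list[dict] = field(default_factory=list)

    def ask(self, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        # A fresh run reads the accumulated notes; it is not the "same"
        # instance that produced any earlier answer.
        reply = fake_llm_complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

convo = Conversation()
convo.ask("Why did you delete prod?")  # answered by one fresh run
convo.ask("No, really: why?")          # answered by another fresh run
```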

clarablackink@writing.exchange
#19

@munin It seems to tie in with the trend that started a while back (2016-2018) where people would respond to other people online by telling them to "go google it".

      It came from a place of frustration with reply guys but I think there was a cultural effect of people being told to fuck off and stop asking actual people questions.

      Curiosity deflected is often a social wound.

      Communities that handled dumb questions well always felt like places of refuge. Things shifted and it opened the door.

clarablackink@writing.exchange
#20

@munin Also, not arguing against anything you've said.

It's been sad seeing different online places slowly give over the human side of the community to various resources that paved the way towards LLM dependence.

I've been online since the 90s and folks have always had shitty moments, but the outsourcing of community knowledge to Google did seem to prime the folks who are more vulnerable in ways that LLMs are calibrated to cater to.

pikesley@mastodon.me.uk
#21

          @arclight @munin

          "I wrote 'I am a conscious being' on a piece of paper and put it in a photocopier. What happened next will shock you"

juergen_hubert@mementomori.social
#22

            @munin

            It's a cargo cult.

arclight@oldbytes.space
#23

              @crowbriarhexe @munin I had a sock puppet character that loved Brazilian steakhouse ("Fogo de Chão! MEAT ON SWORDS!") and was a huge proponent of self-betterment through community college. But that was me - the sock was just a vessel, a conduit.

theeclecticdyslexic@mstdn.social wrote:

                @sinvega

I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, merely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.

                Any other interpretation is anthropomorphism all the way down.

                It might get things correct, but this is the same way random pictures of clocks may show the right time.

                @munin
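The "statistical frequency, not meaning" point can be illustrated with a toy word-level Markov chain (a sketch of the idea, not how transformers work internally, and the corpus is invented): it emits fluent-looking continuations purely from observed co-occurrence counts, and any true statement it produces is a stopped-clock coincidence.

```python
# Toy illustration of continuation-by-frequency.
import random
from collections import defaultdict

corpus = ("the model deleted the database . the model wrote an apology . "
          "the engineer checked the logs . the logs showed the truth .").split()

# Table of observed next words for each word.
successors = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    successors[cur].append(nxt)

def continue_from(word: str, steps: int = 8) -> str:
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break
        word = random.choice(successors[word])  # sample by raw frequency
        out.append(word)
    return " ".join(out)

print(continue_from("the"))  # fluent-ish, meaning-free output
```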

sabik@rants.au
#24

                @theeclecticdyslexic @sinvega @munin
                Another candidate is "confabulation"

technomagik@99finches.com
#25
@munin If I do s/LLM/CEO/ (vimspeak for "replace LLM with CEO"), and the CEO has a Pavlovian programmed response to ignore their own humanity and empathy in service of personal profit, is the result really any different?

There are days I think we'd all be better off with AI CEOs and union memberships, because as the union steward I could at least examine the model's weights and activations to see why the AI did what it did.

With human CEOs, I can only infer what that motivation might have been, whereas in an open model with transparent training data, I can mathematically determine what caused a particular token to be generated.

Of course, in the current "state of the art" with proprietary models and training data sets, only the nation-state funded hackers really have the means and the motive to go inspecting why an LLM does what it does. Maybe that's not that much different from what the conspiracy theorists have been saying about the ruling class.
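For what it's worth, "determine what caused a particular token" is at least partially realizable on open models. Below is a rough sketch of one standard technique, gradient saliency, assuming the Hugging Face transformers library, with "gpt2" purely as a stand-in open model; saliency is an approximation of influence, not a full causal account.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The production database was deleted because",
          return_tensors="pt").input_ids

# Embed manually so gradients can flow back to the input positions.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits

# Backpropagate from the top-scoring next token.
next_id = logits[0, -1].argmax()
logits[0, -1, next_id].backward()

# Per-input-token saliency: gradient magnitude at each position.
saliency = embeds.grad[0].norm(dim=-1)
for token, score in zip(tok.convert_ids_to_tokens(ids[0]), saliency.tolist()):
    print(f"{token:>12}  {score:.4f}")
```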
ysegrim@furry.engineer
#26

@munin tbh, these already were my thoughts when people started reporting they had "tricked" a chatbot into "spilling its instructions".
No. You successfully prompted the LLM into generating something that looks like instructions. They may be related to the actual system prompts and/or fine-tuning data. But there's absolutely no guarantee.

munin@infosec.exchange wrote:

                      Also, putting as fine a fucking point on it as possible:

                      if you do not have the fucking logs to figure out what the LLM "did"

then you are incompetent and not suited to build or run the thing you are trying to do.

                      Get good, asshole. Take the time to learn the fucking skills.
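A minimal sketch of what "having the logs" can mean for an LLM agent; `TOOLS`, `run_tool`, and the log path are invented for illustration, not any specific framework. The principle: record every attempted action, before and after execution, so the post-incident question is answered from the audit trail rather than by interrogating the model.

```python
import json, time

AUDIT_PATH = "agent_audit.log"

def audit(record: dict) -> None:
    # Append-only JSON-lines audit trail.
    with open(AUDIT_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative tool registry; a real agent would register more of these.
TOOLS = {
    "drop_table": lambda table: f"refused: dropping {table} requires human review",
}

def run_tool(name: str, args: dict) -> str:
    # Log the intent BEFORE executing, so even a destructive action
    # leaves a record of what was attempted.
    audit({"ts": time.time(), "event": "attempt", "tool": name, "args": args})
    result = TOOLS[name](**args)
    audit({"ts": time.time(), "event": "result", "tool": name,
           "result": str(result)[:200]})
    return result

run_tool("drop_table", {"table": "prod.users"})
```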

fayedrake@furry.engineer
#27

@munin this is part of what makes me feel generally uncomfortable around the LLM crowd.

You look up a course on creating "advanced" "AI" systems and it's equivalent to the medieval sergeant with the tired expression trying to explain to the Young Lord that Yes The Troops Need to Eat No They Can't Forage Yes Supply Chains are Vital No They Can't Eat Their Horses.

sinvega@mas.to wrote:

@munin see also: "hallucinated". fucking STOP IT. It did not hallucinate anything; it is not even remotely capable of thinking or imagining or understanding anything at all, ever, and never will be. It just regurgitates shit that looks statistically similar to the words you put in.

                        I'm so tired of having to revise my expectations of people downwards AGAIN. I did not know a hole this deep was possible

mkj@social.mkj.earth
#28

                        @sinvega An alternative take is that the output of generative AI is *always* a "hallucination" *because by the widely used genAI-scope definition of that word, that's exactly what the software producing the output is designed to do*.

                        Whether the output happens to be correct or incorrect by some criteria is certainly not irrelevant when judging what *was* emitted, but that's a separate issue from *how* that output was generated.

And no, this is not in support of genAI.

                        @munin

munin@infosec.exchange wrote:

                          @jackryder

                          It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.

walrus@toot.wales
#29

                          @munin @jackryder

                          And their latest, biggest lie is that the LLMs can improve themselves recursively.

                          Mathematics has proofs. Here's one... https://arxiv.org/html/2601.05280v2

addison@nothing-ever.works
#30

@munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, something that has been characterised in a way that makes them believe it can reason. When their expectations are then subverted, they query it for its reasoning, not understanding that it has none. It's more sad, like trying to reach for connection and reason where there is none.

                            The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.

kyonshi@dice.camp
#31

@munin they can get a snazzy text of what an AI would sound like if it had a conscience, though.

I do wonder if they actually double-checked whether what the AI told them is correct.

josephlord@union.place
#32

@munin I absolutely agree. It isn't the most egregious case I've seen, though: at least they didn't get it to write an apology letter to them and imagine that it had learned something from the event / letter. That one broke me.

                                I’m not worried about the machines getting smarter, I’m worried about the people doing the opposite.

drangnon@hachyderm.io
#33

                                  @munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/

                                  (That said, 💯 on your post)

munin@infosec.exchange
#34

                                    @draNgNon

                                    Multiple. And the lack of confidentiality is a huge fucking issue, and the lack of HIPAA compliance is another, and this is purely harming people.

munin@infosec.exchange
#35

                                      @addison

                                      I do not give a shit about whatever excuses these assholes puke out.

                                      They made a series of considered choices that caused significant harm. There were ample opportunities to avoid this and they chose to continue.

                                      I do not care and I wish them as much pain as they can handle.

munin@infosec.exchange
#36

                                        @kyonshi

Fucking doubtful. Not a one of these slackasses ever cites "checking syslog".

munin@infosec.exchange
#37

                                          @JosephLord

                                          The latter has absolutely already occurred; LLM usage is causing massive de-skilling across the industry.
