Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Cyborg)
  • No Skin
Collapse
Brand Logo

CIRCLE WITH A DOT

  1. Home
  2. Uncategorized
  3. So, something that's been bugging the shit out of me?

So, something that's been bugging the shit out of me?

Scheduled Pinned Locked Moved Uncategorized
74 Posts 32 Posters 2 Views
  • Oldest to Newest
  • Newest to Oldest
  • Most Votes
Reply
  • Reply as topic
Log in to reply
This topic has been deleted. Only users with topic management privileges can see it.
  • munin@infosec.exchangeM munin@infosec.exchange

    @jackryder

    It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.

    walrus@toot.walesW This user is from outside of this forum
    walrus@toot.walesW This user is from outside of this forum
    walrus@toot.wales
    wrote last edited by
    #29

    @munin @jackryder

    And their latest, biggest lie is that the LLMs can improve themselves recursively.

    Mathematics has proofs. Here's one... https://arxiv.org/html/2601.05280v2

    1 Reply Last reply
    0
    • munin@infosec.exchangeM munin@infosec.exchange

      So, something that's been bugging the shit out of me?

      These fucking assholes who let LLMs run rampant and delete prod?

      They query the LLM for "why" it did that.

      This is delusional behavior.

      LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

      LLMs do not have the ability to have motivation. It is a machine.

      LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

      which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

      It cannot have a why;
      It cannot have a self to have motivations;
      And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

      Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

      Touch some grass and get a fucking therapist.

      addison@nothing-ever.worksA This user is from outside of this forum
      addison@nothing-ever.worksA This user is from outside of this forum
      addison@nothing-ever.works
      wrote last edited by
      #30

      @munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, and has been characterised in a way such that they believe it can reason. When they then have their expectations subverted, they query for its reasoning, not understanding that it doesn't have this. It's more sad, like trying to reach for connection and reason where there is none.

      The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.

      munin@infosec.exchangeM wilbr@glitch.socialW badrihippo@fosstodon.orgB 3 Replies Last reply
      0
      • munin@infosec.exchangeM munin@infosec.exchange

        So, something that's been bugging the shit out of me?

        These fucking assholes who let LLMs run rampant and delete prod?

        They query the LLM for "why" it did that.

        This is delusional behavior.

        LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

        LLMs do not have the ability to have motivation. It is a machine.

        LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

        which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

        It cannot have a why;
        It cannot have a self to have motivations;
        And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

        Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

        Touch some grass and get a fucking therapist.

        kyonshi@dice.campK This user is from outside of this forum
        kyonshi@dice.campK This user is from outside of this forum
        kyonshi@dice.camp
        wrote last edited by
        #31

        @munin they can get a snazzy text of what an AI would sound like if it had conscience though.

        I do wonder if they actually double checked if what the AI told them is actually correct.

        munin@infosec.exchangeM 1 Reply Last reply
        0
        • munin@infosec.exchangeM munin@infosec.exchange

          So, something that's been bugging the shit out of me?

          These fucking assholes who let LLMs run rampant and delete prod?

          They query the LLM for "why" it did that.

          This is delusional behavior.

          LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

          LLMs do not have the ability to have motivation. It is a machine.

          LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

          which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

          It cannot have a why;
          It cannot have a self to have motivations;
          And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

          Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

          Touch some grass and get a fucking therapist.

          josephlord@union.placeJ This user is from outside of this forum
          josephlord@union.placeJ This user is from outside of this forum
          josephlord@union.place
          wrote last edited by
          #32

          @munin I absolutely agree. It isn’t the most egregious case I’ve seen though, at least they didn’t get it to write an apology letter to them and imagine that it had learned something from the event / letter. That one broke me.

          I’m not worried about the machines getting smarter, I’m worried about the people doing the opposite.

          munin@infosec.exchangeM 1 Reply Last reply
          0
          • munin@infosec.exchangeM munin@infosec.exchange

            So, something that's been bugging the shit out of me?

            These fucking assholes who let LLMs run rampant and delete prod?

            They query the LLM for "why" it did that.

            This is delusional behavior.

            LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

            LLMs do not have the ability to have motivation. It is a machine.

            LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

            which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

            It cannot have a why;
            It cannot have a self to have motivations;
            And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

            Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

            Touch some grass and get a fucking therapist.

            drangnon@hachyderm.ioD This user is from outside of this forum
            drangnon@hachyderm.ioD This user is from outside of this forum
            drangnon@hachyderm.io
            wrote last edited by
            #33

            @munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/

            (That said, 💯 on your post)

            munin@infosec.exchangeM r1rail@pouet.chapril.orgR 2 Replies Last reply
            0
            • drangnon@hachyderm.ioD drangnon@hachyderm.io

              @munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/

              (That said, 💯 on your post)

              munin@infosec.exchangeM This user is from outside of this forum
              munin@infosec.exchangeM This user is from outside of this forum
              munin@infosec.exchange
              wrote last edited by
              #34

              @draNgNon

              Multiple. And the lack of confidentiality is a huge fucking issue, and the lack of HIPAA compliance is another, and this is purely harming people.

              1 Reply Last reply
              0
              • addison@nothing-ever.worksA addison@nothing-ever.works

                @munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, and has been characterised in a way such that they believe it can reason. When they then have their expectations subverted, they query for its reasoning, not understanding that it doesn't have this. It's more sad, like trying to reach for connection and reason where there is none.

                The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.

                munin@infosec.exchangeM This user is from outside of this forum
                munin@infosec.exchangeM This user is from outside of this forum
                munin@infosec.exchange
                wrote last edited by
                #35

                @addison

                I do not give a shit about whatever excuses these assholes puke out.

                They made a series of considered choices that caused significant harm. There were ample opportunities to avoid this and they chose to continue.

                I do not care and I wish them as much pain as they can handle.

                addison@nothing-ever.worksA 1 Reply Last reply
                0
                • kyonshi@dice.campK kyonshi@dice.camp

                  @munin they can get a snazzy text of what an AI would sound like if it had conscience though.

                  I do wonder if they actually double checked if what the AI told them is actually correct.

                  munin@infosec.exchangeM This user is from outside of this forum
                  munin@infosec.exchangeM This user is from outside of this forum
                  munin@infosec.exchange
                  wrote last edited by
                  #36

                  @kyonshi

                  Fucking doubtful. Not a one of these slackasses ever cites "checking syslog"

                  1 Reply Last reply
                  0
                  • josephlord@union.placeJ josephlord@union.place

                    @munin I absolutely agree. It isn’t the most egregious case I’ve seen though, at least they didn’t get it to write an apology letter to them and imagine that it had learned something from the event / letter. That one broke me.

                    I’m not worried about the machines getting smarter, I’m worried about the people doing the opposite.

                    munin@infosec.exchangeM This user is from outside of this forum
                    munin@infosec.exchangeM This user is from outside of this forum
                    munin@infosec.exchange
                    wrote last edited by
                    #37

                    @JosephLord

                    The latter has absolutely already occurred; LLM usage is causing massive de-skilling across the industry.

                    1 Reply Last reply
                    0
                    • munin@infosec.exchangeM munin@infosec.exchange

                      @addison

                      I do not give a shit about whatever excuses these assholes puke out.

                      They made a series of considered choices that caused significant harm. There were ample opportunities to avoid this and they chose to continue.

                      I do not care and I wish them as much pain as they can handle.

                      addison@nothing-ever.worksA This user is from outside of this forum
                      addison@nothing-ever.worksA This user is from outside of this forum
                      addison@nothing-ever.works
                      wrote last edited by
                      #38

                      @munin@infosec.exchange What does that stance accomplish? If anything, this seems like a great position by which to further alienate ourselves as fringe "AI skeptics".

                      The reality is that the people with the greatest knowledge of how these systems work and how they were made (outside the scammers which sell them, of course) use these tools the least. If we take the time to teach people why not to use them and give them alternatives, we will actually reduce the harms. Otherwise, we will just be angry contrarians to be ignored.

                      munin@infosec.exchangeM 1 Reply Last reply
                      0
                      • addison@nothing-ever.worksA addison@nothing-ever.works

                        @munin@infosec.exchange What does that stance accomplish? If anything, this seems like a great position by which to further alienate ourselves as fringe "AI skeptics".

                        The reality is that the people with the greatest knowledge of how these systems work and how they were made (outside the scammers which sell them, of course) use these tools the least. If we take the time to teach people why not to use them and give them alternatives, we will actually reduce the harms. Otherwise, we will just be angry contrarians to be ignored.

                        munin@infosec.exchangeM This user is from outside of this forum
                        munin@infosec.exchangeM This user is from outside of this forum
                        munin@infosec.exchange
                        wrote last edited by
                        #39

                        @addison

                        Buddy, if you think I'm looking to win over -the fucking nimrods who are refusing to do basic diligence for their customers- then you have me completely fucking misunderstood.

                        addison@nothing-ever.worksA 1 Reply Last reply
                        0
                        • munin@infosec.exchangeM munin@infosec.exchange

                          @addison

                          Buddy, if you think I'm looking to win over -the fucking nimrods who are refusing to do basic diligence for their customers- then you have me completely fucking misunderstood.

                          addison@nothing-ever.worksA This user is from outside of this forum
                          addison@nothing-ever.worksA This user is from outside of this forum
                          addison@nothing-ever.works
                          wrote last edited by
                          #40

                          @munin@infosec.exchange My point is that the best way to get those customers their diligence is to actually start helping people, not directing your anger at people who are also victims of the scams of the AI industry. If your objective is just to be angry, then be angry! Just make sure it's at the right people.

                          munin@infosec.exchangeM 1 Reply Last reply
                          0
                          • arclight@oldbytes.spaceA arclight@oldbytes.space

                            @munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds or it fails or or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply deeply delusional, like believing socks are sentient because you made a sock puppet once.

                            smartmanapps@dotnet.socialS This user is from outside of this forum
                            smartmanapps@dotnet.socialS This user is from outside of this forum
                            smartmanapps@dotnet.social
                            wrote last edited by
                            #41

                            @arclight @munin
                            "believing socks are sentient because you made a sock puppet once" - so wait, you telling me the odds socks aren't a result of socks that made a break for it? 😂

                            1 Reply Last reply
                            0
                            • addison@nothing-ever.worksA addison@nothing-ever.works

                              @munin@infosec.exchange My point is that the best way to get those customers their diligence is to actually start helping people, not directing your anger at people who are also victims of the scams of the AI industry. If your objective is just to be angry, then be angry! Just make sure it's at the right people.

                              munin@infosec.exchangeM This user is from outside of this forum
                              munin@infosec.exchangeM This user is from outside of this forum
                              munin@infosec.exchange
                              wrote last edited by
                              #42

                              @addison

                              Perhaps before you lecture the woman who has made a career off of helping people stay safe in the face of malicious assholes, you might consider that her anger is well-targeted and specifically informed by her experiences.

                              1 Reply Last reply
                              0
                              • drangnon@hachyderm.ioD drangnon@hachyderm.io

                                @munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/

                                (That said, 💯 on your post)

                                r1rail@pouet.chapril.orgR This user is from outside of this forum
                                r1rail@pouet.chapril.orgR This user is from outside of this forum
                                r1rail@pouet.chapril.org
                                wrote last edited by
                                #43

                                @draNgNon @munin M-x doctor ?
                                I remember it from the 1990's

                                1 Reply Last reply
                                0
                                • arclight@oldbytes.spaceA arclight@oldbytes.space

                                  @munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds or it fails or or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply deeply delusional, like believing socks are sentient because you made a sock puppet once.

                                  rubinjoni@mastodon.socialR This user is from outside of this forum
                                  rubinjoni@mastodon.socialR This user is from outside of this forum
                                  rubinjoni@mastodon.social
                                  wrote last edited by
                                  #44

                                  @arclight @munin Socks live on feet. It would be weird if they were sentient.

                                  munin@infosec.exchangeM 1 Reply Last reply
                                  0
                                  • varpie@peculiar.floristV This user is from outside of this forum
                                    varpie@peculiar.floristV This user is from outside of this forum
                                    varpie@peculiar.florist
                                    wrote last edited by
                                    #45

                                    @petealexharris @munin Clearly you've never tried it yourself...

                                    resuna@ohai.socialR 1 Reply Last reply
                                    0
                                    • addison@nothing-ever.worksA addison@nothing-ever.works

                                      @munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, and has been characterised in a way such that they believe it can reason. When they then have their expectations subverted, they query for its reasoning, not understanding that it doesn't have this. It's more sad, like trying to reach for connection and reason where there is none.

                                      The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.

                                      wilbr@glitch.socialW This user is from outside of this forum
                                      wilbr@glitch.socialW This user is from outside of this forum
                                      wilbr@glitch.social
                                      wrote last edited by
                                      #46

                                      @addison @munin most of the discourse around this skips over the core problem: a hosting company encouraged its users to use AI to manage their servers, but stored staging and prod and their backups all on one volume, and allowed deletion of that volume without confirmation or warning.

                                      Schadenfreude aside, that's devops/ui/ux incompetence whether the operator at the controls is human or not. Deleting the staging database shouldn't delete the prod backups.

                                      addison@nothing-ever.worksA 1 Reply Last reply
                                      0
                                      • varpie@peculiar.floristV This user is from outside of this forum
                                        varpie@peculiar.floristV This user is from outside of this forum
                                        varpie@peculiar.florist
                                        wrote last edited by
                                        #47

                                        @petealexharris @munin Where did I say they'd understand a better prompt?

                                        1 Reply Last reply
                                        0
                                        • munin@infosec.exchangeM munin@infosec.exchange

                                          So, something that's been bugging the shit out of me?

                                          These fucking assholes who let LLMs run rampant and delete prod?

                                          They query the LLM for "why" it did that.

                                          This is delusional behavior.

                                          LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.

                                          LLMs do not have the ability to have motivation. It is a machine.

                                          LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:

                                          which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:

                                          It cannot have a why;
                                          It cannot have a self to have motivations;
                                          And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.

                                          Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.

                                          Touch some grass and get a fucking therapist.

                                          f4grx@chaos.socialF This user is from outside of this forum
                                          f4grx@chaos.socialF This user is from outside of this forum
                                          f4grx@chaos.social
                                          wrote last edited by
                                          #48

                                          @munin LLMs exploits the weakest flaw in human race: the tendency to anthropomorphize anything. It works so well that llm users dont even notice.

                                          resuna@ohai.socialR 1 Reply Last reply
                                          0
                                          Reply
                                          • Reply as topic
                                          Log in to reply
                                          • Oldest to Newest
                                          • Newest to Oldest
                                          • Most Votes


                                          • Login

                                          • Login or register to search.
                                          • First post
                                            Last post
                                          0
                                          • Categories
                                          • Recent
                                          • Tags
                                          • Popular
                                          • World
                                          • Users
                                          • Groups