When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."

Uncategorized
51 Posts 35 Posters 49 Views
  • neurovagrant@masto.deoan.org wrote:

    Thank you to everyone saying "it's still the human."

    No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.

    This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.

    jztusk@mastodon.social wrote (#38):

    @neurovagrant

    I'd say it's still a human. But it's not the user, it's the product deployer.

    In my worldview, responsibility always, and only, lands on humans.

    • tindrasgrove@infosec.exchange wrote (#39):

      @neurovagrant one of these days I need to sit down and write a blog post about how I have a blade that is cheap as hell, but safer than any other blade I’ve owned, and how that relates to… everything.

      • aeoncypher@lgbtqia.space wrote (#40):

        @neurovagrant How is that not still the human? Didn't humans decide to let AI run entire systems without anyone watching?
        FFS, Tencent's shares just skyrocketed for saying they're deploying OpenClaw, which is _known_ to be destructive and to have massive security vulnerabilities.

        • neurovagrant@masto.deoan.org wrote:

          When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."

          I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.

          oblomov@sociale.network wrote (#41):

          @neurovagrant and yet arguably the weakest point is still the human that decided to slopcode

          • cr0w@infosec.exchange wrote:

            @neurovagrant

            massive bong rip

            Who decided to deploy the LLMs? It wasn't a computer...

            huronbikes@cyberplace.social wrote (#42):

            @cR0w @neurovagrant "Stop, OpenCaw!"

            • earthshine@masto.hackers.town wrote (#43):

              @neurovagrant I mean it's still true. The weakest link is now the human that involves the LLM in the chain.

              • renardboy@mastodon.social wrote (#44):

                @neurovagrant Why do you surrender agency so readily?

                We are and remain masters of our world.

                So much of the slopocalypse is shitty CEOs catering to dumb investors who arrogantly yet wrongly think they know a damn thing about IT. All a very (if deplorably) human thing.

                That said, your post is funny and I like it a lot.

              • faduda@mastodon.ie wrote (#45):

                  @neurovagrant

                  The weakest link is the human who signed off on the LLM.

                • nieuemma@mastodon.de wrote (#46):

                    @renardboy @neurovagrant no way. Nobody back home is going to believe me when I tell them I saw an actual bus

                • foriamcj@infosec.exchange wrote (#47):

                      @neurovagrant

                      ... The "Leader-shit" team that went all in on LLMs?

                  • ripp_@chitter.xyz wrote (#48):

                        @neurovagrant I just love how we have made computers susceptible to social engineering.

                        Great job all around, guys.

                        (Sarcastic)

                    • nagaram@hachyderm.io wrote (#49):

                          @neurovagrant

                          It's crazy how little of an issue it would be if:

                          1) AI CEOs weren't greedy about training data, so the bots wouldn't siphon corporate and private data to use as training data.

                          2) OpenAI didn't have a feature to make chats visible on the internet.

                          3) Microsoft didn't make a folder filled with screenshots of EVERYTHING YOU'VE EVER DONE.

                          And most importantly:

                          4) We stopped giving LLMs full fucking access to our computers, networks, and credit card information.

                          Like, there's absolutely no reason for them to be such a security risk. These are all things they could have caught if they'd just asked the opinion of one person who isn't sniffing a tech CEO's farts all day.

                          Now we have assholes like Pete Hegseth trying to super glue ChatGPT to a tomahawk missile!

                      • computeforloot@twit.social wrote (#50):

                            @neurovagrant 😂

                    • phil@fed.bajsicki.com wrote:

                      @EndlessMason@hachyderm.io @neurovagrant@masto.deoan.org As a sidenote, I've seen things you wouldn't believe in the last few months that have me genuinely convinced that it's humans that made LLMs look bad, rather than LLMs being bad intrinsically (aside from the copyright issues, power drain, freshwater use, global warming, financial abuse, privacy issues, deals with government...).

                      The math models (locally hosted, fitting on gaming GPUs) can fairly easily be made useful and helpful (a few days of effort after work) in menial tasks that can't be completed deterministically, provided basic oversight. They cost pennies, and they're private.

                      randomdamage@infosec.exchange wrote (#51):

                        @phil @neurovagrant @EndlessMason you have to be smart enough to do the job without AI to be able to use the current generation of AI effectively and safely.

                        But that's not how it's being sold, and that's not how executives see the situation.

                        Which means this whole mess isn't an end user failure (oh, if only the end users were smarter and more attentive, BUT THEY'RE NOT).

                        It's a management failure (not understanding their workers, and not understanding the tools they are making their workers use).
