When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."

51 Posts 35 Posters 49 Views
neurovagrant@masto.deoan.org

When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."

I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.

sarah@phpc.social #32

@neurovagrant it still is the human. They just changed how they break things. Instead of breaking things themselves they trust a machine that does it.
jae@mastodon.bsd.cafe wrote:

@phil @neurovagrant @EndlessMason similar experience. humans can drive these models if they have a decent engineering/security understanding. i've got no issue with leveraging it to offload tedious tasks and operational burden.

but to your point on the human factor, there's been a lot of footgunning lately. even with principal staff getting lazy.

running models on an ada4000-20gb works pretty nicely, and way less power use than a DC or some 5090 monster i need a new circuit for

phil@fed.bajsicki.com #33

@jae@mastodon.bsd.cafe @neurovagrant@masto.deoan.org @EndlessMason@hachyderm.io
I just give the LLM some tools to read my journals, and then type my notes into my note git repo in a separate place.

https://codeberg.org/bajsicki/gptel-got

I have a bunch of rewrites locally, but they're not ready to go public until I test more and gain confidence.
neurovagrant@masto.deoan.org #34

Thank you to everyone saying "it's still the human."

No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.

This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.
tuban_muzuru@ohai.social #35

@neurovagrant

Turns out the weakest link was just waiting for a better prompt.
jae@mastodon.bsd.cafe #36

@phil @neurovagrant @EndlessMason that's really clever. i had a pile of links from the last 2 years. dedupe + sort + relevance tagging took ~10 minutes, which would have taken me a frustrating couple of days.

i like how you're clear on the disclaimer. i've seen others tout their tool as "military-grade secure" and i fall out of my chair
fennix@infosec.space #37

@neurovagrant

It's still a human; it's just shifted to the decision-making ones that mandate use of these systems.
jztusk@mastodon.social #38

@neurovagrant

I'd say it's still a human. But it's not the user, it's the product deployer.

In my worldview, responsibility always, and only, lands on humans.
tindrasgrove@infosec.exchange #39

@neurovagrant one of these days I need to sit down and write a blog post about how I have a blade that is cheap as hell, but safer than any other blade I've owned, and how that relates to… everything.
aeoncypher@lgbtqia.space #40

@neurovagrant How is that not still the human? Didn't humans decide to let AI run entire systems without anyone watching?
FFS, Tencent's shares just skyrocketed for saying they're deploying OpenClaw, which is _known_ to be destructive and to have massive security vulnerabilities.
oblomov@sociale.network #41

@neurovagrant and yet arguably the weakest point is still the human that decided to slopcode
cr0w@infosec.exchange wrote:

@neurovagrant

massive bong rip

Who decided to deploy the LLMs? It wasn't a computer...

huronbikes@cyberplace.social #42

@cR0w @neurovagrant "Stop, OpenCaw!"
earthshine@masto.hackers.town #43

@neurovagrant I mean it's still true. The weakest link is now the human that involves the LLM in the chain.
renardboy@mastodon.social #44

@neurovagrant Why do you surrender agency so readily?

We are and remain masters of our world.

So much of the slopocalypse is shitty CEOs catering to dumb investors who arrogantly yet wrongly think they know a damn thing about IT. All a very (if deplorably) human thing.

That said, your post is funny and I like it a lot.
faduda@mastodon.ie #45

@neurovagrant

The weakest link is the human who signed off on the LLM.
nieuemma@mastodon.de #46

@renardboy @neurovagrant no way. Nobody back home is going to believe me when I tell them I saw an actual bus
foriamcj@infosec.exchange #47

@neurovagrant

... The "Leader-shit" team that went all in on LLMs?
ripp_@chitter.xyz #48

@neurovagrant I do love how we have made computers susceptible to social engineering.

Great job all around, guys.

(Sarcastic)
nagaram@hachyderm.io #49

@neurovagrant

It's crazy how little of an issue it would be if:

1) AI CEOs weren't greedy about training data, so the bots wouldn't siphon corporate and private data to use as training data.

2) OpenAI didn't have a feature to make chats visible on the internet.

3) Microsoft didn't make a folder filled with screenshots of EVERYTHING YOU'VE EVER DONE.

And most importantly,

4) We stopped giving LLMs full fucking access to our computers, networks, and credit card information.

Like, there's absolutely no reason for them to be such a security risk. These are all things that could have been avoided if they'd just asked the opinion of one person who isn't sniffing a tech CEO's farts all day.

Now we have assholes like Pete Hegseth trying to super-glue ChatGPT to a Tomahawk missile!
computeforloot@twit.social #50

@neurovagrant 😂
phil@fed.bajsicki.com wrote:

@EndlessMason@hachyderm.io @neurovagrant@masto.deoan.org As a sidenote, I've seen things you wouldn't believe in the last few months that have me genuinely convinced that it's humans that made LLMs look bad, rather than LLMs being bad intrinsically (aside from the copyright issues, power drain, freshwater use, global warming, financial abuse, privacy issues, deals with government...).

The models (locally hosted, fitting on gaming GPUs) can fairly easily be made useful and helpful (a few days of effort after work) in menial tasks that can't be completed deterministically, provided basic oversight. They cost pennies, and they're private.

randomdamage@infosec.exchange #51

@phil @neurovagrant @EndlessMason you have to be smart enough to do the job without AI to be able to use the current generation of AI effectively and safely.

But that's not how it's being sold, and that's not how executives see the situation.

Which means this whole mess isn't an end-user failure (oh, if only the end users were smarter and more attentive, BUT THEY'RE NOT).

It's a management failure (not understanding their workers, and not understanding the tools they are making their workers use).