I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs.

hsivonen@mastodon.social
#1

    I’m seeing a lot of denial and logical fallacies on Mastodon about LLM capability to find security bugs.

    I get it that when folks have concluded that LLMs are harmful, they want to believe that LLMs fail at everything. But a list of correctly-identified bad things about LLMs does not logically imply that LLMs can’t find security bugs.

hsivonen@mastodon.social
#2

And, yes, the Anthropic Mythos post fits a previously-seen pattern of “AI” companies marketing by danger, but saying that it’s marketing does not refute what the models that are already generally available can do.

      And people act like their own conjecture is more informative than what people from multiple projects that deal with security bug reports say. See e.g. https://mastodon.social/@bagder/116363034479757682 .

hsivonen@mastodon.social
#3

        Then there’s the dismissal that, yes, LLMs now find security bugs, but the bugs could have been found by other methods. But evidently defenders hadn’t actually found them by other methods. (Unknown what attackers had already found.)

Or folks find it objectionable that the new capability has been made available to attackers and that the proposed cure is to pay for access to the same LLM. But that does not make the existence of the capability untrue.

hsivonen@mastodon.social
#4

Or folks go LOL at security incidents or code quality at an LLM company. Irrelevant to whether their model can find security bugs. The way this works is that you have a non-LLM oracle like ASan (AddressSanitizer). If the model found a way to trigger the oracle, then it’s not really productive to argue that it didn’t.
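
To make the oracle idea concrete, here is a minimal illustrative sketch (not from the original posts): a C heap overflow that AddressSanitizer deterministically reports when the program is built with -fsanitize=address. If a model produces an input that makes such a check fire, the finding verifies itself; no one has to take the model’s word for it.

```c
/* Minimal sketch of an ASan oracle. Build and run with:
 *   clang -fsanitize=address -g overflow.c -o overflow
 *   ./overflow AAAAAAAAAAAAAAAAAAAA
 * AddressSanitizer aborts with a heap-buffer-overflow report, so the
 * "is this a real bug?" question is answered by the tool, not by opinion. */
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    char *buf = malloc(16);      /* 16-byte heap buffer */
    if (argc > 1)
        strcpy(buf, argv[1]);    /* bug: no length check; an argument longer
                                    than 15 bytes overflows buf and trips
                                    ASan's redzone */
    free(buf);
    return 0;
}
```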

          Why even post this considering the predictable hate? Because denial about the situation does not make users safer from attacks.

turre@mementomori.social
#5

@hsivonen Well. When those companies have touted and pushed their AI thingies at a thousand things they're unsuited for, that kinda sets the expectations.

Most of us are just so bloody fucken tired of hearing AI AI AI AI everywhere. You tune it out or go crazy. And so even the one thing it might actually be good at gets missed, because folks are no longer listening. It's all so fantastically stupid.

sayrer@mastodon.social
#6

@hsivonen If you haven't run it on your own code, you're missing out. Once you do that, it's hard to argue about it.

gabrielesvelto@mas.to
#7

@hsivonen Isn't fuzzing a numbers game, though? LLMs are fuzzers backed by billions; they'll absolutely find something, but so would anything else given the same resources and no restraint on how to spend them, no matter how wasteful.

freddy@social.security.plumbing
#8

@gabrielesvelto @hsivonen Not really. Some bugs are truly hard to find with fuzzing and are more easily identified by spotting a code smell and tracing it back to user input. Reading and remembering code is limited by brain power / will power. As sad as it is: LLMs scale better here.
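
As an illustrative sketch of the kind of smell meant here (a made-up fragment with a hypothetical parse_record, not code from the thread): a length field taken from the input and checked against the input, but never against the destination. A fuzzer only finds this if its mutations happen to reach this code with a large length; a reader, human or LLM, can spot the missing bound directly.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical record parser illustrating the smell. The wire format is
 * [1-byte len][len bytes of payload], and `len` is attacker-controlled.
 * The caller supplies a 32-byte output buffer. */
int parse_record(const uint8_t *data, size_t size, uint8_t out[32]) {
    if (size < 1)
        return -1;
    uint8_t len = data[0];
    if (size < 1 + (size_t)len)
        return -1;
    /* Smell: `len` can be up to 255, but `out` holds only 32 bytes.
     * The input is bounds-checked; the destination is not. */
    memcpy(out, data + 1, len);   /* buffer overflow whenever len > 32 */
    return len;
}
```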

hsivonen@mastodon.social
#9

@freddy @gabrielesvelto Also, it looks to me like fuzzing requires more human setup: deciding which part of the code to fuzz and how to deal with stuff like checksums, whereas reportedly LLMs can work with less specific harnesses and figure out how to fill in checksums on their own.
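
The checksum point in practice, as a sketch (assuming a hypothetical parse_packet whose format ends with a CRC-32 of the body, and a little-endian checksum encoding): a conventional libFuzzer harness has to patch the checksum by hand, or virtually every mutated input dies at the integrity check before reaching the interesting code.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <zlib.h>   /* crc32(); link with -lz */

int parse_packet(const uint8_t *data, size_t size);  /* hypothetical target */

/* libFuzzer entry point: build with clang -fsanitize=fuzzer,address.
 * Without the fix-up below, mutated inputs would almost always fail the
 * CRC check and never exercise the parser proper. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    uint8_t buf[4096];
    if (size < 4 || size > sizeof(buf))
        return 0;
    memcpy(buf, data, size);
    /* Recompute the trailing CRC-32 over the body so the mutated input
     * passes the format's integrity check (little-endian assumed). */
    uint32_t crc = (uint32_t)crc32(0L, buf, (uInt)(size - 4));
    memcpy(buf + size - 4, &crc, 4);
    parse_packet(buf, size);
    return 0;
}
```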

gabrielesvelto@mas.to
#10

@hsivonen @freddy Yeah, but we're talking resources here. How much fuzzing and analysis would a few billion dollars buy? A few tens of billions? Remember that the total capex behind these technologies over the past three years is now in the 13-digit range. Spend that money on anything and it will fly.

freddy@social.security.plumbing
#11

@gabrielesvelto @hsivonen Yep, this is still largely subsidized by cheap inference and essentially free training (for the consumer). I wouldn't bet on it staying this cheap.

marshray@infosec.exchange
#12

@hsivonen “Quick, get the torches and pitchforks!
Someone suggested that LLMs could in some way be useful.”
