I’m willing to believe that Anthropic built a better SAST.

Uncategorized · 6 Posts · 2 Posters
neilmadden@infosec.exchange
#1

    I’m willing to believe that Anthropic built a better SAST. But that’s a total market of about $5B tops according to Google (some estimates seem to be just $0.5B) – it’s going to take a while to pay off their $30B Series G if they keep targeting these relatively tiny markets.

As with targeting developer productivity (another famously quite small market), they are focused on these markets because existing automated “bullshit-corrector” tools are available: in software development, type checkers, linters, testing frameworks, etc.; for memory corruption bugs, they apparently leant heavily on ASan to weed out the false positives.
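The ASan-as-oracle idea amounts to a simple triage filter: a candidate finding survives only if its reproducer actually trips the sanitizer. A minimal hypothetical sketch of that filter (the `Finding` type, `triage` function, and exit-code convention are my illustration, not Anthropic's actual pipeline):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    location: str                   # e.g. "buf.c:42 heap-buffer-overflow"
    reproducer: Callable[[], int]   # runs the PoC, returns a process exit code

def triage(findings: List[Finding]) -> List[Finding]:
    """Keep only findings whose PoC fails under the sanitizer.

    An ASan-instrumented binary aborts with a non-zero exit code when it
    detects a memory error, so a clean (zero) exit marks the candidate
    as a likely false positive."""
    return [f for f in findings if f.reproducer() != 0]

# Toy demo: one "real" bug (non-zero exit) and one false positive.
candidates = [
    Finding("parse.c:118 use-after-free", lambda: 1),
    Finding("init.c:12 uninitialized read", lambda: 0),
]
confirmed = triage(candidates)
print([f.location for f in confirmed])  # → ['parse.c:118 use-after-free']
```

In a real setup the reproducer would execute an ASan-instrumented build of the target, which is exactly what makes memory-safety bugs unusually easy to verify automatically compared with, say, logic flaws.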

Anyone who’s ever used a SAST on a mature code base knows that reducing false positives is the number-one priority.

    Also, in a parallel to recent articles about coding agents, finding vulnerabilities is not the bottleneck.

neilmadden@infosec.exchange
#2

      To be honest though, with quoted figures of $10-20,000 to find each of these vulns, I don’t think they’re going after the defender market...

hacksilon@infosec.exchange
#3

@neilmadden to be fair to them: an entire campaign cost $20k, but each campaign found more than one bug, so the price per bug is much lower. In a talk, one of their researchers said he’s sitting on 100+ high-confidence findings from their Linux kernel runs alone that he hasn’t yet had time to verify and report to the maintainers. Of course, that’s still a lot of money per bug, no doubt about it, but not quite the $20k you are quoting.

neilmadden@infosec.exchange
#4

@hacksilon yeah, for the OpenBSD bug they mention a “few dozen” other findings. But if they were good findings, I think they would have said something about them. The fact that they mention them only as an aside, with no elaboration, suggests to me that these other findings are probably a bit “meh”, but we’ll wait and see. Hopefully we’ll see the full list eventually, once disclosure has run its course.

hacksilon@infosec.exchange
#5

@neilmadden yes. According to them, that should be in 60+15 days, IIRC. What gives me some hope that this isn’t pure marketing is people like Daniel Stenberg reporting a steep increase in the quality of AI-reported issues (https://mastodon.social/@bagder/116362046377975050), although he also says that no one from Glasswing was in touch, so who knows where those are coming from.

neilmadden@infosec.exchange
#6

              @hacksilon I’m also super interested in how well it generalises to non-memory-safety vulns. How load-bearing is ASan as a quality gate here, and what other classes of vulns have similar oracles?
