
I have to say I find it weird how people who would usually defend stuff like Burp as "legitimate dual use tools" are now criticizing Anthropic for building a dual-use tool and at least trying to mitigate the harms.

Uncategorized · glasswing · 5 Posts, 2 Posters
hacksilon@infosec.exchange
#1

    I have to say I find it weird how people who would usually defend stuff like Burp as "legitimate dual use tools" are now criticizing Anthropic for building a dual-use tool and at least trying to mitigate the harms.

    I mean, sure, it might just be marketing, or a shakedown of software and exploit developers alike to get them to burn tokens, or an evil plan to make the world less secure, or something. Might all be the case.

    I personally tend to believe that most people aren't intentionally evil. They aren't sitting in their volcano lair cackling and watching the world burn. And if several people at Anthropic tell us that they are doing #Glasswing as harm reduction from a harm that, yes, their own tools are potentially causing, but that would otherwise be caused by OpenAI models (or DeepSeek, or whatever the next big model will be), then I tend to give them the benefit of the doubt that this is really what they believe, and that harm reduction is really what they want to achieve with this.

    Also, regardless of anything else: I believe it is a good thing if people are spending millions of dollars making open source ecosystems more secure. We could really use that, if it is done responsibly, and working together with the maintainers to give them the tools to find and confirm these issues, and not just dumping thousands of unverified reports on them, is the way I would like to see it done.

hacksilon@infosec.exchange
#2

      For the record, I just pointed Claude Code with Opus 4.6 at an open-source project with >35k GitHub stars, and it found a TOCTOU DNS rebinding vulnerability within five minutes. These models are already here; regardless of how we feel about it, we have to deal with it. (I verified that the vulnerability works and reported it, which took roughly an hour, i.e., a hell of a lot longer than finding it, but I'm not going to contribute to the slop problem myself.)
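      For readers unfamiliar with the bug class: a TOCTOU DNS rebinding issue typically looks like the sketch below, where a hostname is vetted after one DNS resolution and then resolved again at use time. This is an illustrative minimal example, not code from the project in question:

```python
import ipaddress
import socket


def is_public(host: str) -> bool:
    """Time-of-check: resolve the hostname once and vet the address."""
    addr = socket.getaddrinfo(host, 443)[0][4][0]
    return ipaddress.ip_address(addr).is_global


def fetch(host: str) -> None:
    """Fetch a URL only if the host resolved to a public address."""
    if not is_public(host):
        raise ValueError("private address blocked")
    # Time-of-use: the HTTP client resolves the hostname a *second* time.
    # An attacker-controlled DNS server with a short TTL can answer with a
    # public IP during the check, then with 127.0.0.1 here, bypassing the
    # filter and reaching internal services (the rebinding).
    # urllib.request.urlopen(f"http://{host}/")  # deliberately not executed
```

      The fix is to resolve once, validate the resulting IP, and connect to that pinned IP (sending the original hostname in the Host/SNI fields), so check and use see the same address.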


hacksilon@infosec.exchange
#3

        Aaaand it looks like I'll have to eat my words. Even though it's deeply embarrassing, I'll post it to keep myself honest.

        The vulnerability I found was not actually valid. I thought I had produced a working PoC, but I used a weak heuristic and the perceived confirmation hinged on a misreading of a log entry, with some confirmation bias at play. It turns out that the code was not actually insecure, and a closer reading of the code would have shown this to be the case.

        So, yay, I *did* contribute to the slop problem after all. I apologized to the maintainer and sent them a tip via GitHub Sponsors as a further apology.

        Lessons learned: There is no replacement for a proper PoC, and for critically reading the code yourself (which, again, I actually did, but was blinded by confirmation bias). Plus, I should probably not be doing this kind of thing while I have a cold.

        Also, if I had insisted on attacking this problem with more use of LLMs, this whole thing could probably have been prevented by having the LLM write the finding to a file, opening a second instance without the chat history, and asking it to confirm or disprove the finding based on code analysis. (Which is exactly what Anthropic does in their own work as a first line of defense against false positives, even before a human looks at it.)
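        That fresh-instance cross-check can be sketched roughly like this. This is a minimal illustration, not Anthropic's actual pipeline; `ask_model` is a hypothetical placeholder for whatever fresh model session you would wire up:

```python
from pathlib import Path


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a *fresh* model session (swap in a real
    API or CLI call). Fresh means: no access to the chat history that
    produced the finding, so it cannot inherit the first pass's
    confirmation bias."""
    raise NotImplementedError("wire up your model client here")


def write_finding(path: Path, finding: str) -> Path:
    """First pass: persist the candidate finding to a file."""
    path.write_text(finding, encoding="utf-8")
    return path


def build_review_prompt(finding_path: Path, code_dir: str) -> str:
    # Second pass: only the written finding and the code go into the
    # prompt, never the original transcript.
    return (
        f"Read the code under {code_dir} and the finding below. "
        "Confirm or disprove it strictly from code analysis.\n\n"
        + finding_path.read_text(encoding="utf-8")
    )
```

        The point of the design is the missing ingredient, not the added one: by reconstructing context from the file alone, the reviewing instance has no stake in the original conclusion.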

        So, yeah, Opus 4.6 is quite good, but definitely still not good enough to trust without a good test harness and forcing it to produce a working PoC.


hillu@infosec.exchange
#4

          @hacksilon Sooooo… Dan Guido's claim that Trail of Bits achieved 200 bugs per week with AI-assisted humans should probably be taken with a grain of salt.


hacksilon@infosec.exchange
#5

            @hillu Not necessarily. I would assume that Trail of Bits does a better job building PoCs than I do on my sofa with a headache from a cold. Anything else would deeply surprise me, as I have found their work to be extremely professional and thorough so far.

            Also, "AI enabled us to find 200 bugs per week" does not equal "AI found 200 bugs that we then confirmed." Anecdotally, the greatest help AI has been to me in my security work has been: "Here's an entire codebase that I know nothing about, written with a framework I'm not familiar with. I know that somewhere in there is the place where [feature X] is implemented. Find this place for me. Then explain line by line how [specific thing] works." So it reduces the overhead of having to trawl through 500 Java files and follow 8 references to finally get to the point where the thing you actually want to know is buried.

            I'm pretty sure use of AI will not take me from 15 to 200 bugs a week, but then again, I am not a pentester (my focus is on security architecture), so I have no idea what it's like for Trail of Bits. I would not use what I did today as evidence that the statement is false or misleading.
