@http_error_418 @bagder
The original Coverity paper claimed, as I recall, 300 CVEs. I'm not sure what the severity distribution was, but that seems like a lot more than Mythos found, and they probably used less compute than a single Mythos query.
The problem with any static analyser, whether it's based on formal reasoning or pattern recognition, is that it will be unsound: it will report false positives, in contrast with dynamic analyses, which are incomplete and have false negatives. The LLM-based tools are no different in this respect. When I asked Claude for a 'comprehensive code review' of one of my projects, the only serious bug in its top ten was one that already had an open PR to fix it, and two of the others were not only not bugs, they were intentional design choices: doing things the other way would have caused serious performance regressions without fixing anything.
The thing that does make Mythos different is that it tries to build a PoC exploit. This will reduce the false positive rate, at the expense of creating false negatives (if it can't produce a PoC, you ignore it).
When I've used Coverity on a large project, it found tens of thousands of reports, most of which were false positives, so it took a lot of effort to find the ones that were actually important bugs. Something that produces PoCs automatically would help with this a lot.
The baseline data point I'd really like to see is something that integrates the clang analyser with libFuzzer. For each report the analyser produces, insert instrumentation at the branches along the control-flow path it identifies, then automatically drive the fuzzer to try to trigger the code paths that the analyser flagged as potential issues.
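To make that concrete, here's a minimal sketch of what the fuzzer side might look like. Everything here is hypothetical: `parse_record` stands in for real project code, and the two flagged branches and the counter array are what an automated tool would insert after reading the analyser's path report, so the fuzzer's feedback loop can steer inputs toward the reported site.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical instrumentation: counters at the two branches on the
// analyser-reported path. In a real pipeline a tool would insert these
// automatically from the analyser's diagnostic, so coverage feedback
// rewards inputs that get closer to the flagged site.
static unsigned long long g_branch_hits[2];

// Stand-in for real project code that the analyser flagged.
static int parse_record(const uint8_t *data, size_t size) {
  if (size < 2) return 0;
  if (data[0] == 0x7f) {      // branch A on the analyser's path
    ++g_branch_hits[0];
    if (data[1] > 0xf0) {     // branch B: the reported issue site
      ++g_branch_hits[1];
      return 1;               // where the potential bug would trigger
    }
  }
  return 0;
}

// libFuzzer entry point; build the real thing with -fsanitize=fuzzer.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_record(data, size);
  return 0;
}
```

Reaching branch B then becomes an objective the fuzzer can optimise for, rather than something you hope random mutation stumbles into.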
The default settings for the clang analyser are compilation-unit-at-a-time, with reduced bounds on loop iteration counts to avoid using enormous amounts of memory. If you're willing to spend as much money as it costs to operate the LLM-based tools, you can use the cross-translation-unit mode and raise those limits substantially. Running it configured to use an amount of RAM comparable to the GPUs that the Anthropic models run on would let you do a lot of symbolic execution.
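As a rough illustration of the knobs involved (flag names from recent clang/scan-build releases; the CTU mode is still marked experimental, needs a pre-built directory of AST dumps, and the paths here are placeholders):

```shell
# Single-TU run with a higher loop-visit bound than the default of 4;
# -maxloop trades memory and time for deeper symbolic execution.
scan-build -maxloop 64 make

# Cross-translation-unit analysis: point the analyser at a directory of
# pre-built AST dumps so it can follow calls into other TUs.
clang --analyze \
  -Xclang -analyzer-config \
  -Xclang experimental-enable-naive-ctu-analysis=true,ctu-dir=./ctu-cache \
  foo.c
```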