The AI slop security reporting is basically extinct. It almost does not happen anymore. At all.
-
@bagder Didn't you share one just 2 days ago though? hackerone.com/reports/3669305
-
I want to emphasize this because when I talk about AI security reports now, half my readers seem to believe those are AI slop. They're not. They are found with AI tools and are normally high-quality bug reports.
The weakest part is that they tend to overstress the vulnerability angle. Lots of them are well-phrased bug reports that are still "just bugs".
@bagder I see
- good ones using AI as part of a rigorous process with replication
- mediocre ones where someone asked an AI "Find me a CVE", submits the report without review or replication, and yet still expects credit
-
If "have write access to the filesystem" is a prerequisite to an exploit: it's not an exploit. You already have total ownership of the server.
-
The AI slop security reporting is basically extinct. It almost does not happen anymore. At all.
@bagder Can't wait for your next graph

-
I want to emphasize this because when I talk about AI security reports now, half my readers seem to believe those are AI slop. They're not. They are found with AI tools and are normally high-quality bug reports.
The weakest part is that they tend to overstress the vulnerability angle. Lots of them are well-phrased bug reports that are still "just bugs".
@bagder Do reporters share the tools used, or are there strong tool indicators in the reports?
Curious about which tool(s) are most successful, at least for cURL research.
I imagine in most cases reporters don't mention the tools used (especially if custom), which is unfortunate.
-
The AI slop security reporting is basically extinct. It almost does not happen anymore. At all.
@pozorvlak To me, the most interesting part of that thread was this post.
This person considers AI their enemy. But not because it is wasting Stenberg's time. They wanted it to continue to waste Stenberg's time, so that they could continue to hate it more.

-
@pozorvlak To me, the most interesting part of that thread was this post.
This person considers AI their enemy. But not because it is wasting Stenberg's time. They wanted it to continue to waste Stenberg's time, so that they could continue to hate it more.

@pozorvlak Now I think a more reasonable interpretation is: they are concerned about copyright violations, environmental damage, etc., and are dismayed that people like me use AI anyway. The fact of its getting better doesn't fix the other problems, and just means that there are fewer arguments against using it.
(“This is terrible” vs. “This is terrible, maybe when people realise that it doesn't work, they will stop.”)
-
@utopiah probably, but also because the AIs can't really tell
@bagder sure, ironically enough there is no "I" in AI.
-
@pozorvlak Now I think a more reasonable interpretation is: they are concerned about copyright violations, environmental damage, etc., and are dismayed that people like me use AI anyway. The fact of its getting better doesn't fix the other problems, and just means that there are fewer arguments against using it.
(“This is terrible” vs. “This is terrible, maybe when people realise that it doesn't work, they will stop.”)
@mjd I think so. But also, if all AI-generated bug reports are useless, you can stop reading as soon as you've decided a bug report came from an AI.
-
@mjd I think so. But also, if all AI-generated bug reports are useless, you can stop reading as soon as you've decided a bug report came from an AI.
@pozorvlak If that were the reason, wouldn't they want the reports to be as good as possible, and be glad if the reports were all worth reading? But this person says they are disappointed!
-
@pozorvlak If that were the reason, wouldn't they want the reports to be as good as possible, and be glad if the reports were all worth reading? But this person says they are disappointed!
@mjd ah, good point. Reliably bad reports waste a small amount of time, but more than zero. The worst case is reports that are only sometimes good, because then you have to read them all carefully.
-
@pozorvlak Now I think a more reasonable interpretation is: they are concerned about copyright violations, environmental damage, etc., and are dismayed that people like me use AI anyway. The fact of its getting better doesn't fix the other problems, and just means that there are fewer arguments against using it.
(“This is terrible” vs. “This is terrible, maybe when people realise that it doesn't work, they will stop.”)
Yes, it would be nice if we stopped building hell so people can roast a few marshmallows. Marshmallows are nice, but not that nice.
CC: @pozorvlak@mathstodon.xyz
-
I want to emphasize this because when I talk about AI security reports now, half my readers seem to believe those are AI slop. They're not. They are found with AI tools and are normally high-quality bug reports.
The weakest part is that they tend to overstress the vulnerability angle. Lots of them are well-phrased bug reports that are still "just bugs".
@bagder you're lucky. I got 30+ yesterday. 1 was kind of credible. The others were effectively documented behaviors of projects.
There's still little to no consequence for wasting time - I've been thinking about the "name and shame" approach you have, maybe that helps change the behavior?
-
I want to emphasize this because when I talk about AI security reports now, half my readers seem to believe those are AI slop. They're not. They are found with AI tools and are normally high-quality bug reports.
The weakest part is that they tend to overstress the vulnerability angle. Lots of them are well-phrased bug reports that are still "just bugs".
@bagder I wonder how much of that is because you eliminated the bounty
-
@pozorvlak To me, the most interesting part of that thread was this post.
This person considers AI their enemy. But not because it is wasting Stenberg's time. They wanted it to continue to waste Stenberg's time, so that they could continue to hate it more.
@mjd@mathstodon.xyz @pozorvlak@mathstodon.xyz I mean, it’s terrible for the environment, has loads of ethical and moral concerns, and the companies are completely unsustainable. It’s pretty easy to hate
-
The AI slop security reporting is basically extinct. It almost does not happen anymore. At all.
@bagder Unfortunately that hasn't made it to Flask yet, we still get a bunch of AI slop. About 50 reports so far this year, none helpful. Typically we get < 10 per year, some helpful.
-
The AI slop security reporting is basically extinct. It almost does not happen anymore. At all.
@bagder Seems like all you need to do is take away the incentive to get rid of the low effort reports.
Sad they had to ruin it for real reporters now as they don’t get their (deserved) bounty anymore in exchange for the good work they’re doing.
-