There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit.
Only rarely do their claims actually bear scrutiny, and even then it's only the mildest of the claims they make.
So, Anthropic is claiming that their new, secret, unreleased model is hyper-competent at finding computer security vulnerabilities, and that they're *too scared* to release it into the wild.
Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the codebase of the poster-child application for "agentically" developed software.
You may now resume doom scrolling. Thank you.
@jenniferplusplus Big AI is making all AI look bad.
-
@jedimb and the alternative is?
@budududuroiu @mirth @jenniferplusplus What we had just a few years ago.
-
@jedimb yeah well that ship has sailed long ago.
-
@budududuroiu @mirth @jenniferplusplus "The plague is here. Let's just live with it" does seem to be a recurring sentiment, but it doesn't change that it's a plague.
-
@jedimb norms are downstream from power. The current power balance is shifted towards frontier labs and hyperscalers; norms around personal computing (RAM prices) and open source software (AI slop floods) are dictated by them.
Moralising about AI use with no power to back it up is useless. Gatekeeping is power, because it says: want to contribute to this project? Abide by our rules.
The case for gatekeeping, or: why medieval guilds had it figured out
Every open source maintainer I've talked to in the last six months has the same complaint: the absolute flood of mass-produced, AI-generated, mass-submitted slop requests has turned their repositories into a slush pile. The contributions look like contributions: they have commit messages, they reference issues, they follow templates, etc.
Westenberg. (www.joanwestenberg.com)
-
@dngrs Well, you're partly correct, partly wrong. Yes, pretrained transformers are, like all generative models, definitionally modelling a joint probability distribution, and autoregressively generating from it.
Those are the models you're referring to as autocomplete tools, which is why you had to use `[MASK]` with early transformers like BERT to get them to complete the most probable token.
Regardless, it doesn't matter what Anthropic did: if it allows for a massive reduction in the cost of finding zero-days, it's a problem. It doesn't have to be revolutionary; it doesn't have to be superintelligence, AGI, or whatever flashy woo-woo marketing term. If the cost of computing protein folding drops, as with the OpenFold implementation of AlphaFold, that isn't revolutionary, but it is still dangerous, since you now potentially have lone actors able to make prions at home (I'm using this as an absurd but plausible case).
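To spell out the distinction being gestured at here: autoregressive models factor the joint distribution by the chain rule and sample left to right, while masked models like BERT predict a held-out token from the visible context. A minimal sketch of the two objectives (notation assumed, not from the thread):

```latex
% Autoregressive LMs: chain-rule factorization, sampled one token at a time.
p(x_1, \ldots, x_n) = \prod_{t=1}^{n} p\left(x_t \mid x_1, \ldots, x_{t-1}\right)

% Masked LMs like BERT: predict the [MASK]ed position from everything else,
% which is why you prompt them with a mask rather than a prefix.
p\left(x_{\mathrm{mask}} \mid x_{\setminus \mathrm{mask}}\right)
```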
@budududuroiu @jenniferplusplus it's funny you bring up AlphaFold, because that has also been way overhyped, according to people working in the field (sadly I don't have links to individual statements anymore, it's been a few years, but the Wikipedia page also mentions e.g. AF not really understanding folding). Anyway: as long as there is no concrete data showing a severe CVE increase with a causal link to newer LLMs (which, again, are still LLMs that do not understand facts), I won't hold my breath.
-
@budududuroiu @mirth @jenniferplusplus Goal post moved into a different dimension, I see.
-
@dngrs @jenniferplusplus I'm sorry, I know thinking conceptually isn't easy for everyone; I tried using AlphaFold because some people have an easier time when presented with examples.
Why would there be an increase in CVEs? If I were an actor with nation-state levels of access to compute, why would I waste all that compute on zero-days, only to then publish CVEs about them?
Even the most AI-skeptic maintainers are starting to admit that LLMs are getting good at finding bugs. I understand cynicism is seen as cool nowadays, but I think it's intellectually lazy.
daniel:// stenberg:// (@bagder@mastodon.social)
I ran a quick git log grep just now. Over the last ~6 months or so, we have fixed over 200 bugs in #curl found with "AI tools".
Mastodon (mastodon.social)
-
@budududuroiu holy condescension Batman lol, no thank you
-
@jenniferplusplus 37th time's the charm! This time *for real*.
-
@jenniferplusplus The issue is that big enough corpos don't care about code quality anymore, and they don't care about vulnerabilities sitting there for months (sometimes years), or about leaks. Nobody cares about these anymore... they want results fast, to sell quick and move on.
-
@jenniferplusplus OpenAI made similar claims about their model being so good it was dangerous and they weren't going to release it. In 2019. https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/
@fancysandwiches oh wow, a headline that describes these things as text generators.
How far we've fallen
-
@budududuroiu @dngrs you may as well stop, you're not going to convince me to trust them. Only Anthropic can do that, because they have truly earned my distrust.
-
@jenniferplusplus Agree that it is mostly for marketing & investors.
But the article was technical enough that I think there is an improvement here that no other model has. And if true, it would be great for vulnerability scanning/hardening in general (though it's bad that attackers would have access to it too).
-
@jenniferplusplus Worth a follow for that post alone. Hi, I'm Bill.
-
@jenniferplusplus "our magic machine found a 30 year old security vulnerability!"
OK, what's the CVE link? These companies never show proof besides saying "it totally did the thing, you guyzzz plz giv moar billionz"
-
@jenniferplusplus I seriously doubt this is smoke and mirrors; recent models have improved significantly for cybersec, and the industry is noticing:
daniel:// stenberg:// (@bagder@mastodon.social)
The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense.
Mastodon (mastodon.social)
Linux kernel czar says AI bug reports aren't slop anymore
Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away
(www.theregister.com)
The industry consensus seems to be that there's going to be a torrent of vulnerabilities found in all sorts of software, and that they're not prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start on tackling them. It would be nice if their token fund were open to all OSS projects to apply.
I'm also pressing "X doubt" that you spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation to organise this just because your tool's code leaked online.
@budududuroiu @jenniferplusplus Let's talk about JavaScript. Have you ever looked at your browser's developer console? Open it on any major website on the planet and there are 8 trillion errors. Two-thirds of them are vulnerabilities, but none of them are exploitable or matter for anything at all. That is what is being found.
Those are the kinds of errors I've been reviewing, and all the ones Daniel's been reviewing too, and I'm seeing it over and over: "Yes, okay, technically that is a buffer overrun, but it doesn't matter because you can't ever get to it!"
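A hedged sketch of the pattern being quoted, with every name hypothetical rather than taken from any real report: a scanner flags a function that, viewed in isolation, really can overrun a buffer, but the only call site guards the length, so the flagged path is unreachable.

```c
#include <stdio.h>
#include <string.h>

/* Flagged by the scanner: strcpy() can overrun a small destination
 * buffer whenever src is longer than dst. In isolation, that is a
 * real buffer overrun. */
static void copy_tag(char *dst, const char *src)
{
    strcpy(dst, src);
}

int main(void)
{
    char tag[8];
    const char *input = "GET";

    /* ...but every caller checks the length first, so the overrun
     * the tool reports can never actually be reached. */
    if (strlen(input) < sizeof(tag))
        copy_tag(tag, input);

    printf("%s\n", tag);
    return 0;
}
```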
-
@jenniferplusplus It's also important that, to whatever extent this product actually works (I'm as skeptical as you are), it fundamentally favors the attacker. The product has way too many false positives to run in CI, so the defender can only use it as part of an occasional audit. The attacker doesn't care about CI or development friction, and wins by finding one exploit in an entire stack, even if they have to wade through many false positives to find it.
@jedbrown @jenniferplusplus The asymmetry is the core thing that concerns me. I can say that, empirically, somewhere around last year LLM-assisted bug hunting started to be effective. The false positives are avoidable, but the cost of remediation has not gone down the way the cost of finding exploits has. This new model may make the situation worse, but we're already in it.
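To put illustrative numbers on that asymmetry (every figure below is assumed, purely to show the shape): suppose the tool emits 200 reports at a 5% true-positive rate, and each report costs an hour to triage.

```latex
% All figures assumed for illustration: 200 reports, 5% precision, 1 h/report.
\text{real bugs} = 0.05 \times 200 = 10, \qquad
\text{defender triage} = 200 \times 1\,\mathrm{h} = 200\,\mathrm{h}
```

The defender pays the full 200 hours and must fix all 10 real bugs; the attacker can spend those same 200 hours and wins if even one of the 10 is exploitable.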
-
A couple of people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit.
So here's the other thing that bothers me about all this. Regardless of the eventual results, this thing they're doing is *incredibly* resource intensive. They routinely spend billions of dollars on training these models, and billions more on operating them. It's not simple to parse out what fraction of that is directly attributable to the massive-scale vuln finder/fabricator. But for the sake of argument, let's just pick a plausible number and call it 50-100 million dollars.
What could we have gotten for 50-100 million dollars of sponsorship for security audits? Prior to this, the largest single investment into FOSS security I'm aware of was the 2015 audit of OpenSSL, after the Heartbleed incident. It's hard to find precise costs for that, but I found a few sources estimating 1.2 million dollars, and that is arguably the most security-critical piece of software in the world.
But suddenly there's 100x more resources available to do this work, now that producing the artifact can be done with stolen labor? Now that they can externalize the cost of false positives onto the already mostly unpaid maintainers of these projects? Even if their claims are true, which we have no reason to believe and very good reason not to, it's still a travesty.