There's one very important thing I would like everyone to try to remember this week, and it is that AI companies are full of shit
Only rarely do their claims actually bear scrutiny, and even then only the mildest of the claims they make.
So, Anthropic is claiming that their new, secret, unreleased model is hypercompetent at finding computer security vulnerabilities, and that they're *too scared* to release it into the wild.
Except all the AI companies have been making the same hypercompetence claims about literally every avenue of knowledge work for 3+ years, and it's literally never true. So please keep in mind the highly likely possibility that this is mostly or entirely bullshit marketing, meant to distract you from the absolute garbage fire that is the code base of the poster child application for "agentically" developed software
You may now resume doom scrolling. Thank you
-
@jenniferplusplus they're all liars and scammers and somehow a lot of people who are aware of this aren't bothered by it at all. It's perplexing and pretty much kills any hope I have of changing people's views.
-
@jenniferplusplus As too-online millennials would say: “x to doubt”.
Or, more politely: “extraordinary claims require extraordinary evidence”.
-
@jenniferplusplus I've increasingly come back to the idea of "post pics or it didn't happen". I mean, genAI was supposed to put me out of a job in six months ... for 4 years at this point
-
also ... who says so?
-
@jenniferplusplus I seriously doubt this is smoke and mirrors, recent models have improved significantly for cybersec and the industry is noticing:
daniel:// stenberg:// (@bagder@mastodon.social): "The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense."
"Linux kernel czar says AI bug reports aren't slop anymore", an interview in which Greg Kroah-Hartman can't explain the inflection point but says it's not slowing down or going away (www.theregister.com)
The industry consensus seems to be that there's going to be a torrent of vulnerabilities found in all sorts of software, and nobody is prepared to handle the blast radius. It's not surprising that Anthropic wants to give a select few a head start on tackling them. It would be nice if their token fund were open for all OSS projects to apply.
I'm also pressing "X doubt" that you'd spend months coordinating between AWS, Apple, Microsoft, Google, and the Linux Foundation just because your tool's code leaked online.
-
@budududuroiu @jenniferplusplus I wouldn't give Anthropic's motives a lot of credit here but LLMs do make bug hunting much easier.
-
@jenniferplusplus First thought I had when I read about this was “how is *Anthropic* a credible source for this?”
-
@budududuroiu @jenniferplusplus Some people have published numbers or noticed "a significant increase in quality", but none of it has any scientific rigor. My guess is that the one huge trick Anthropic pulled was merely a bigger context window. Sure, that tends to give more context-related (not "true" or "accurate") results (duh!), but it's hardly revolutionary. LLMs are still statistical models doing fancy autocomplete, and they know nothing about the world. I won't hold my breath.
-
@mirth That's fair, I do personally believe that Anthropic is more ideologically driven than most frontier AI labs, and they genuinely believe in the need to gatekeep Mythos. Sometimes that manifests itself as sniffing too many of your own farts.
-
@budududuroiu the same people would tell you the "industry consensus" among the rest of tech is that chatbots made programming dramatically more productive. The reality is that they mostly automate the creation of those same bugs and vulnerabilities
So, you know
Maybe wake me up when they're organizing this thing with someone who's not in the same trillion dollar hole as them
-
@jenniferplusplus Finding problems vs. fixing them are two different bags of burritos. Zero days aren't valuable because they're so complex or unique, they're valuable because there have been zero days to fix them. I think AI coding is pretty trash, but AI debugging is very good.
https://mastodon.social/@bagder/116340130146901164
Anyways, wake up, they're organising this thing with someone not in the same trillion dollar hole as them: https://www.linuxfoundation.org/blog/project-glasswing-gives-maintainers-advanced-ai-to-secure-open-source
-
@jenniferplusplus I would like to remind everyone that Misanthropic and that little bitch Claude are among the worst actors out there, because it's a cult. An amoral, do-anything-to-win cult that actually believes they are building "sentient life". Which is totally insane. https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
-
@the_decryptor I think so too, and that's the effective way to use LLMs: as a "magic" glue that can tie together or stack processes like Legos.
I think they also mentioned this in the blog, related to Mythos being capable enough to chain together tools AND vulnerabilities to achieve objectives.
-
@jenniferplusplus "Our new model is too dangerous for the public, we couldn't possibly release it! Anyway, you can subscribe to it for $150 a month."
-
@jenniferplusplus Any presumed competence on behalf of an AI company is typically the work of impoverished humans in South Asia or South East Asia.
-
@younata @jenniferplusplus That last one was Carl Sagan. I have @emilymbender 's and @Katecrawford 's books on my table to read in the abundant free time I never have now
-
@budududuroiu yes, I noticed when you included them the first time. The Linux Foundation is a clearing house for coordination between everyone else on that list. They don't even consider kernel maintenance or distribution to be within the scope of their interests. They don't do what most people imagine they do

-
A couple people seem very invested in me being wrong about this assessment. All I can say is that this would be the first time I have misclassified an AI claim as bullshit
-
@jenniferplusplus Literally seconds ago I wrote elsewhere: "first rule of LLMs: If someone from an LLM company says their model can do x, it can't do x, but it includes some thoughts and prayers to please do x."