For those getting questions about Glasswing from their executives, give them this article.
-
@Sempf @cR0w "We have a different question. When did zero days become the threat you were supposed to be worried about?"
I mean, yeah, but also just because somebody is doing the basics poorly does not mean that advanced techniques are not *also* a threat. Many threats exist simultaneously! And some of them just became riskier and easier for attackers to leverage ...
@darkuncle @Sempf Easier for attackers means a potentially higher likelihood of occurrence, but it does not change the severity of impact. And while the likelihood does theoretically impact the risk score, for at least some orgs, it's minimal to no change when your adversaries are at the top of the field already. The rising tide of AI may be lifting all attackers' boats, but the high water mark remains the same, despite the industry continuously claiming a tsunami is coming. I just don't see it.
-
@cR0w @darkuncle @Sempf if the first question asked isn't "where's the proof" then people aren't doing their jobs. And Anthropic shitting their pants on command is not proof. OpenAI claimed GPT-2 was 'too dangerous to release.'
So where's the proof?
An un-exploitable bogus OpenBSD bug that only they themselves validated?
A research paper they wrote with Claude with a whole lot of fabricated crap? Where. Is. The. Proof?
Answer: there is none and never will be.
-
@cR0w @darkuncle You should start a blog. Oh, wait.
-
@rootwyrm @cR0w @darkuncle I believe that is exactly correct. As I mentioned somewhere, open up a developer console on any browser on any website of any size and significance, and you'll see 7,000 vulnerabilities in the JavaScript. Absolutely none of them are exploitable for anything useful at all. They don't really matter, and I would imagine that 99.997% of the things that are showing up in this magic report are going to be exactly like that.
-
@Sempf @rootwyrm @darkuncle If even that.
-
@cR0w @rootwyrm @darkuncle But out of curiosity, are you getting questions from your management? None of my clients have said word one, and several of them are very AI-focused.
-
@Sempf @rootwyrm @darkuncle I did for a while, but then they found out I'm skeptical but back up my skepticism when asked so they stopped asking me for the most part. They ask the AI fans now.
-
@Sempf @cR0w @darkuncle I'm one of the many in the ranks of the funemployed. But I'm definitely seeing a whole lot of gnashing of teeth and sky-is-falling shit from both management and from people who have absolutely no excuse for buying into LLM generated bullshit.
-
@cR0w @Sempf @darkuncle I feel attacked.