For those getting questions about Glasswing from their executives, give them this article.
While Everyone Watches Glasswing, Attackers Are Walking Through Your Front Door. - Aether AI
Aether AI's agents pressure test your attack surface continuously, across every attack vector, internally and externally. The same agents then dynamically generate the unique defensive signals required to protect your organisation at machine speed.
(tryaether.ai)
@Sempf Yeah, that's a no for me. My risk models remain unchanged.
He is right that AI gives us the catalyst and the tools.
-
@cR0w When you say your risk models remain unchanged, does that mean you are not receiving pressure from management to change them due to Glasswing, or that you already have and you're not changing them back?
-
@Sempf @cR0w "We have a different question. When did zero days become the threat you were supposed to be worried about?"
I mean, yeah, but also just because somebody is doing the basics poorly does not mean that advanced techniques are not *also* a threat. Many threats simultaneously! And some of them just became more risky and easier for attackers to leverage ...
-
@Sempf Not just Glasswing; every new AI hype wave comes to my team like it's some major new threat, but the only thing that seems to change is the scope and scale of individual attackers, not the state of the art. I have yet to see novel vulnerabilities or new attack paths discovered by any AI system. If it can only find a bunch of existing vuln classes, then those should already be addressed. If not, then the model was broken and now is a great time to update it. I don't see a difference between AI finding new things and APT69420 finding new things. Because they're not really that new. They haven't been for a while.
-
@darkuncle @Sempf Easier for attackers means a potentially higher likelihood of occurrence, but it does not change the severity of impact. And while the likelihood does theoretically impact the risk score, for at least some orgs, it's minimal to no change when your adversaries are at the top of the field already. The rising tide of AI may be lifting all attackers' boats, but the high water mark remains the same, despite the industry continuously claiming a tsunami is coming. I just don't see it.
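A back-of-envelope sketch of the arithmetic cR0w is describing, assuming a standard qualitative risk matrix (score = likelihood × impact); the 1-5 scales and the clamping are illustrative assumptions, not anything stated in the thread:

```python
# Sketch of a qualitative risk matrix: score = likelihood x impact.
# The 1-5 scales and the clamp are assumptions for illustration only.

def risk_score(likelihood: int, impact: int) -> int:
    """Clamp each factor to the 1-5 scale, then multiply."""
    clamp = lambda v: max(1, min(5, v))
    return clamp(likelihood) * clamp(impact)

# An org whose adversaries are already top of the field: likelihood
# sits at the cap, and the severity of a given compromise is what it is.
before = risk_score(likelihood=5, impact=4)      # 20

# AI "lifts all attackers' boats": capability rises, but likelihood was
# already maxed and impact is unchanged, so the score doesn't move.
after = risk_score(likelihood=5 + 2, impact=4)   # still 20

print(before, after)  # 20 20
```

For an org further from the cap, the likelihood bump does move the number, which is consistent with the "for at least some orgs" hedge above.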
-
@cR0w @darkuncle @Sempf If the first question asked isn't "where's the proof?" then people aren't doing their jobs. And Anthropic shitting their pants on command is not proof. This is the same industry that claimed GPT-2 was 'too dangerous to release.'
So where's the proof?
An unexploitable, bogus OpenBSD bug that was only validated by themselves?
A research paper they wrote with Claude with a whole lot of fabricated crap?
Where. Is. The. Proof?
Answer: there is none and never will be.
-
@cR0w @darkuncle You should start a blog. Oh, wait.
-
@rootwyrm @cR0w @darkuncle I believe that is exactly correct. As I mentioned somewhere, open the developer console in any browser on any website of any size and significance, and you'll see 7,000 vulnerabilities in the JavaScript. Absolutely none of them are exploitable for anything useful at all. They don't really matter, and I would imagine that 99.997% of the things that are showing up in this magic report are going to be exactly like that.
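Taking the post's own numbers at face value (7,000 findings and a 99.997% noise rate, both hyperbole rather than measurements), the base-rate arithmetic looks like this:

```python
# Back-of-envelope using the figures from the post above (hyperbole, not data):
findings = 7_000        # vulns flagged in one site's JavaScript
noise_rate = 0.99997    # assumed fraction that are unexploitable noise

actionable = findings * (1 - noise_rate)
print(f"{actionable:.2f} findings actually worth triaging")  # ~0.21
```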
-
@Sempf @rootwyrm @darkuncle If even that.
-
@cR0w @rootwyrm @darkuncle But out of curiosity, are you getting questions from your management? None of my clients have said word one, and several of them are very AI focused.
-
@Sempf @rootwyrm @darkuncle I did for a while, but then they found out I'm skeptical but back up my skepticism when asked so they stopped asking me for the most part. They ask the AI fans now.
-
@Sempf @cR0w @darkuncle I'm one of the many in the ranks of the funemployed. But I'm definitely seeing a whole lot of gnashing of teeth and sky-is-falling shit from both management and from people who have absolutely no excuse for buying into LLM generated bullshit.
-
@cR0w @Sempf @darkuncle I feel attacked.