nobody confident in their own abilities is panicking
-
@Viss oh great, so this hyperactive, severely ADHD, junior intern who requires very detailed instructions to do anything useful and still promptly forgets their own name and what they were doing every 15 minutes is going to replace me?
I'm not panicking. I'm laughing. A lot.
-
@Viss Yeah, as a security-minded devops engineer, this is dope. (Well, y'know, aside from all the general ethical/environmental/etc. concerns about LLM use.) Having more "eyes" out looking for security vulnerabilities is a good thing, and especially so when one set of "eyes" is biased in a different way than typical human reviewers and thus is well placed to notice some subset of problems that humans would probably miss.
Of course, that only applies as long as it's used sensibly. Which means using LLMs to report issues for human review and validation, not letting an agent loose on a code base with the ability to automatically file security reports for anything it finds. (I have little confidence that the tool will actually be used sensibly in most cases.)
@diazona you should be aware that i am actively working on research that intends to measure just how often llms lie about shit, because at the end of the day, no matter what layers you put on top of an llm, it still fucking lies and hallucinates - even when it's told to use skills and mcp servers
so.. your sentiment, while optimistic, makes the assumption "that this shit works"
but.. it doesn't.
at least not with enough precision to be relied upon
-
nobody confident in their own abilities is panicking
Infosec community panics over Anthropic Claude Code Security
ai-pocalypse: Not the first of its kind
(www.theregister.com)
the people who are panicking are signaling.
@Viss I think I would panic if this were my role - but mostly because of a general "AI" problem, which is that it eliminates the tasks needed to give new people experience and ways to grow into their role
-
@Viss (admittedly I'm also not at all confident in my ability, except for the brief moments I have to deal with some of the stuff actual vendors ship to actual customers, but that's another story)
-
@krypt3ia @Viss sure, I get the business side of it - from the quarterly reporting point of view - but I'm also pretty sure that the investment costs and real running costs for these models will, at some point, be transferred to their customers. So in the end, they might actually end up being more expensive than I am.
But of course, the quarterly reporting model doesn't care about that.
-
@Namnatulco assuming it works
-
@Viss Having used some static code analyzers in the past, I have to honestly wonder if it can be worse than current ones.
The ones I've used were a festival of false positives to the point of being almost worthless.
(and I am not for using AI in any way... it's just that they were that bad...)
-
@zombie042 well it's a mixed bag. It is measurably useful and it does actually find stuff - but if you can't tell for yourself whether what it's showing you is bullshit, there's no way to tell the wheat from the chaff. so unless these things are being driven by people who can tell, shit's gonna get ugly really fast
-