nobody confident in their own abilities is panicking
-
It is basically already like that, I think Vernor Vinge got it right. If we are still around ages from now it will be layers and layers of legacy code no one understands all the way down.
It is a very interesting exercise for someone in the depths of big-tech software development to really plan out what such a long-term migration would look like for just one company. Once you get into it, it is very humbling to realize how hard it really is.
I am also reading "Thinking in Systems" and it is a good book to read if you are thinking about this kind of stuff:
Shafik Yaghmour (@shafik@hachyderm.io)
Reading “Thinking in Systems”: “Purposes are deduced from behavior, not from rhetoric or stated goals.” We often get stuck on what is said and forget to look at results. If the results never match what is said, you need to realize that maybe what is said is meant to mislead. It could also be a lack of skill, but that is not much better.
-
@cR0w @jackryder @Viss @da_667 Because it’s easier to support if everything is installed and turned on by default. You don’t get pesky users calling saying, “Why isn’t this working?” Fewer support calls saves money.
We were fighting this battle in the OS during my Center for Internet Security days back in the early 2000s and made some progress as far as default installs. But entropy is gonna entropy.
@hal_pomeranz @jackryder @Viss @da_667 Fair. We're so far into it that it's almost impossible to fight that one with a reasonable amount of resources. But it's still frustrating. It's the part of the whole "microservices" or "serverless" that I like. You don't have to inherit a bunch of dependencies and vulnerabilities the same way you would if you had to spin up a Windows or RHEL or Ubuntu machine just for your simple needs.
-
Infosec community panics over Anthropic Claude Code Security
ai-pocalypse: Not the first of its kind
(www.theregister.com)
the people who are panicking are signaling.
@Viss oh great, so this hyperactive, severely ADHD, junior intern who requires very detailed instructions to do anything useful and still promptly forgets their own name and what they were doing every 15 minutes is going to replace me?
I'm not panicking. I'm laughing. A lot.
-
@Viss Yeah, as a security-minded devops engineer, this is dope. (Well, y'know, aside from all the general ethical/environmental/etc. concerns about LLM use.) Having more "eyes" out looking for security vulnerabilities is a good thing, and especially so when one set of "eyes" is biased in a different way than typical human reviewers and thus is well placed to notice some subset of problems that humans would probably miss.
Of course, that only applies as long as it's used sensibly. Which means using LLMs to report issues for human review and validation, not letting an agent loose on a code base with the ability to automatically file security reports for anything it finds. (I have little confidence that the tool will actually be used sensibly in most cases.)
-
@Viss Not looking forward to someone running this, thinking everything is all kosher to load, and then taking down a quarter of the internet.
-
@0xtero in the spirit of laughing a lot, i just spent like two hours swapping GPUs between my desktop and gaming rig so that i can run ollama with some decent model, so that i can light up some incus containers and fuck around with weird agentic bullshit and fake MCP servers in order to do the research for the talk i'm submitting to securityfest

soon you will be laughing at me too!
-
@catscatscats time to selfhost everything you possibly can

-
@diazona you should be aware that i am actively working on research that intends to measure just how often LLMs lie about shit, because at the end of the day, no matter what layers you put on top of an LLM, it still fucking lies and hallucinates - even when it's told to use skills and MCP servers.
so... your sentiment, while optimistic, makes the assumption "that this shit works".
but... it doesn't.
at least not with enough precision to be relied upon.
-
@Viss I think I would panic if this were my role - but mostly because of a general "AI" problem, which is that it eliminates the tasks needed to give new people experience and ways to grow into their role
-
@Viss (admittedly I'm also not at all confident in my ability, except for the brief moments I have to deal with some of the stuff actual vendors ship to actual customers, but that's another story)
-
@krypt3ia @Viss sure, I get the business side of it - from the quarterly reporting point of view - but I'm also pretty sure that the investment costs and real running costs for these models will, at some point, be transferred to their customers. So in the end, they might actually end up being more expensive than I am.
But of course, the quarterly reporting model doesn't care about that.
-
@Namnatulco assuming it works
-
@Viss Having used some static code analyzers in the past, I honestly have to wonder if this can be worse than the current ones.
The ones I've used were a festival of false positives, to the point of being almost worthless.
(and I am not for using AI in any way... it's just that they were that bad...)
-
@zombie042 well, it's a mixed bag. it is measurably useful and it does actually find stuff - but if you cannot tell for yourself that what it's showing you is bullshit, there's no way to separate the wheat from the chaff. so unless these things are being driven by people who can tell, shit's gonna get ugly really fast
-