When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."
I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.
@neurovagrant i suspect we have two weak links now, great!
-
@neurovagrant To err is human, but to *really* foul things up you need a computer.
-
@cR0w @neurovagrant
Or *was* it? <dramatic music>
-
@neurovagrant Well, we do have humans carelessly accepting AI submissions without any review: one could consider them an even weaker link.
-
@neurovagrant It's still kind of a human's fault for installing that weak link. The weakest link is the C-suite making terrible decisions.
-
@neurovagrant okay, now the weakest link is the human who decided "I think I'll outsource my work to a dumbass who's wrong about everything."
-
@neurovagrant now the weakest link is the human who decided to implement AI.
So what's changed?
-
@EndlessMason@hachyderm.io @neurovagrant@masto.deoan.org
Running Qwen3.5 on my 7900 XTX eats about as much power as running any video game. I have zero issue with running LLMs locally to assist with my journals/notes. It's nothing compared to a data center.
-
@neurovagrant it still is the human. They just changed how they break things. Instead of breaking things themselves, they trust a machine to do it for them.
-
@phil @neurovagrant @EndlessMason similar experience. humans can drive these models if they have a decent engineering/security understanding. i've got no issue with leveraging it to offload tedious tasks and operational burden.
but to your point on the human factor, there's been a lot of footgunning lately. even with principal staff getting lazy.
running models on a ada4000-20gb works pretty nicely and way less power use than a dc or some 5090 monster i need a new circuit for
@jae@mastodon.bsd.cafe @neurovagrant@masto.deoan.org @EndlessMason@hachyderm.io
I just give the LLM some tools to read my journals, and then type my notes into my note git repo in a separate place.
https://codeberg.org/bajsicki/gptel-got
I've a bunch of re-writes locally, but they're not ready to be out in public yet until I test more and gain confidence.
-
Thank you to everyone saying "it's still the human."
No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.
This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.
-
Turns out the weakest link was just waiting for a better prompt.
-
@phil @neurovagrant @EndlessMason that's really clever. i had a pile of links from the last 2 years. dedupe + sort + relevance tagging took ~10 minutes which would have taken me a frustrating couple of days.
i like how you're clear on the disclaimer. i've seen others tout their tool as "military-grade secure" and i fall out of my chair
-
It's still a human, it's just shifted to the decision-making ones that mandate use of these systems.
-
I'd say it's still a human. But it's not the user, it's the product deployer.
In my worldview, responsibility always, and only, lands on humans.
-
@neurovagrant one of these days I need to sit down and write a blog post about how I have a blade that is cheap as hell, but safer than any other blade I've owned, and how that relates to… everything.

