When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."
I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.
@neurovagrant Well, we do have humans carelessly accepting AI submissions without any review: one could consider them an even weaker link.
-
@neurovagrant It's still kind of a human's fault for installing that weak link. The weakest link is the C-suite making terrible decisions.
-
@neurovagrant okay, now the weakest link is the human who decided "I think I'll outsource my work to a dumbass who's wrong about everything."
-
@neurovagrant now the weakest link is the human who decided to implement AI.
So what's changed?
-
@EndlessMason@hachyderm.io @neurovagrant@masto.deoan.org
Running Qwen3.5 on my 7900 XTX eats as much power as running any video game. I have zero issue with running LLMs locally to assist with my journals/notes. Nothing compared to a data center.
-
@neurovagrant it still is the human. They've just changed how they break things: instead of breaking things themselves, they trust a machine to do it for them.
-
@phil @neurovagrant @EndlessMason similar experience. humans can drive these models if they have a decent engineering/security understanding. i've got no issue with leveraging it to offload tedious tasks and operational burden.
but to your point on the human factor, there's been a lot of footgunning lately. even with principal staff getting lazy.
running models on a ada4000-20gb works pretty nicely and way less power use than a dc or some 5090 monster i need a new circuit for
@jae@mastodon.bsd.cafe @neurovagrant@masto.deoan.org @EndlessMason@hachyderm.io
I just give the LLM some tools to read my journals, and then type my notes into my note git repo in a separate place.
https://codeberg.org/bajsicki/gptel-got
I've got a bunch of rewrites locally, but they're not ready for public release until I test more and gain confidence.
-
Thank you to everyone saying "it's still the human."
No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.
This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.
-
Turns out the weakest link was just waiting for a better prompt.
-
@phil @neurovagrant @EndlessMason that's really clever. i had a pile of links from the last 2 years. dedupe + sort + relevance tagging took ~10 minutes which would have taken me a frustrating couple of days.
i like how you're clear on the disclaimer. i've seen others tout their tool as "military-grade secure" and i fall back out of my chair
-
It's still a human, it's just shifted to the decision-making ones that mandate use of these systems.
-
I'd say it's still a human. But it's not the user, it's the product deployer.
In my worldview, responsibility always, and only, lands on humans.
-
@neurovagrant one of these days I need to sit down and write a blog post about how I have a blade that is cheap as hell, but safer than any other blade I've owned, and how that relates to… everything.
-
@neurovagrant How is that not still the human? Didn't humans decide to let AI run entire systems without anyone watching?
FFS, Tencent's shares just skyrocketed for saying they're deploying OpenClaw, which is _known_ to be destructive and have massive security vulnerabilities.
-
@neurovagrant and yet arguably the weakest point is still the human that decided to slopcode
-
@cR0w @neurovagrant "Stop, OpenCaw!"
-
@neurovagrant I mean it's still true. The weakest link is now the human that involves the LLM in the chain.
-
@neurovagrant Why do you surrender agency so readily?
We are and remain masters of our world.
So much of the slopocalypse is shitty CEOs catering to dumb investors who arrogantly but wrongly think they know a damn thing about IT. All a very (if deplorably) human thing.
That said, your post is funny and I like it a lot.
-
The weakest link is the human who signed off on the LLM.
-
@renardboy @neurovagrant no way. Nobody back home is going to believe me when I tell them I saw an actual bus