When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."
I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.
Turns out the weakest link was just waiting for a better prompt.
-
@jae@mastodon.bsd.cafe @neurovagrant@masto.deoan.org @EndlessMason@hachyderm.io
I just give the LLM some tools to read my journals, and then type my notes into my note git repo in a separate place.
https://codeberg.org/bajsicki/gptel-got
I've a bunch of re-writes locally, but they're not ready to be out in public yet until I test more and gain confidence.
-
@phil @neurovagrant @EndlessMason that's really clever. i had a pile of links from the last 2 years. dedupe + sort + relevance tagging took ~10 minutes which would have taken me a frustrating couple of days.
i like how you're clear on the disclaimer. i've seen others tout their tool as "military-grade secure" and i fall back out of my chair
-
It's still a human, it's just shifted to the decision-making ones that mandate use of these systems.
-
Thank you to everyone saying "it's still the human."
No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.
This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.
I'd say it's still a human. But it's not the user, it's the product deployer.
In my worldview, responsibility always, and only, lands on humans.
-
@neurovagrant one of these days I need to sit down and write a blog post about how I have a blade that is cheap as hell, but more safe than any other blade I’ve owned, and how that relates to… everything.
-
@neurovagrant How is that not still the human? Didn't humans decide to let AI run entire systems without anyone watching?
FFS, Tencent's shares just skyrocketed for saying they're deploying OpenClaw, which is _known_ to be destructive and have massive security vulnerabilities.
-
@neurovagrant and yet arguably the weakest point is still the human that decided to slopcode
-
@cR0w @neurovagrant "Stop, OpenCaw!"
-
@neurovagrant I mean it's still true. The weakest link is now the human that involves the LLM in the chain.
-
@neurovagrant Why do you surrender agency so readily?
We are and remain masters of our world.
So much of the slopocalypse is shitty CEOs catering to dumb investors who arrogantly yet wrongfully think they know a damn thing about IT. All a very (if deplorably) human thing.
That said, your post is funny and I like it a lot.
-
The weakest link is the human who signed off on the LLM
-
@renardboy @neurovagrant no way. Nobody back home is going to believe me when I tell them I saw an actual bus
-
... The "Leader-shit" team that went all in on LLMs?
-
@neurovagrant I too love how we have made computers susceptible to social engineering.
Great job all around guys
(Sarcastic)
-
It's crazy how little of an issue it would be if
1) AI CEOs weren't greedy about training data, so the bots wouldn't siphon corporate and private data to use as training data.
2) Openai wouldn't have a feature to make chats visible on the internet.
3) Microsoft didn't make a folder filled with screenshots of EVERYTHING YOU'VE EVER DONE.
And most importantly
4) We stopped giving LLMs full fucking access to our computers, networks, and credit card information.
Like there's absolutely no reason for them to be such a security risk. These are all things that could have been avoided if they'd just asked the opinion of one person who isn't sniffing a tech CEO's farts all day.
Now we have assholes like Pete Hegseth trying to super glue ChatGPT to a tomahawk missile!
-
@EndlessMason@hachyderm.io @neurovagrant@masto.deoan.org As a sidenote, I've seen things you wouldn't believe in the last few months that have me genuinely convinced that it's humans that made LLMs look bad, rather than LLMs being bad intrinsically (aside from the copyright issues, power drain, freshwater use, global warming, financial abuse, privacy issues, deals with government...).
The math models (locally hosted, fitting on gaming GPUs) can fairly easily be made useful and helpful (a few days of effort after work) in menial tasks that can't be completed deterministically, provided basic oversight. They cost pennies, and they're private.
-
@phil @neurovagrant @EndlessMason you have to be smart enough to do the job without AI to be able to use the current generation of AI effectively and safely.
But that's not how it's being sold, and that's not how executives see the situation
Which means this whole mess isn't an end user failure (oh, if only the end users were smarter and more attentive, BUT THEY'RE NOT)
It's a management failure (not understanding their workers, and not understanding the tools they are making their workers use).
