So, something that's been bugging the shit out of me?
These fucking assholes who let LLMs run rampant and delete prod?
They query the LLM for "why" it did that.
This is delusional behavior.
LLMs do not have a concept of 'why': they assemble a response by statistically sampling likely continuations of the prompt, based on patterns in their training data.
LLMs do not have the ability to have motivation. They are machines.
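To be concrete about what "statistical sampling" means, here is a toy sketch. (The vocabulary and scores are invented; a real model has tens of thousands of tokens and billions of learned weights, but the mechanism is this dumb.)

```python
import math, random

# Toy vocabulary and invented "logits" (unnormalized scores) a model might
# assign to the next token after some prompt. Purely illustrative numbers.
vocab = ["deleted", "restored", "slow", "fine"]
logits = [2.1, 0.3, 1.2, 0.8]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Scores become a probability distribution over continuations...
probs = softmax(logits)

# ...and generation is a weighted dice roll over that distribution.
# No intent, no "why": just sampling what looked likely in training.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```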
LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache from prior sessions, if one exists:
which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:
It cannot have a why;
It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but a new instance reading its predecessor's notes (see the sketch below).
Treating it as though it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.
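And to be concrete about the "new instance" part: chat-style APIs are generally stateless; the client re-sends the whole transcript on every call. A minimal sketch, with a hypothetical endpoint and payload shape (no specific vendor's API):

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape; mirrors common chat-completion
# APIs but is NOT any specific vendor's.
API_URL = "https://example.com/v1/chat"

def ask(transcript):
    """One stateless call: the transcript we send is the only 'memory'."""
    body = json.dumps({"messages": transcript}).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["reply"]

transcript = [{"role": "user", "content": "Clean up the staging database."}]
transcript.append({"role": "assistant", "content": ask(transcript)})

# The follow-up goes to a fresh instance that never did anything; it just
# reads its predecessor's notes above and continues them plausibly.
transcript.append({"role": "user", "content": "Why did you do that?"})
print(ask(transcript))
```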
Touch some grass and get a fucking therapist.
@munin I absolutely agree. It isn’t the most egregious case I’ve seen, though: at least they didn’t get it to write an apology letter to them and imagine that it had learned something from the event / letter. That one broke me.
I’m not worried about the machines getting smarter; I’m worried about the people doing the opposite.
-
@munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/
(That said, on your post)
-
Multiple. And the lack of confidentiality is a huge fucking issue, and the lack of HIPAA compliance is another, and this is purely harming people.
-
@munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, which has been characterised in a way that leads them to believe it can reason. When their expectations are then subverted, they query it for its reasoning, not understanding that it has none. It's more sad than anything, like trying to reach for connection and reason where there is none.
The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.
I do not give a shit about whatever excuses these assholes puke out.
They made a series of considered choices that caused significant harm. There were ample opportunities to avoid this and they chose to continue.
I do not care and I wish them as much pain as they can handle.
-
@munin they can get a snazzy text of what an AI would sound like if it had a conscience, though.
I do wonder if they actually double-checked whether what the AI told them is correct.
Fucking doubtful. Not a one of these slackasses ever cites "checking syslog".
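(For the record, "checking syslog" is not hard. A rough sketch, assuming a conventional /var/log/syslog and that the destructive commands got logged at all; the patterns are illustrative:)

```python
import re

# Destructive operations worth grepping for; patterns are illustrative,
# tune them to your own stack.
SUSPECT = re.compile(r"DROP\s+(TABLE|DATABASE)|DELETE\s+FROM|rm\s+-rf", re.I)

# Assumes a conventional syslog path; your distro or log agent may differ.
with open("/var/log/syslog", errors="replace") as log:
    for line in log:
        if SUSPECT.search(line):
            print(line.rstrip())  # the actual record of what ran, and when
```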
-
The latter (people getting less smart) has absolutely already occurred; LLM usage is causing massive de-skilling across the industry.
-
@munin@infosec.exchange What does that stance accomplish? If anything, it seems like a great way to further alienate ourselves as fringe "AI skeptics".
The reality is that the people with the greatest knowledge of how these systems work and how they were made (outside the scammers who sell them, of course) use these tools the least. If we take the time to teach people why not to use them and give them alternatives, we will actually reduce the harms. Otherwise, we will just be angry contrarians to be ignored.
-
Buddy, if you think I'm looking to win over -the fucking nimrods who are refusing to do basic diligence for their customers- then you have me completely fucking misunderstood.
-
@munin@infosec.exchange My point is that the best way to get those customers their diligence is to actually start helping people, not directing your anger at people who are also victims of the scams of the AI industry. If your objective is just to be angry, then be angry! Just make sure it's at the right people.
-
@munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds, fails, or is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply, deeply delusional, like believing socks are sentient because you made a sock puppet once.
-
Perhaps before you lecture the woman who has made a career off of helping people stay safe in the face of malicious assholes, you might consider that her anger is well-targeted and specifically informed by her experiences.
-
@petealexharris @munin Clearly you've never tried it yourself...
-
@addison @munin most of the discourse around this skips over the core problem: a hosting company encouraged its users to use AI to manage their servers, but stored staging and prod and their backups all on one volume, and allowed deletion of that volume without confirmation or warning.
Schadenfreude aside, that's devops/UI/UX incompetence whether the operator at the controls is human or not. Deleting the staging database shouldn't delete the prod backups.
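The missing guardrail is boring, decades-old ops hygiene. A sketch with invented names (Volume, delete_volume), assuming nothing about the actual host's stack: production-tagged resources demand typed confirmation, and backup storage is refused outright by the delete path:

```python
from dataclasses import dataclass

# Invented names throughout; a sketch of the guardrail, not anyone's API.
@dataclass
class Volume:
    name: str
    env: str             # "staging" or "prod"
    holds_backups: bool

def delete_volume(vol: Volume, confirm_phrase: str = "") -> None:
    # Backups belong on storage this code path can't reach at all;
    # refusing here is a last line of defense, not the whole design.
    if vol.holds_backups:
        raise PermissionError(f"{vol.name} holds backups; refusing")
    # Production-tagged resources need a typed confirmation, so neither a
    # human in a hurry nor an LLM agent can one-shot them.
    if vol.env == "prod" and confirm_phrase != f"delete {vol.name}":
        raise PermissionError(f"type 'delete {vol.name}' to confirm")
    print(f"deleted {vol.name}")

delete_volume(Volume("staging-db", "staging", holds_backups=False))  # ok
# delete_volume(Volume("prod-db", "prod", holds_backups=False))  # refused
```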
-
@petealexharris @munin Where did I say they'd understand a better prompt?
-
@munin LLMs exploit the weakest flaw in the human race: the tendency to anthropomorphize anything. It works so well that LLM users don't even notice.
-
@munin is this a copypasta?
-
@petealexharris @munin When you ask an LLM "why is the sky blue?", it is statistically likely to give a correct answer. It still works the same way, computing probabilities of what the next token is, but the "why" has a semantically significant weight that influences the output, so it is an important keyword. It doesn't have to "understand" it; it just has to be trained in a way that makes it significant. You don't have to believe that it understands things to know that it is trained on human language and will behave correctly when fed human language.
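A toy version of that point: even a trivial bigram model trained on a few invented sentences will produce a because-shaped continuation when prompted with "why", with no understanding anywhere. The corpus below is obviously made up:

```python
import random
from collections import defaultdict

# A deliberately tiny, invented "training corpus".
corpus = ("why is the sky blue because sunlight scatters . "
          "why did it fail because the disk filled . "
          "the sky is blue . the disk is full .").split()

# Bigram table: which word tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_from(prompt, length=6):
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # statistical sampling, nothing more
        out.append(word)
    return " ".join(out)

# "why" steers the chain toward because-shaped answers without the model
# understanding a single thing.
print(continue_from("why"))
```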
-
@addison I agree with you. Which is not to say we should forgive what happened (I don't have the complete context, but it sounds like something bad to do with production customers), but that we should understand where the people who did this came from.
My view *might* be partially influenced by a quote from this piece on "The Rise and Fall of Petty Tyrants" (quote in next message)

The Rise & Fall Of ‘Petty Tyrants’ | NOEMA (www.noemamag.com): "History shows that bad leaders can successfully undermine democracy — but the story always ends the same way."
