So, something that's been bugging the shit out of me?
These fucking assholes who let LLMs run rampant and delete prod?
They query the LLM for "why" it did that.
This is delusional behavior.
LLMs do not have a concept of 'why': they assemble a response by statistically sampling likely continuations of the original prompt, following patterns encoded in their weights.
An LLM does not have the ability to have motivation. It is a machine.
LLMs, further, function by instantiating a new runtime, for each query, that reads the prompt and any cached context from prior sessions, if those exist:
which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:
It cannot have a why;
It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but is a new instance reading from its predecessor's notes.
Treating it as though it is an entity with continuity of existence is fucking delusional, and I am fucking sick of pandering to this horseshit.
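If the statelessness sounds abstract, here's a minimal sketch of the mechanic (Python; `complete` is a hypothetical stand-in for any LLM completion endpoint, not any vendor's real API):

```python
# Hypothetical stand-in for any LLM completion endpoint: a pure function
# of the text it is handed. Weights are frozen; nothing persists between calls.
def complete(transcript: str) -> str:
    return "<statistically likely continuation of transcript>"

# Turn 1: some agent harness executes whatever this text says. The
# "instance" that produced it is gone the moment the call returns.
history = "user: clean up the staging database\nassistant: "
history += complete(history)

# Turn 2: "why did you do that?" goes to a brand-new invocation that only
# reads ABOUT turn 1 in the transcript. It never experienced it; it just
# generates a plausible-sounding explanation.
history += "\nuser: why did you delete prod?\nassistant: "
explanation = complete(history)
```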
Touch some grass and get a fucking therapist.
@munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds, or it fails, or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply, deeply delusional, like believing socks are sentient because you made a sock puppet once.
-
@munin Well... As you mentioned, each generation "reads the prompt and any cached context from prior sessions", and since models were trained on "explaining" previous outputs so they sound like a relevant discussion, asking why a model gave a specific output isn't as stupid as you make it out to be: it can give some insight into the "thinking" behind the previous output that is usually not directly visible. That can then be used to tweak prompts and add some guardrails, as in the sketch below (even though there is a fairly long list of examples of guardrails not being fully effective). Of course, the first problem is giving prod access to an unreliable system...
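A crude example of the kind of guardrail I mean, purely illustrative (the harness and names are hypothetical):

```python
# Illustrative guardrail: model output never triggers a destructive
# command without a human in the loop. Hypothetical harness, stub executor.
DESTRUCTIVE_MARKERS = ("drop table", "delete from", "rm -rf", "truncate")

def guarded_shell(cmd: str) -> str:
    if any(m in cmd.lower() for m in DESTRUCTIVE_MARKERS):
        raise PermissionError(f"refusing without human sign-off: {cmd!r}")
    return f"(ran: {cmd})"  # stub execution for the sketch
```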
-
@munin It seems to tie in with the trend that started a while back (2016-2018) where people would respond to other people online by telling them to "go google it".
It came from a place of frustration with reply guys, but I think there was a cultural effect of people being told to fuck off and stop asking actual people questions.
Curiosity deflected is often a social wound.
Communities that handled dumb questions well always felt like places of refuge. Things shifted and it opened the door.
-
@munin Also, not arguing against anything you've said.
It's been sad seeing different online places slowly give over the human side of the community to various resources that paved the way towards LLM dependence.
I've been online since the 90s and folks have always had shitty moments, but the outsourcing of community knowledge to google did seem to prime folks who are more vulnerable in ways that LLMs are calibrated to cater to.
-
It's a cargo cult.
-
@crowbriarhexe @munin I had a sock puppet character that loved Brazilian steakhouse ("Fogo de Chão! MEAT ON SWORDS!") and was a huge proponent of self-betterment through community college. But that was me - the sock was just a vessel, a conduit.
-
I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, merely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.
Any other interpretation is anthropomorphism all the way down.
It might get things correct, but this is the same way random pictures of clocks may show the right time.
@theeclecticdyslexic @sinvega @munin
Another candidate is "confabulation".
-
@munin If I do s/LLM/CEO/ (vimspeak for replace LLM with CEO), and the CEO has a Pavlovian programmed response to ignore their own humanity and empathy in service of personal profit, is the result really any different?
There are days I think we'd all be better off with AI CEOs and union memberships, because as the union steward I could at least examine the model weight activations behind why the AI did what it did.
With human CEOs, I can only infer what that motivation might have been, whereas in an open model with transparent training data, I can mathematically determine what caused a particular token to be generated.
Of course, in the current "state of the art" with proprietary models and training data sets, only the nation-state funded hackers really have the means and the motive to go inspecting why an LLM does what it does. Maybe that's not that much different from what the conspiracy theorists have been saying about the ruling class.
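A hedged sketch of what that inspection can look like on an open-weights model (assuming GPT-2 via the transformers library; plain gradient saliency, about the crudest interpretability tool there is):

```python
# Gradient saliency: how much each input token's embedding influences the
# logit of the token the model would emit next. Illustrative, not a full
# interpretability method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The outage began when the agent ran", return_tensors="pt").input_ids
emb = model.get_input_embeddings()(ids).detach().requires_grad_(True)

logits = model(inputs_embeds=emb).logits      # shape (1, seq_len, vocab)
next_id = logits[0, -1].argmax()              # token the model would emit next
logits[0, -1, next_id].backward()             # d(logit) / d(input embeddings)

saliency = emb.grad.norm(dim=-1).squeeze(0)   # per-input-token influence
for token, score in zip(tok.convert_ids_to_tokens(ids[0]), saliency.tolist()):
    print(f"{token:>12}  {score:.4f}")
```
-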
@munin tbh, these already were my thoughts when people started reporting they had "tricked" a chatbot into "spilling its instructions".
No. You successfully prompted the LLM into generating something that looks like instructions. They may be related to the actual system prompt and/or fine-tuning data. But there's absolutely no guarantee.
-
Also, putting as fine a fucking point on it as possible:
if you do not have the fucking logs to figure out what the LLM "did"
then you are incompetent and not suited to build or run the thing you are attempting.
Get good, asshole. Take the time to learn the fucking skills.
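The bar is on the floor. A minimal sketch of the kind of logging I mean (all names hypothetical, stub tool registry):

```python
# Log every tool call the model triggers BEFORE executing it, so
# "what did it do" is a grep through the audit log, not a seance.
import json, logging, time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

# Stub tool registry, purely for illustration.
TOOLS = {"shell": lambda cmd: f"(pretend we ran: {cmd})"}

def run_tool(name: str, args: dict) -> str:
    # Durable record first, so even a destructive action leaves
    # evidence of what was attempted.
    logging.info(json.dumps({"ts": time.time(), "tool": name, "args": args}))
    result = TOOLS[name](**args)
    logging.info(json.dumps({"ts": time.time(), "tool": name,
                             "result": str(result)[:200]}))
    return result

run_tool("shell", {"cmd": "rm -rf /tmp/scratch"})
```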
@munin this is part of what makes me feel generally uncomfortable around the LLM crowd.
You look up a course on creating "advanced" "AI" systems and it's equivalent to the medieval sergeant with the tired expression trying to explain to the Young Lord that Yes The Troops Need to Eat No They Can't Forage Yes Supply Chains are Vital No They Can't Eat Their Horses.
-
@munin see also: "hallucinated". fucking STOP IT. It did not hallucinate anything; it is not even remotely capable of thinking or imagining or understanding anything at all, ever, and never will be. It just regurgitates shit that looks statistically similar to the words you put in.
I'm so tired of having to revise my expectations of people downwards AGAIN. I did not know a hole this deep was possible.
@sinvega An alternative take is that the output of generative AI is *always* a "hallucination" *because by the widely used genAI-scope definition of that word, that's exactly what the software producing the output is designed to do*.
Whether the output happens to be correct or incorrect by some criteria is certainly not irrelevant when judging what *was* emitted, but that's a separate issue from *how* that output was generated.
And no this is not in support of genAI.
-
It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.
And their latest, biggest lie is that the LLMs can improve themselves recursively.
Mathematics has proofs. Here's one... https://arxiv.org/html/2601.05280v2
-
@munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, something that has been characterised in a way that makes them believe it can reason. When they then have their expectations subverted, they query for its reasoning, not understanding that it has none. It's more sad than anything, like reaching for connection and reason where there is none.
The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.
-
@munin they can get a snazzy text of what an AI would sound like if it had a conscience, though.
I do wonder if they actually double-checked whether what the AI told them is correct.
-
@munin I absolutely agree. It isn't the most egregious case I've seen, though; at least they didn't get it to write an apology letter to them and imagine that it had learned something from the event / letter. That one broke me.
I’m not worried about the machines getting smarter, I’m worried about the people doing the opposite.
-
@munin apparently someone is trying to train up an AI therapist model too https://www.proofnews.org/womans-talkspace-therapy-app-sessions-exposed-in-court/
(That said,
on your post)
-
Multiple. And the lack of confidentiality is a huge fucking issue, and the lack of HIPAA compliance is another, and this is purely harming people.
-
I do not give a shit about whatever excuses these assholes puke out.
They made a series of considered choices that caused significant harm. There were ample opportunities to avoid this and they chose to continue.
I do not care and I wish them as much pain as they can handle.
