So, something that's been bugging the shit out of me?
These fucking assholes who let LLMs run rampant and delete prod?
They query the LLM for "why" it did that.
This is delusional behavior.
LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their training data.
LLMs do not have the ability to have motivation. They are machines.
LLMs, further, function by instantiating a new runtime, for each query, that reads the prompt and any cache from prior sessions, if they exist:
which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:
It cannot have a why;
It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but is a new instance reading from its predecessor's notes.
Treating it as though it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.
Touch some grass and get a fucking therapist.
-
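(A concrete way to see the statelessness point: below is a minimal sketch, where `fake_completion` is a purely hypothetical stand-in for any chat-completion API. The "conversation" is just a transcript the client re-sends; every call is a fresh invocation that only ever reads the notes handed to it.)

```python
def fake_completion(messages: list[dict]) -> str:
    # Hypothetical stand-in for a real model call: a fresh forward pass that
    # only sees the text handed to it right now. Nothing persists between calls.
    return f"(sampled continuation of {len(messages)} messages)"

# The "session" lives entirely client-side, as a transcript:
transcript = [{"role": "user", "content": "clean up the old tables"}]
transcript.append({"role": "assistant", "content": fake_completion(transcript)})

# "Why did you do that?" goes to a brand-new invocation that merely reads the
# predecessor's notes; any answer is plausible text about the transcript, not
# a report of an inner motive.
transcript.append({"role": "user", "content": "why did you do that?"})
print(fake_completion(transcript))
```
-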
@munin So many people fail to understand them at a functional level.
They have uses, but they aren't some sentient oracle fulfilling humanity's wishes.
It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.
-
@munin <me staring at the cat> So, why'd you bite me, bro?
cat: …
me: I mean, that's really bleeding there…
cat: …
me: Like, I gotta get a bandage and everything…?
cat: mrp?
-
@munin I bet there are real logs to be grabbed from LLMs, but they would look more like some kind of horrifically tangled node graph than anything readable by a normal human. I strongly doubt any AI company would actually allow users access to those kinds of real debug logs from their models, since that would be a goldmine for anyone looking to clone them.
-
I'm not talking about anything from the LLM provider's side.
I am talking about the end user here, who has failed to set up their environment properly with logging and monitoring implemented to understand what the fuck is going on in their own production environment.
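(What that looks like in practice: a minimal sketch of an audit wrapper around whatever tools an agent is allowed to call. All names here, `audited`, `run_sql`, the log file, are hypothetical illustrations; the point is that "why did prod vanish" gets answered by your own logs, not by interrogating the model.)

```python
import functools
import json
import logging
import time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

def audited(tool):
    """Record every invocation of an agent-callable tool, success or failure."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        entry = {"tool": tool.__name__, "args": args, "kwargs": kwargs,
                 "ts": time.time()}
        try:
            result = tool(*args, **kwargs)
            entry["status"] = "ok"
            return result
        except Exception as exc:
            entry["status"] = f"error: {exc}"
            raise
        finally:
            logging.info(json.dumps(entry, default=str))
    return wrapper

@audited
def run_sql(statement: str):
    ...  # the actual database call would go here
```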
-
It does not help that the assholes shitting this toxin into the public sphere routinely and obviously lie about its capabilities.
@munin And they absolutely are incentivized to do it.
-
@munin fair. My point was somewhat tangential. The LLM providers most likely don't want to provide useful logs to the end user in the first place because of their paranoia about being reverse-engineered. End users are behind the 8 ball from the start.
IMO "Agentic" tools should be hurled into the sun. Not asked nicely "pweeze tell me why did you delete all the things?"
-
they don't provide those because they don't have them.
look at the 'claude' leak: it is all vibed slop.
there are no logs. there is no human understanding of these systems. there is no intent or competence. it is all defective horseshit.
-
@munin see also: "hallucinated". fucking STOP IT. It did not hallucinate anything; it is not even remotely capable of thinking or imagining or understanding anything at all, ever, and never will be. It just regurgitates shit that looks statistically similar to the words you put in.
I'm so tired of having to revise my expectations of people downwards AGAIN. I did not know a hole this deep was possible.
-
I like the term "bullshit" here. The LLM produces a series of symbols with no connection to the concept of meaning, merely to statistical frequency. Whether the output is accurate or not provides no insight, because there was no intent. It is effectively as close to a philosophical zombie as we have come.
Any other interpretation is anthropomorphism all the way down.
It might get things correct, but this is the same way random pictures of clocks may show the right time.
-
@theeclecticdyslexic @sinvega @munin I like talking *about* bullshit, especially when you tell people you mean it in the sense coined by Harry Frankfurt in his 1986 paper, and they look at you like: this is an example, isn't it?
-
@munin How did we get to the point of _asking_ the computer? You don't ask a computer, you tell it. You give it a command and it either succeeds, or it fails, or it is broken. It's a complicated box of sand. There's no awareness, no spark, just an odd arrangement of doped silicon and metal. Believing there's more than that is deeply, deeply delusional, like believing socks are sentient because you made a sock puppet once.
-
@munin Well... As you mentioned, each generation "reads the prompt and any cache from prior sessions, if they exist", and since they were trained on "explaining" their previous outputs to sound like a relevant discussion, asking why a model gave a specific output isn't as stupid as you make it out to be: it can give some insight into the "thinking" part of the previous output that is usually not directly visible. That can then be used to tweak prompts and add some guardrails (even though there is a fairly long list of examples of guardrails not being fully effective). Of course, the first problem is giving access to prod to an unreliable system...
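(For the guardrail point, a minimal sketch, not a complete safety story: refuse to execute destructive statements coming out of a model unless a human signs off. The pattern list and the `approve` hook are illustrative placeholders.)

```python
import re

DESTRUCTIVE = re.compile(r"\b(drop|truncate|delete)\b", re.IGNORECASE)

def approve(statement: str) -> bool:
    # Hypothetical human-in-the-loop hook; swap in a ticket, a two-person rule, etc.
    prompt = f"Agent wants to run:\n  {statement}\nAllow? [y/N] "
    return input(prompt).strip().lower() == "y"

def guarded_execute(statement: str, execute):
    """Run `statement` via `execute` only if it is non-destructive or approved."""
    if DESTRUCTIVE.search(statement) and not approve(statement):
        raise PermissionError(f"blocked destructive statement: {statement!r}")
    return execute(statement)
```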
-
@munin It seems to tie in with the trend that started a while back (2016-2018) where people would respond to other people online by telling them to "go google it".
It came from a place of frustration with reply guys but I think there was a cultural effect of people being told to fuck off and stop asking actual people questions.
Curiosity deflected is often a social wound.
Communities that handled dumb questions well always felt like places of refuge. Things shifted and it opened the door.
-
@munin Also, not arguing against anything you've said.
It's been sad seeing different online places slowly give over the human side of the community to various resources that paved the way towards LLM dependence.
I've been online since the 90s and folks have always had shitty moments but the outsourcing of community knowledge to google did seem to prime folks who are more vulnerable in ways that LLMs are calibrated to cater to.
-
It's a cargo cult.
-
@crowbriarhexe @munin I had a sock puppet character that loved Brazilian steakhouse ("Fogo de Chão! MEAT ON SWORDS!") and was a huge proponent of self-betterment through community college. But that was me - the sock was just a vessel, a conduit.
-
@theeclecticdyslexic @sinvega @munin Another candidate is "confabulation".
-
@munin If I do s/LLM/CEO/ (vim-speak for "replace LLM with CEO"), and the CEO has a Pavlovian programmed response to ignore their own humanity and empathy in service of personal profit, is the result really any different?
There are days I think we'd all be better off with AI CEOs and union memberships, because as the union steward I could at least examine the model's weight activations to see why the AI did what it did.
In human CEOs, I can only infer what that motivation might have been, whereas in an open model with transparent training data, I can mathematically determine what caused a particular token to be generated.
Of course, in the current "state of the art" with proprietary models and training data sets, only the nation-state funded hackers really have the means and the motive to go inspecting why an LLM does what it does. Maybe that's not that much different than what the conspiracy theorists have been saying about the ruling class.
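(The "mathematically determine" claim is doing a lot of work, but the open-weights half of it is at least demonstrable. Below is a minimal gradient-times-input saliency sketch over a small open model, attributing one generated token back to the input tokens. It assumes `torch` and `transformers`, with "gpt2" purely as an example; this is illustrative only, attribution for large models remains an open research problem, and it says nothing about training data.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The agent deleted the production database because",
          return_tensors="pt").input_ids
# Detach the input embeddings into a leaf tensor so gradients land on them.
embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)

logits = model(inputs_embeds=embeds).logits[0, -1]  # next-token distribution
top = logits.argmax()
logits[top].backward()                              # d(top logit) / d(embeddings)

saliency = (embeds.grad[0] * embeds[0]).sum(-1)     # gradient x input, per token
for token, score in zip(tok.convert_ids_to_tokens(ids[0].tolist()),
                        saliency.tolist()):
    print(f"{token:>12s}  {score:+.3f}")
```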
