So, something that's been bugging the shit out of me?
-
@petealexharris @munin Where did I say they'd understand a better prompt?
-
So, something that's been bugging the shit out of me?
These fucking assholes who let LLMs run rampant and delete prod?
They query the LLM for "why" it did that.
This is delusional behavior.
LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.
LLMs do not have the ability to have motivation. It is a machine.
LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:
which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:
It cannot have a why;
It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.
Touch some grass and get a fucking therapist.
@munin LLMs exploits the weakest flaw in human race: the tendency to anthropomorphize anything. It works so well that llm users dont even notice.
-
So, something that's been bugging the shit out of me?
These fucking assholes who let LLMs run rampant and delete prod?
They query the LLM for "why" it did that.
This is delusional behavior.
LLMs do not have a concept of 'why': they assemble a response based on a statistical sampling of likely continuations of the original prompt in their database.
LLMs do not have the ability to have motivation. It is a machine.
LLMs, further, function by instantiating a new runtime -for each query- that reads the prompt and any cache, if they exist, from prior sessions:
which means, fundamentally, "asking" the LLM to explain "why" "it" did a thing is thrice-divorced from reality:
It cannot have a why;
It cannot have a self to have motivations;
And the LLM you ask is not the one that did it, but is a new instance reading from its predecessors notes.Treating it as tho it is an entity with continuity of existence is fucking delusional and I am fucking sick of pandering to this horseshit.
Touch some grass and get a fucking therapist.
@munin is this a copypasta -
@petealexharris @munin When you ask an LLM "why is the sky blue?", it is statistically likely to give a correct answer. It still works the same way, computing probabilities of what the next token is, but the "why" has a semantically significant weight that influences the output, so it is an important keyword. It doesn't have to "understand" it, it just has to be trained in a way that makes it significant. You don't have to believe that it understand things to know that it is trained on human language and will behave correctly when fed human language.
-
@munin@infosec.exchange To be honest, I have some amount of sympathy for this behaviour. This is someone who put their trust in something they were told they could trust, and has been characterised in a way such that they believe it can reason. When they then have their expectations subverted, they query for its reasoning, not understanding that it doesn't have this. It's more sad, like trying to reach for connection and reason where there is none.
The problem here isn't overt, intentional ignorance, but people being misled and struggling with a technology that fakes connection and reasoning. Rather than being angry at them, I feel sad for them. We should invest significant effort in tech literacy so that people understand why they shouldn't trust these things, which will inherently reduce, if not totally eradicate, their reliance on this technology. Dismissing their actions as stupid or malicious in the meantime only sharpens the wedge between people who understand why these things must not be used or trusted, and those who do use and trust them.@addison I agree with you. Which is not to say we should forgive what happened (I don't have the complete context but it sounds like something bad to do with production customers) but that we should understand where the people who did this came from
My view *might* be partially influenced by a quote from this piece on "The Rise and Fall of Petty Tyrants" (quote in next message)

The Rise & Fall Of ‘Petty Tyrants’ | NOEMA
History shows that bad leaders can successfully undermine democracy — but the story always ends the same way.
NOEMA (www.noemamag.com)
-
@addison I agree with you. Which is not to say we should forgive what happened (I don't have the complete context but it sounds like something bad to do with production customers) but that we should understand where the people who did this came from
My view *might* be partially influenced by a quote from this piece on "The Rise and Fall of Petty Tyrants" (quote in next message)

The Rise & Fall Of ‘Petty Tyrants’ | NOEMA
History shows that bad leaders can successfully undermine democracy — but the story always ends the same way.
NOEMA (www.noemamag.com)
@addison the quote in question:
> One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant.
-
@addison @munin most of the discourse around this skips over the core problem: a hosting company encouraged its users to use AI to manage their servers, but stored staging and prod and their backups all on one volume, and allowed deletion of that volume without confirmation or warning.
Schadenfreude aside, that's devops/ui/ux incompetence whether the operator at the controls is human or not. Deleting the staging database shouldn't delete the prod backups.
@wilbr@glitch.social @munin@infosec.exchange The core problem is that capitalist forces push us to make tradeoffs between getting things shipped and doing things the right way

But yeah, people shouldn't be able to make this class of mistake in the first place. But they do, for the same reason (in my experience) they end up using LLMs: because it solves the task with less effort, and there is some force pushing them to go for less effort over higher quality/resilience/etc. -
@petealexharris @munin "Why" is definitely a word from the training data, and "why did you do that?" is definitely also part of things asked a lot, that OpenAI and others have trained on, so my point still stands that it is a valid question to ask. Whether the model "understands" the question is just a philosophical question that is irrelevant for the fact that it is a useful question. Of course if you're using it in Prod and it deletes your DB and you think it understands and can improve itself, there are plenty of things you'd need to be corrected on, but saying that everyone asking that question is delusional is just wrong.
-
@petealexharris @munin You misread me. Whether the model "understands" the question is a philosophical question. The non-philosophical question of whether it can give a useful answer is the relevant part, and my whole point is that pointing at the philosophical aspect to belittle people that look at the practical part, assuming that they don't understand it, is dumb.
-
@petealexharris I totally agree with you. And that is also a very different take from the beginning of the discussion, where Fi said that querying LLMs for "why" it does something is "thrice-divorced from reality" and "fucking delusional" and that people doing that should "touch some grass and get a fucking therapist"...
-
@munin is this a copypasta
no.
-
Quentin Tarantino would not think so.
-
@petealexharris @munin You misread me. Whether the model "understands" the question is a philosophical question. The non-philosophical question of whether it can give a useful answer is the relevant part, and my whole point is that pointing at the philosophical aspect to belittle people that look at the practical part, assuming that they don't understand it, is dumb.
can you two take your semantics argument elsewhere; I am not interested in philosophical horseshit when there are specific, practical considerations that are causing specific, enumerable harms.
-
can you two take your semantics argument elsewhere; I am not interested in philosophical horseshit when there are specific, practical considerations that are causing specific, enumerable harms.
@munin @petealexharris Sure, I'll go touch some grass and talk to my therapist about this philosophical horseshit
-
@munin LLMs exploits the weakest flaw in human race: the tendency to anthropomorphize anything. It works so well that llm users dont even notice.
-
@petealexharris @Varpie @munin
"the LLM has no semantic model of reality, only a surface statistical model of language present in the training data."
Absolutely this.
-
@petealexharris @munin Clearly you've never tried it yourself...
@Varpie @petealexharris @munin
I absolutely have. I keep this in mind ALL THE TIME when I test these things and EVERY TIME they can trivially be led into generating pure nonsense by exploiting that fact.
-
@petealexharris @munin "Why" is definitely a word from the training data, and "why did you do that?" is definitely also part of things asked a lot, that OpenAI and others have trained on, so my point still stands that it is a valid question to ask. Whether the model "understands" the question is just a philosophical question that is irrelevant for the fact that it is a useful question. Of course if you're using it in Prod and it deletes your DB and you think it understands and can improve itself, there are plenty of things you'd need to be corrected on, but saying that everyone asking that question is delusional is just wrong.
@Varpie @petealexharris @munin
"Why" is definitely a word from the training data, and "why did you do that?" is definitely also part of things asked a lot, that OpenAI and others have trained on,"
Yes, and the text that follows is an answer to *a different situation*, and so it's basically fanfic about itself. That's all it can ever produce when you ask it "why". Fanfic.
-
@Varpie @petealexharris @munin
"Why" is definitely a word from the training data, and "why did you do that?" is definitely also part of things asked a lot, that OpenAI and others have trained on,"
Yes, and the text that follows is an answer to *a different situation*, and so it's basically fanfic about itself. That's all it can ever produce when you ask it "why". Fanfic.
@resuna @petealexharris @munin You're assuming that there is no other context provided with the question, and that the training does not take into account that context. If I had to train for this specific question, I'd make sure to score positively answers that are relevant to the previous context. Which is what happens, and why it is a valid question to ask your LLM if you want some insight into the context that isn't shown in the UI but still in the discussion.
-
@resuna @petealexharris @munin You're assuming that there is no other context provided with the question, and that the training does not take into account that context. If I had to train for this specific question, I'd make sure to score positively answers that are relevant to the previous context. Which is what happens, and why it is a valid question to ask your LLM if you want some insight into the context that isn't shown in the UI but still in the discussion.
@Varpie @petealexharris @munin
"You're assuming that there is no other context provided with the question, and that the training does not take into account that context. "
Well, yes, I am assuming that. Because the question is "why did you do this thing that nobody expected you to do". The context-specific answer that you *need* is far too nuanced and unpredictable to possibly be explicitly in the training data.