The constant mental vigilance in a generative world is exhausting.
-
The constant mental vigilance in a generative world is exhausting.
"I asked Claude to do
$thingand it did this!"No it didn't. No you didn't. Probably none of that happened.
And somehow, being unwilling to admit the thing is just making stuff up is what's annoying and unnecessary, not the damn model.
I can't get people to understand that the "hallucination" problem is unsolvable because "hallucination" is how it works. That's all it does. Next tokens based on the whole previous series of tokens that represent "the conversation" being had between prompts and responses combined with the hidden prompts that give the thing its flavor. The fact that it is "right" isn't part of it. That's why they never say, "I don't know". They don't know anything. They are literally making it up every single time. It's why they are so expensive and why they are ruining the environment. There is no recall, no memory, no "knowing". As I've seen it said elsewhere, "there is no 'there' there". It's worse than the Chinese Room thought experiment because at least that produces correct responses. This creates the illusion of a correct response. We are killing the earth and building an inescapable surveillance state around technology that will never get any better than it is right now.
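To make the "next tokens based on the whole previous series of tokens" point concrete, here is a minimal toy sketch of the generation loop. The bigram table below is made up purely for illustration; a real LLM swaps it for a neural network scoring tens of thousands of possible tokens, but the loop around it is the same, and nothing in it consults a store of facts or checks whether the sampled sentence is true.

```python
# Toy sketch of the only operation a language model performs: given all the
# tokens so far (hidden prompt + conversation), produce a probability
# distribution over possible next tokens, sample one, append it, repeat.
# The "model" here is a hand-written bigram table, standing in for billions
# of trained weights.
import random

BIGRAMS = {
    "<start>":    {"the": 0.9, "sky": 0.1},
    "the":        {"sky": 0.8, "blue": 0.2},
    "sky":        {"is": 1.0},
    "is":         {"blue": 0.6, "green": 0.2, "definitely": 0.2},
    "definitely": {"blue": 0.7, "green": 0.3},
    "blue":       {"<end>": 1.0},
    "green":      {"<end>": 1.0},
}

def next_token(context):
    """Score possible continuations of the context and sample one.
    Nothing here checks whether the continuation is true."""
    dist = BIGRAMS.get(context[-1], {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt_tokens):
    context = ["<start>"] + prompt_tokens
    while context[-1] != "<end>":
        context.append(next_token(context))
    return " ".join(context[1:-1])

print(generate(["the"]))   # e.g. "the sky is green" -- fluent, never fact-checked
```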
-
@jrdepriest The one that gets me is the "reasoning" models. They're just making up more text to fluff the context! No thought is happening, nor can it! It's maddening.
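A hedged sketch of what this post is claiming: at inference time, a "reasoning" model is the same next-token generator, arranged so that a batch of intermediate tokens is produced first and then sits in the context window while the final answer is generated. Whether that happens as two calls (as below, for clarity) or as one long generation that emits a "thinking" section before the answer, the extra ingredient is just more sampled text; the generate() function here is a placeholder, not a real API.

```python
# Sketch of "reasoning" as more text to fluff the context: the same generator
# is invoked twice, and whatever it produced the first time is pasted into
# the prompt for the second pass. The intermediate "thoughts" are sampled
# tokens like any others, not the output of a separate reasoning faculty.
def generate(context: str, max_tokens: int = 256) -> str:
    """Placeholder for any next-token sampler; returns canned text so this runs."""
    return "step 1 ... step 2 ... therefore, probably X"

def answer_with_reasoning(question: str) -> str:
    # Pass 1: produce "thoughts" -- extra tokens conditioned on the question.
    thoughts = generate(f"Question: {question}\nLet's think step by step:\n")
    # Pass 2: produce the answer conditioned on the question *plus* those tokens.
    return generate(f"Question: {question}\n{thoughts}\nFinal answer:", max_tokens=64)

print(answer_with_reasoning("Is the sky green?"))
```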
-
@jrdepriest @mttaggart it's intelligence theatre, the appearance of words that resemble intellect. I can't describe how much it frustrates me that so many are happy to accept the illusion
-
@jrdepriest @mttaggart hallucination is an awful term for it; it implies a form of perception that is being undermined, when no such perception exists. A philosophy professor of mine refers to model output as "bullshit" in that it does not distinguish between truth and falsehood, only seeking to accurately reproduce language patterns.
-
@mttaggart The required hypervigilance is exhausting and beyond human capacity to maintain, and so few will admit they can't do it (then there are the ones who take *pride* in refusing vigilance, and I consider them some kind of mad)
-
It's the same "model" your know-it-all uncle uses every Thanksgiving: bloviation.
-
@mttaggart @jrdepriest it's just working backwards to "explain" its bullshit answer, or so I heard.
if so that's a straight up con.
-
That's what you and most AI haters miss.
LLMs are right like 90% or so of the time.
Even 80B models are right a big majority of the time. This is why normies and managerial staff are like "this is amazing". They don't know what they don't know.
It's that last 10% that breaks all sorts of stuff and people. And you have to be a specialist in that 10% to know when it's bullshitting/hallucinating/lying to you.
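A back-of-envelope illustration of what that rough 90% figure implies (the 90% is the poster's estimate, not a measured benchmark): even generous per-answer accuracy leaves a steady trickle of wrong answers for a non-specialist to wave through, and chained steps make it worse.

```python
# Rough arithmetic on a hypothetical 90% per-answer accuracy.
per_answer_accuracy = 0.90

# A reviewer who accepts 50 answers at face value lets through ~5 wrong ones.
answers_reviewed = 50
expected_wrong = answers_reviewed * (1 - per_answer_accuracy)
print(f"wrong answers slipping through unreviewed: {expected_wrong:.0f}")

# Chained into a multi-step task, errors compound: the chance that every step
# of a 10-step workflow is correct is 0.9 ** 10, i.e. about 35%.
steps = 10
print(f"chance a {steps}-step chain is fully correct: {per_answer_accuracy ** steps:.0%}")
```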
-
@fireye @jrdepriest @mttaggart
In reality, it's high-dimensional linear algebra over the trained material and your words in your context.
All of an LLM's workings are just that kind of matrix-and-vector arithmetic; we only got the compute to do it at this scale around 2012.
It's not thinking, yet. It's not intelligence. It's a stochastic parrot trained on TBs of data, following Leibniz's old dream of a word-calculus.
We still don't know how consciousness works, or how to create a thinking machine.
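A minimal sketch of what "just linear algebra over your context" means, with random placeholder weights standing in for the trained parameters (assumes numpy; a real transformer stacks many attention and MLP layers, but each layer is the same kind of multiply-and-squash arithmetic, with no lookup into a store of facts anywhere):

```python
# One step of next-token scoring with placeholder weights: embed the context,
# mix it, multiply by an output matrix, softmax, sample. Real models do this
# with billions of trained numbers and many stacked layers.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 1000, 64

E = rng.normal(size=(vocab_size, dim))     # embedding matrix (trained; random here)
W = rng.normal(size=(dim, vocab_size))     # output projection (trained; random here)

context_ids = [17, 42, 7]                  # "your words in your context", as token ids

h = E[context_ids].mean(axis=0)            # crude stand-in for attention/MLP layers
logits = h @ W                             # one matrix multiply -> a score per token
probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax: scores -> probabilities

next_id = rng.choice(vocab_size, p=probs)  # sample the next token id
print("sampled next token id:", next_id)
```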
-
@crankylinuxuser @mttaggart It's not "right like 90% or so at a time". The process of "right" happens in your head. You do the interpretation of the data streaming from the model. The bots are nothing without a human at the other end, as a crutch, using a unidirectional, blatant exploitation of Grice's Cooperative Principle.
-
@crankylinuxuser @mttaggart This is of course most apparent in the original Eliza, since looking at the rules for Eliza gives us an immediate and very comprehensible peek behind the curtain. The exploitation of the Cooperative Principle is so obvious there that you cannot really deny it.
Modern chatbots just have a better way of hiding it, especially since the apparatus behind the curtain is unfathomably huge.
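For anyone who has not seen them, ELIZA-style rules look roughly like this (paraphrased in Python, not Weizenbaum's original DOCTOR script): a surface pattern and a template that reflects the user's own words back. Everything that feels like understanding is supplied by the cooperative reader, which is exactly the exploitation described above.

```python
# A few rules in the spirit of ELIZA: match a surface pattern, echo the
# user's words back inside a canned template. (The real ELIZA also swapped
# pronouns, e.g. "my" -> "your"; omitted here for brevity.)
import re

RULES = [
    (re.compile(r"i need (.*)", re.I),            "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),              "How long have you been {0}?"),
    (re.compile(r"my (mother|father)(.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # fallback when no rule matches

print(eliza_reply("I am worried about my job"))
# -> "How long have you been worried about my job?"
```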