The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.
A clock that always shows the same time is never right, even in the moments of the day when the time happens to be what it shows, because you don't gain any information about what time it is by looking at the clock.
This reasoning also applies to chatbots. If you can't tell whether what you have been given is useful information unless you already know the information, then you haven't been given useful information.
-
@riley Riley, are you aware that linguistics in the 60s established that language use conveys meaning by reference to other language, with no guaranteed relation to some external reality? So all words bear the same relationship with reality that a stopped clock has with actual time.
I mention this because LLMs are not designed to provide information about the world; they're designed to generate discourse: language use (its output) that is validly constructed by reference to other language use (its training dataset). It's not fair to judge an LLM on the basis that it's a lousy search engine.
But if you spin up a RAG like NotebookLM and give it a reality to refer to (a set of documents) and then ask it a question, e.g. 'is XYZ in the document set?', it turns out LLMs can do a pretty good job of accurately answering yes or no.
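To make the grounding step concrete, here is a toy sketch of where the 'external reality' enters the loop. This is not NotebookLM's actual pipeline; every name and the example sentences are invented for illustration, and a real RAG would use embeddings and an LLM rather than word overlap.

```python
# Toy sketch of grounded yes/no answering; NOT NotebookLM's real pipeline.
# All names are invented; a real RAG would use embeddings and an LLM.

def tokenize(text):
    """Crude lexical tokenizer standing in for a real retriever."""
    return set(text.lower().split())

def is_supported(claim, documents, threshold=0.8):
    """Answer 'is this claim in the document set?' by word overlap,
    returning the evidence so the answer can be checked externally."""
    claim_tokens = tokenize(claim)
    for doc in documents:
        overlap = len(claim_tokens & tokenize(doc)) / len(claim_tokens)
        if overlap >= threshold:
            return True, doc   # grounded: we can point at the passage
    return False, None         # nothing in the 'reality' supports it

docs = ["Carrot is an orange cat who lives with Pepper."]
print(is_supported("Carrot is an orange cat", docs))  # (True, ...)
print(is_supported("Carrot is a black cat", docs))    # (False, None)
```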
@onekind @riley The answer would still be fuzzy -- there would be a ratio of certainty associated with the yes and the no. Other methods like pattern search can be tuned to be completely certain on the yes or the no -- some even on both -- but I think it is impossible to tune stochastic methods in the same way. To conclude, external data is needed to assess the correctness of an LLM's answer.
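Roughly the contrast I mean, as a toy sketch; the function names, the 0.93 probability, and the 0.5 cutoff are all invented for illustration.

```python
# Invented names and numbers, purely to illustrate the contrast.

def exact_search(needle: str, haystack: str) -> bool:
    """Pattern search: the answer is categorical, certainty built in."""
    return needle in haystack

def stochastic_answer(p_yes: float, cutoff: float = 0.5):
    """Stochastic method: the raw output is a probability; the yes/no
    only exists relative to a cutoff we chose ourselves."""
    return ("yes" if p_yes >= cutoff else "no", p_yes)

print(exact_search("XYZ", "... XYZ appears in this document ..."))  # True
print(stochastic_answer(0.93))  # ('yes', 0.93): confident, not certain
```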
-
@hypolite @riley
A computer is not a human, but tools can replace humans at certain jobs, and sometimes do them better.
If you don't like dishwashers, laundry machines, sewing machines, tractors, and diggers, then by all means hire someone to do the work, but most of us find it more effective to use machines instead.
I would rather focus my time on building more complex things than waste it on doing less complex jobs that a machine (or AI) can easily do in less time.
-
@pedromj @riley First, you're assuming that a RAG functions the same way as an LLM. It uses a mix of stochastic and deterministic analysis.
Second, a yes or no answer from a human is also 'fuzzy' in the sense that describing a query in language is never entirely precise, for exactly the reasons I discussed in my previous toot, so the answer given is always 'this is my best guess based on my contingent understanding of your imperfectly phrased question.'
Re your conclusion, I already described the document set as an artificially constructed external reality, which satisfies your objection.
-
@riley Yes, finally someone else gets it!
-
@samir Nobody ever told me to treat my dishwasher as an employee, though. Why do you feel compelled to do this with LLM-based AI systems?
And if the benefits of these systems were that clear and on par with previously established machines, we wouldn't have this kind of conversation. The problem still isn't that people are using them wrong.
-
@riley
David Revoy recently mentioned how Pepper's (orange) cat Carrot was wrongly described as black by grokipedia. This made me speculate that it would be just as wrong if Carrot happened to be a black cat. Your post confirms that, thx.
https://framapiaf.org/@davidrevoy/115882389651946345
-
@riley But what if I don't use the chatbot for information but as a character in a game?
-
@riley umm... That IS the notion of a broken clock being right twice a day: that just because something is sometimes right doesn't mean it provides any relevant information. That's the whole point of the metaphor.
-
@riley This process turns dynamite into dynamite. The part is the whole.
However, the elevator is not the whole of the machine. One can determine that the elevator tells time, but which time is a mystery without the broken clocks. The elevator does not fix the clocks either; they are still broken.
@Smohc_Stahc @riley How would the elevator do what it does without a clock? That's about as much a counterexample as saying a clock hand is the same unchanged clock hand all the time so it can't possibly convey information about time.
-
@menos @riley The riddle is about information revealed to the occupant of the elevator, and yes, a clock with hands and no face does convey less information. The broken clocks act as the face telling the time. Remember my original question: "does the broken clock inform?" It's only intended as a counterexample if the answer is "yes".
However, the answer is in fact "no", because it is only by assuming the coincidence of the broken clocks that the occupant can tell the time.
-
@Smohc_Stahc @riley When you have a broken clock, or several of them, and a working clock, it's not much of a riddle that the whole thing can be used to tell the time.
-
@LordCaramac @riley Then either that character's dialogue will be really confusing, or they'll make up lore as they go, or unintentionally reveal plot points. In any case, it's of little value to the player compared even to repeated but hand-written dialogue.
-
This confusion is also what cold reading is based on, btw. Falling for a chatbot is literally the same type of mistake as falling for a psychic telling you that somebody you used to know who had a vowel in their name died.
@riley cold callers like this have always struggled in the Polish community
-
@riley Thats a very good question and you are so clever to think of it, I’d be happy to drill down on this topic for you.
Heh, sorry. Not a chatbot. Old philosopher, so... like a chatbot, only caffeine-powered, argumentative, and capable of consciousness. (Or at least, I would argue I'm conscious.) I honestly did believe it was a very illustrative analogy. Most people will parrot the clock paradigm, i.e. right twice a day, when you are correct that the underlying logic of the premise is faulty, and therefore any attempt to treat it as true will fail.
In the interest of pedantry (not in defending LLMs): if a person doesn't know what time it is, and doesn't know the clock is broken, and happens to check it at the exact right time, they now know what time it is, no?
-
@riley I agree with your conclusion but not your semantics.
"Being right" doesn't mean "providing information", it means "making a true statement".
Twice a day a broken clock is making a true statement about the world, hence it is right.
What the proverb teaches us is that a system making one true statement does not imply that other statements made by the system are true. I agree that this definitely applies to LLMs that generate text, so I think invoking the proverb is appropriate.
-
@proedie No, that's not how information works. Information is about reducing your uncertainty space. Every time you can exclude half of the uncertainty space, you will have gained one bit of information. If you exclude less than half of the uncertainty space, you will have gained less than a bit of information. Just ask Claude[1].
Looking at a broken clock[2] does not reduce your uncertainty space at all, therefore you gain zero bits of information. The classic formula Claude Shannon is famous for involves dividing the volume of the uncertainty space after gaining information by the volume of the uncertainty space before gaining information, then taking a base-2 logarithm of the ratio and negating it. If you don't care a minus one bit about negative amounts of data, you can turn the ratio on its top; then, negation won't be necessary. But there are didactic reasons for presenting it in the classic way.
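In code, with the ratio turned on its top as suggested; a minimal sketch, where the function name and the minute-level granularity are just illustrative choices.

```python
from math import log2

def bits_gained(space_before: float, space_after: float) -> float:
    """Information gained, with the ratio 'turned on its top' so no
    negation is needed: log2(before / after)."""
    return log2(space_before / space_after)

MINUTES_PER_DAY = 1440
# A working clock read to the minute collapses 1440 candidates to 1:
print(bits_gained(MINUTES_PER_DAY, 1))     # about 10.49 bits
# Ruling out half of the candidates is worth exactly one bit:
print(bits_gained(MINUTES_PER_DAY, 720))   # 1.0
# A broken clock leaves the uncertainty space untouched: zero bits.
print(bits_gained(MINUTES_PER_DAY, 1440))  # 0.0
```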
[1] Claude Shannon, an overall smart human and a measurer of the entropy of information. Who were you thinking about?
[2] Well, there's the minor issue of knowing that the clock is broken, lest you erroneously throw out parts of your uncertainty space that might actually be valid. But the problem of information-resembling text is also an issue that applies to chatbots.
-
that's not how it is commonly used.