The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

Category: Uncategorized
94 Posts 50 Posters 70 Views
riley@toot.cat wrote:

    This confusion is also what cold reading is based on, btw. Falling for a chatbot is literally the same type of mistake as falling for a psychic telling you that somebody you used to know who had a vowel in their name died.

gbargoud@masto.nyc wrote (#50):

    @riley

    @baldur 's LLMentalist article is something I have shared repeatedly since it first came out:

Link: "The LLMentalist Effect: how chat-based Large Language Models rep…", Out of the Software Crisis (softwarecrisis.dev)

modulux@node.isonomia.net wrote:

      @riley Yes, I heard about it; the most elegant possible proof for a given theorem, roughly?

Rather, I was thinking of the notion you stated that proofs aren't information, and I see why you said it. But it doesn't seem intuitive when we compare it to other ways we use the notion.

      For example let's say we have a composite number pq. Generally speaking, we would say that getting p and q is additional information. But the proof that some p in particular and some q in particular result in pq would contain no information. It's rather odd to think of.

riley@toot.cat wrote (#51):

@modulux You know how numeric probabilities can vary depending on how equipotentiality is defined, and how the definition can sometimes be left implicit, with multiple equally plausible "obvious" definitions?

Modelling the information flow of abstract mathematics as such runs into the same sort of problem. Nobody has axiomatised it; there's a bunch of common intuitive assumptions, but a lot of them are ... well, you can pry them loose and justify them if you want to, and sometimes get interesting results this way. But a lot of the time, you don't get anything, or you will have to nail down your own (quasi-)axioms first. These aren't like the axioms of modern geometry; they're really kind of like what Eukleides wrote in the beginning of The Elements, and then never did anything with because it didn't make any sense.[1]

      So you see why I suggested a huge mug of beer for dealing with this stuff.

[1] Caveat: if you go searching, a lot of sources offer modern axiomatic geometry instead of Eukleides' original work — still because his vague notion of foundations didn't make sense, and now we actually have the axioms that could have been used for the conclusions he went on to, pardon the pun, draw. Most of the rigorisation work was done in 1600s Italy; the lingering hairy problem of the Parallel Axiom was eventually solved by Lobachevskiy in the early 1800s by demonstrating that it can be reversed without breaking anything else, and Euklidean geometry as understood by modern mathematics generally rests on Hilbert's[2] work from the pinnacle of the 19th century, as in, it was published in 1899. But it can be great fun to read translations of the original Elements, including the crappy parts.

[2] You might have heard of his hotel, which has a countably infinite number of rooms. Ijon Tichy was a repeat customer.


riley@toot.cat wrote (#52):

        @gbargoud Thanks! I didn't know of this article.

        @baldur


riley@toot.cat wrote (#53):

@modulux Oh, btw: Turing's machines are this way, in part, because people genuinely used to try to go with the notion of information flows in mathematics being like frictionless spherical cows in a vacuum. For some things, it's a great simplification; for others, well, it didn't work out, and we ended up with Complexity Theory.

riley@toot.cat wrote:

            The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

            A clock that always shows the same time is never right, even in the moments of the day when the time happens to be what it shows, because you don't gain any information about what time it is by looking at the clock.

This reasoning also applies to chatbots. If you can't tell whether what you have been given is useful information unless you already know the information, then you haven't been given useful information.
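The claim above is just the textbook definition of information, and a quick sketch makes it concrete (illustrative Python, not part of the thread): the mutual information between a clock's display and the true time is log2(720) bits for a working 12-hour clock read to the minute, and exactly zero for a stopped one.

```python
from collections import Counter
from math import log2

MINUTES = 12 * 60  # distinguishable times on a 12-hour dial, to the minute

def info_bits(display) -> float:
    """Mutual information I(T; D) between a uniformly random true time T
    and the displayed time D = display(T). Since D is a deterministic
    function of T, this is just the entropy of the display."""
    counts = Counter(display(t) for t in range(MINUTES))
    return sum((c / MINUTES) * log2(MINUTES / c) for c in counts.values())

working = lambda t: t  # shows the actual time
stopped = lambda t: 0  # always shows 12:00, right once per cycle by luck

print(info_bits(working))  # log2(720), about 9.49 bits per reading
print(info_bits(stopped))  # 0.0 bits: looking at it tells you nothing
```

The stopped clock's occasional coincidence with the true time never shows up in the calculation, because the reading is statistically independent of what it claims to report.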

smohc_stahc@mastodon.gamedev.place wrote (#54):

@riley Let's say I constructed an elevator with 12 floors. The elevator stops at the next floor every hour on the hour, starting from the ground floor at noon and returning to the ground floor at midnight, at which point the process repeats. There is a window in the door which shows a broken clock on each floor. The ground floor clock is broken at 12, the next at 1, and so on.

            Consider the nature of a fool who gets locked in the elevator and does not know the time. Does the broken clock inform him?
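For what it's worth, the elevator puzzle yields to the same arithmetic (an illustrative sketch under the puzzle's own assumptions): the displayed stuck clock is perfectly correlated with the elevator's position, and the position with the hour, so the display does carry log2(12) bits about the hour. It's the correlation doing the informing, not the clocks.

```python
from collections import Counter
from math import log2

FLOORS = 12  # floor h is visited during hour h; its clock is stuck at h

def display(hour: int) -> int:
    """What the rider sees through the window during a given hour."""
    return hour  # the elevator is at floor `hour`, whose clock shows `hour`

# With the hour uniformly random, the display determines it exactly,
# so the mutual information equals the entropy of the display.
counts = Counter(display(h) for h in range(FLOORS))
bits = sum((c / FLOORS) * log2(FLOORS / c) for c in counts.values())
print(bits)  # log2(12), about 3.58 bits about the hour (none about the minute)
```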

proedie@mastodon.green wrote:

              @riley That’s the point. You got information theory right. You just misunderstood the expression with the clock.

              When I say: ‘My AI gave me a correct answer once’, you can reply: ‘Sure, even a broken clock is correct twice a day.’ Thus stressing that coincidental correctness is worthless.

jonoleth@mastodon.social wrote (#55):

              @proedie @riley given a cursory googling and this reddit poll, it doesn't seem like the meaning is that clear to the average person

Link: a Reddit poll (www.reddit.com)


wyatt_h_knott@vermont.masto.host wrote (#56):

                @riley Now do "even a blind squirrel occasionally finds a nut"

riley@toot.cat wrote:

                  @MissConstrue Are you a chatbot sycophanting me up?

                  These days, one can never be too cautious.

cptbutton@dice.camp wrote (#57):

                  @riley @MissConstrue

Are you very concerned that a chatbot is sycophanting you up?


mark@mastodon.fixermark.com wrote (#58):

@riley I think this overstates the problem a bit; it either implies that knowledge transfer is impossible (replace "humans" with "chatbots" in the last sentence), or it assumes that humans querying chatbots can't have a method to verify the information without also being able to generate it (unless that assumption wasn't implied, in which case never mind!).

There is a name for the logical state you describe about clocks, but I can't remember it right now. I've heard it referred to as the 'stone cow problem': you see a field. You see a cow in the field. You declare there's a cow in the field. What you saw was actually a convincing cow statue, so you're wrong... but there is a cow sleeping behind the statue that you cannot see, so you're right. Big ol' chunks of software engineering puzzles end up being of this kind: any time two systems are manipulating the same memory, there's a risk that system 2 is manipulating state system 1 should be touching, but is giving the answer system 1 would give, even if the semantic meaning of the answer is entirely different and it's just dumb luck that the bit patterns representing the answers are the same. So your debugging shows no problems, and then problems pop up when the behavior of system 2 changes but you think system 1 changed, because you thought system 1 was controlling the data.


riley@toot.cat wrote (#59):

                      @cptbutton Tell me about your parent directory.

                      </eliza>

                      @MissConstrue

edbo@mastodon.social wrote:

@riley That actually really clears up how I feel when I very occasionally test an LLM. It gives me an answer, but I just cannot trust that answer unless I already know it.

galbinuscaeli@spacey.space wrote (#60):

                        @edbo @riley This is also an illustration of why LLMs have a (very limited) utility in generating computer code.

                        Computer code has a specific purpose. The generated code can be tested against the task. This can be useful.

                        But computer code will also have other effects and costs that only a human can validate well.

At most, an LLM should be used to generate rough drafts of well-defined functions that will be reviewed and tuned by a qualified human.
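That "tested against the task" point can be sketched in a few lines (the function and its spec here are hypothetical, invented for illustration): the value of a generated draft comes from an independent check you can write without trusting, or even reading, the generator.

```python
# Hypothetical draft of a well-defined function, as an LLM might produce it.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# The spec is stated independently of the implementation; passing it is
# what turns the draft from plausible-looking text into usable code.
spec = {
    "Hello World": "hello-world",
    "  spaced   out  ": "spaced-out",
    "already-fine": "already-fine",
}
for given, expected in spec.items():
    assert slugify(given) == expected, (given, slugify(given))
print("draft meets the spec")
```

Note this only validates the narrow functional behaviour; the "other effects and costs" mentioned above (readability, performance, security) still need the human reviewer.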


jackwilliambell@rustedneuron.com wrote (#61):

                          @riley

                          FWIW? There is a branch of philosophy focused on the problem you describe – one so old we use an ancient Greek name for it:

                          > https://en.wikipedia.org/wiki/Epistemology

                          This is because determining if information is true and actionable has *always* been fraught. AI merely adds a brand new way to get wrong information.

                          The underlying problem arises when people uncritically believe *anything* from *any source*; human or machine. This is why science has protocols for publishing and re-creating results.


riley@toot.cat wrote (#62):

                            @Smohc_Stahc If we made a hammer out of dynamite, would it be a hammer or dynamite?


emassey0135@caneandable.social wrote (#63):

@riley @matt But information always has a probability value attached to it. For the broken clock, it is pretty much 0% likely that the time will be correct (1 in 12 × 60 = 1 in 720). But for the LLM, the probability could be 70% to 90%, depending on what kind of information you are asking it for and how good the specific LLM is. Information becomes more useful as the probability of it being correct approaches 100%. A good, reliable source would have a much higher probability of being correct and therefore be more useful, but the LLM is closer to that than to a broken clock, at least for most things.
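The point about probability can be pushed one step further with a standard formula (the accuracy figures below are illustrative, not measurements of any LLM): for a yes/no question answered correctly with probability p, the information conveyed is 1 − H(p) bits, where H is the binary entropy, and it falls off much faster than p does.

```python
from math import log2

def binary_entropy(p: float) -> float:
    """H(p) in bits for a biased coin with heads-probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bits_conveyed(p: float) -> float:
    """Information a yes/no answer carries when it is correct with
    probability p (binary symmetric channel, uniform 50/50 prior)."""
    return 1.0 - binary_entropy(p)

for p in (0.5, 0.7, 0.9, 0.99, 1.0):
    print(f"accuracy {p:.0%}: {bits_conveyed(p):.3f} of 1 bit")
```

At 70% accuracy the answer carries only about 0.12 of a possible 1 bit, so "70% right" is much closer to a coin flip, information-wise, than the raw percentage suggests.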


missconstrue@mefi.social wrote (#64):

@riley That's a very good question and you are so clever to think of it, I'd be happy to drill down on this topic for you.

Heh, sorry. Not a chatbot. Old philosopher, so... like a chatbot, only caffeine-powered, argumentative, and capable of consciousness. (Or at least, I would argue I'm conscious.) I honestly did believe it was a very illustrative analogy. Most people will parrot the clock paradigm, i.e. right twice a day, whereas you are correct that the underlying logic of the premise is faulty, and therefore any attempt to treat it as true will fail.


missconstrue@mefi.social wrote (#65):

                                  @riley @cptbutton I never really knew my root...


riley@toot.cat wrote (#66):

                                    @MissConstrue There's an interesting pattern to a large number of these faults, but I guess it'll be a topic for another day.


onekind@beige.party wrote (#67):

@riley Riley, are you aware that linguistics in the 60s established that language use conveys meaning by reference to other language, with no guaranteed relation to some external reality? So all words bear the same relationship to reality that a stopped clock has to the actual time.

I mention this because LLMs are not designed to provide information about the world; they're designed to generate discourse: language use (their output) that is validly constructed by reference to other language use (their training dataset). It's not fair to judge an LLM on the basis that it's a lousy search engine.

But if you spin up a RAG like NotebookLM, give it a reality to refer to (a set of documents), and then ask it a question, e.g. "is XYZ in the document set?", it turns out LLMs can do a pretty good job of accurately answering yes or no.


riley@toot.cat wrote (#68):

@emassey0135 So it is with other commercial products. That's why there are rules specifying that berries for human consumption can't contain more than something like four aphids per hundred grammes.

But who would buy jam with 30% aphid content? Even 10% aphid content, really?

                                        @matt


vfrmedia@social.tchncs.de wrote (#69):

                                          @riley @MissConstrue

I was thinking of some equipment I saw at a "Telekom-Museum" in Germany: it contained a clock but wasn't always powered on (or was just a display piece).

The Germans had quite sensibly put a diagonal strip of red tape (in the style of the "universal no" symbol) across the clock face, so you knew it was *not* a timepiece to be trusted.
