CIRCLE WITH A DOT
The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

Uncategorized · 94 Posts · 50 Posters · 70 Views

This topic has been deleted. Only users with topic management privileges can see it.
  • smohc_stahc@mastodon.gamedev.place

    @riley Let's say I constructed an elevator with 12 floors. The elevator stops at the next floor every hour on the hour starting from the ground floor at noon and returning to the ground floor at midnight at which point the process repeats. There is a window on the door which shows a broken clock for each floor. Ground floor clock is broken at 12, the next at 1 and so on.

    Consider the nature of a fool who gets locked in the elevator and does not know the time. Does the broken clock inform him?
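The elevator puzzle above can be checked mechanically. A minimal editorial sketch (mine, not from the thread), assuming the setup exactly as described: the car is on floor h at hour h of a 12-hour cycle, and floor f's clock is stuck at f o'clock (the ground floor's at 12). The broken clock visible through the window then always shows the true hour:

```python
# Sketch of the elevator thought experiment: twelve stopped clocks,
# one per floor, plus a car that advances one floor per hour.

def visible_clock(hour: int) -> int:
    """Hour shown by the stuck clock on the floor the car currently occupies."""
    floor = hour % 12                    # ground floor at 12 o'clock, then up
    return 12 if floor == 0 else floor   # floor 0's clock is stuck at 12

for h in range(24):
    actual = 12 if h % 12 == 0 else h % 12
    assert visible_clock(h) == actual    # the displayed broken clock is always right

print("A rider who knows the mechanism can always read the true hour.")
```

The point of the construction: each clock alone carries no information, but the clock-plus-elevator system does, because which clock you see depends on the time.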

    riley@toot.cat
    #62

    @Smohc_Stahc If we made a hammer out of dynamite, would it be a hammer or dynamite?

    • riley@toot.cat

      The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

      A clock that always shows the same time is never right, even in the moments of the day when the time happens to be what it shows, because you don't gain any information about what time it is by looking at the clock.

      This reasoning also applies to chatbots. If you can't tell whether what you have been given is useful information unless you already know the information, then you haven't been given useful information.
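The claim maps directly onto Shannon information; a minimal editorial sketch (not from the thread), where modelling a clock reading as a random variable over 720 minute-resolution outcomes is my assumption:

```python
# A source's information content is its entropy. A working clock's reading
# is (roughly) uniform over 720 possibilities; a stopped clock's reading
# is a one-point distribution, so reading it yields zero bits.
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a distribution given as probabilities."""
    h = -sum(p * log2(p) for p in probs if p > 0)
    return h + 0.0   # normalise -0.0 to 0.0

working = [1 / 720] * 720   # any of 720 minute-readings, equally likely
stopped = [1.0]             # always the same reading

print(f"working clock: {entropy(working):.2f} bits per reading")  # about 9.49
print(f"stopped clock: {entropy(stopped):.2f} bits per reading")  # 0.00
```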

      emassey0135@caneandable.social
      #63

      @riley @matt But information always has a probability value attached to it. For the broken clock, it is pretty much 0% likely that the time will be correct (1 in 12 times 60 = 1 in 720). But for the LLM, the probability could be 70% to 90% depending on what kind of information you are asking it for and how good the specific LLM is. Information becomes more useful as the probability of it being correct approaches 100%. A good reliable source would have a much higher probability of being correct and therefore be more useful, but the LLM is closer to that than to a broken clock at least for most things.

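One way to connect this probability framing back to the information framing (an editorial sketch, my own model rather than anything from the thread): treat a source that answers a yes/no question correctly with probability p as a binary symmetric channel. The information it conveys per answer is 1 - H(p) bits, where H is the binary entropy. At p = 0.5 (pure chance, the stopped-clock case) that is zero; in the 70-90% range mentioned above it is positive but well under one bit:

```python
# Information per yes/no answer from a source with accuracy p,
# modelled as a binary symmetric channel: I(p) = 1 - H(p).
from math import log2

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bits_per_answer(p: float) -> float:
    return 1 - binary_entropy(p)

for p in (0.5, 0.7, 0.9, 0.99):
    print(f"accuracy {p:.2f}: {bits_per_answer(p):.3f} bits per answer")
```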
      • riley@toot.cat

        @MissConstrue Are you a chatbot sycophanting me up?

        These days, one can never be too cautious.

        missconstrue@mefi.social
        #64

        @riley That's a very good question and you are so clever to think of it, I’d be happy to drill down on this topic for you.

        Heh, sorry. Not a chatbot. Old philosopher, so...like a chatbot, only caffeine-powered, argumentative and capable of consciousness. (Or at least, I would argue I’m conscious.) I honestly did believe it was a very illustrative analogy. Most people will parrot the clock paradigm, i.e. right twice a day, when you are correct that the underlying logic of the premise is faulty, and therefore any attempt to treat it as true will fail.

        • riley@toot.cat

          @cptbutton Tell me about your parent directory.

          </eliza>

          @MissConstrue

          missconstrue@mefi.social
          #65

          @riley @cptbutton I never really knew my root...

          • riley@toot.cat
            #66

            @MissConstrue There's an interesting pattern to a large number of these faults, but I guess it'll be a topic for another day.

            • onekind@beige.party
              #67

              @riley Riley, are you aware that linguistics in the 60s established that language use conveys meaning by reference to other language, with no guaranteed relation to some external reality? So all words bear the same relationship with reality that a stopped clock has with actual time.

              I mention this because LLMs are not designed to provide information about the world, they're designed to generate discourse — language use (its output) that is validly constructed by reference to other language use (its training dataset). It's not fair to judge an LLM on the basis it's a lousy search engine.

              But if you spin up a RAG like NotebookLM and give it a reality to refer to (a set of documents) and then ask it a question, e.g. "is XYZ in the document set?", it turns out LLMs can do a pretty good job of accurately answering yes or no.

              • riley@toot.cat
                #68

                @emassey0135 So it is with other commercial products. That's why there are rules specifying that berries for human consumption can't contain more than something like four aphids per hundred grammes.

                But who would buy jam with 30% aphid content? Even 10% aphid content, really?

                @matt

                • vfrmedia@social.tchncs.de
                  #69

                  @riley @MissConstrue

                  I was thinking of some equipment I saw at a "Telekom-Museum" in Germany - it contained a clock but wasn't always powered on (or was just a display piece)

                  The Germans had quite sensibly put a diagonal strip of red tape (in the style of the "Universal No" symbol) across the clock face, so you knew it was *not* a timepiece to be trusted.

                  • riley@toot.cat
                    #70

                    @vfrmedia In aviation, the process is standardised by way of the INOP stickers.

                    @MissConstrue

                    • drajt@fosstodon.org shared this topic
                    • samir@m.fedica.com

                      @riley I am sorry, this is not a correct analogy.
                      A bot not giving you correct information 100% of the time doesn't make it useless.
                      A search engine doesn't give you the correct answer all the time.
                      Chatbots are incredibly helpful. Don't take the answer as 100% correct; review and research accuracy after you get the answer, but they save you an immense amount of time compared to searching yourself.
                      Think of them as hiring a junior employee or assistant. They are helpful, but you must review their work.

                      hypolite@friendica.mrpetovan.com
                      #71
                      @samir @riley Why would you ever think of a computer as a human and how does it improve anything?
                      • jonoleth@mastodon.social

                        @proedie @riley given a cursory googling and this reddit poll, it doesn't seem like the meaning is that clear to the average person

                        (www.reddit.com)

                        jonoleth@mastodon.social
                        #72

                        @proedie @riley after obsessing a little over getting to the bottom of this, the answer seems to be that the historical origin (from 1711) is akin to "If you stop chasing trends you will sometimes be fashionable", which is more in line with riley's definition in the OP. The other "official" definitions I've found seem to follow this as well.

                        The definition that "coincidental correctness is worthless" seems to be a personal (though common) interpretation.

                        • smohc_stahc@mastodon.gamedev.place
                          #73

                          @riley This process turns dynamite into dynamite. The part is the whole.

                          However, the elevator is not the whole of the machine. It can be determined that the elevator tells time, but which time is a mystery without the broken clocks. The elevator does not fix the clocks, either; they are still broken.

                          • bdf2121cc3334b35b6ecda66e471@mastodon.social

                            @riley @MissConstrue I am not a bot. Please don't look at my name.

                            missconstrue@mefi.social
                            #74

                            @bdf2121cc3334b35b6ecda66e471 @riley
                            01001001 00100000 01110011 01100101 01100101 00100000 01111001 01101111 01110101

                            😉
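For readers who don't sight-read ASCII, the binary in the post above decodes with a one-liner (purely illustrative):

```python
# Decode the space-separated 8-bit binary groups as ASCII characters.
bits = ("01001001 00100000 01110011 01100101 01100101 00100000 "
        "01111001 01101111 01110101")
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)   # -> I see you
```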

                            • pedromj@mastodon.social
                              #75

                              @onekind @riley The answer would still be fuzzy -- there would be a ratio of certainty associated with yes and no. Other methods like pattern search could be tuned to be completely certain on the yes or the no -- some even both -- but I think it is impossible to tune stochastic methods in the same way. To conclude, external data is needed to assess the correctness of an LLM's answer.

                              • samir@m.fedica.com
                                #76

                                 @hypolite @riley
                                 A computer is not a human, but tools can replace humans for certain jobs, if not do them better.
                                 If you don't like dishwashers, laundry machines, sewing machines, tractors and diggers, then by all means hire someone to do the work, but most of us find it more effective to use machines instead.
                                 I would rather focus my time on building more complex things than waste it on less complex jobs that a machine (or AI) can easily do in less time.

                                 • onekind@beige.party
                                   #77

                                  @pedromj @riley First, you're assuming that a RAG functions the same way as an LLM. It uses a mix of stochastic and deterministic analysis.

                                  Second, a yes or no answer from a human is also 'fuzzy' in the sense that describing a query in language is never entirely precise, for exactly the reasons I discussed in my previous toot, so the answer given is always 'this is my best guess based on my contingent understanding of your imperfectly phrased question.'

                                  Re your conclusion, I already described the document set as an artificially constructed external reality, which satisfies your objection.

                                   • demi@xeno.glyphpress.com
                                     #78

                                    @riley
                                    Yes, finally someone else gets it!

                                     • hypolite@friendica.mrpetovan.com
                                       #79

                                       @samir Nobody ever told me to treat my dishwasher as an employee, though; why do you feel compelled to do this with LLM-based AI systems?

                                      And if the benefits of these systems were that clear and on par with previously established machines, we wouldn't have this kind of conversation. The problem still isn't that people are using them wrong.

                                       • crapaud@mstdn.social
                                         #80

                                        @riley
                                         David Revoy recently mentioned how Pepper's (orange) cat Carrot was wrongly described as black by Grokipedia. This made me speculate that it would be just as wrong if Carrot happened to be a black cat. Your post confirms that, thanks.
                                        https://framapiaf.org/@davidrevoy/115882389651946345

                                         • lordcaramac@discordian.social
                                           #81

                                           @riley But what if I don't use the chatbot for information but as a character in a game?
