The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

Uncategorized · 94 Posts · 50 Posters · 70 Views
riley@toot.cat wrote:

The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.

A clock that always shows the same time is never right, even in the moments of the day when the time happens to be what it shows, because you don't gain any information about what time it is by looking at the clock.

This reasoning also applies to chatbots. If you can't tell whether what you have been given is useful information unless you already know the information, then you haven't been given useful information.

riley@toot.cat wrote (#4):

This confusion is also what cold reading is based on, btw. Falling for a chatbot is literally the same type of mistake as falling for a psychic telling you that somebody you used to know who had a vowel in their name died.
proedie@mastodon.green wrote:

@riley Hmmm. I think you got that one wrong. The point of the figure of speech is not to give credit to the clock. The point is to point out that the information is useless.

riley@toot.cat wrote (#5):

@proedie No, that's not how information works. Information is about reducing your uncertainty space. Every time you can exclude half of the uncertainty space, you gain one bit of information. If you exclude less than half of the uncertainty space, you gain less than a bit of information. Just ask Claude[1].

Looking at a broken clock[2] does not reduce your uncertainty space at all, so you gain zero bits of information. The classic formula Claude Shannon is famous for divides the volume of the uncertainty space after gaining information by the volume of the uncertainty space before gaining information, then takes the base-2 logarithm of the ratio and negates it. If you don't care a minus one bit about negative amounts of data, you can turn the ratio on its top; then, negation won't be necessary. But there are didactic reasons for presenting it in the classic way.

[1] Claude Shannon, an overall smart human and a measurer of the entropy of information. Who were you thinking about?
[2] Well, there's the minor issue of knowing that the clock is broken, lest you erroneously throw out parts of your uncertainty space that might actually be valid. But the problem of information-resembling text is also an issue that applies to chatbots.
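The formula riley describes can be sketched in a few lines of Python. (An illustrative aside, not part of the original post; the clock numbers assume minute-level resolution, i.e. 1440 possible times of day.)

```python
import math

def bits_gained(volume_before, volume_after):
    # Shannon's classic form: information is the negated base-2 log of
    # the after/before ratio of uncertainty-space volumes. Written as a
    # difference of logs, which is algebraically the same thing.
    return math.log2(volume_before) - math.log2(volume_after)

# A working clock read to the minute: 1440 possible times shrink to 1.
print(bits_gained(1440, 1))     # about 10.49 bits
# A broken clock: nothing is excluded, so zero bits.
print(bits_gained(1440, 1440))  # 0.0
# Excluding exactly half the space: exactly one bit.
print(bits_gained(2, 1))        # 1.0
```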
missconstrue@mefi.social wrote:

@riley That is such a brilliantly clear analogy.

riley@toot.cat wrote (#6):

@MissConstrue Are you a chatbot sycophanting me up?

These days, one can never be too cautious.
evening@alico.nexus wrote (#7):

@riley@toot.cat this is a good point, but it should also be noted that some types of information can be difficult to obtain but easy to verify.
larsmb@mastodon.online wrote (#8):

@riley This misjudges how and why stochastic algorithms work.

(I am not saying that there is no AI hype, nor that they're ethical.)
proedie@mastodon.green wrote (#9):

@riley That’s the point. You got information theory right. You just misunderstood the expression with the clock.

When I say: ‘My AI gave me a correct answer once’, you can reply: ‘Sure, even a broken clock is correct twice a day.’ Thus stressing that coincidental correctness is worthless.
greenskyoverme@ohai.social wrote (#10):

@riley Yes, this!
cam@lope.social wrote (#11):

@riley I have to say the analogy is so on point, it autistically satisfied my brain (and my left foot (I don't know why)).
meuwese@mastodon.social wrote (#12):

@proedie @riley exactly. This is not countering the proverb, this *is* the proverb.
uilebheist@polyglot.city wrote (#13):

@riley But in my infinite knowledge, I can also add that they died on a day of the week ending with "y"!!!111
zombiecide@polyglot.city wrote (#14):

@riley Long before the advent of chatbots, I found myself musing about the role "trust" plays when receiving information: based on personal interaction in primary groups, on roles in secondary groups, on rules and regulation in tertiary groups. Internet interaction was a new type that could be any of those, or different, but many people tend to apply primary-group patterns and an assumption of familiarity, like conning yourself. Chatbots do this as a service?
riley@toot.cat wrote (#15):

@larsmb I'm not entirely sure I understand your point (I might if you fleshed it out some more), but I suspect a relevant counterpoint you might not have properly considered: the uncertainty space doesn't have to be flat. It can have an extra axis of plausibility, allowing for fuzzy exclusion of points on it, not just a black-and-white excluded/included binary.
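riley's "extra axis of plausibility" has a standard information-theoretic reading: weight points of the uncertainty space by probability and measure uncertainty with Shannon entropy, so exclusion can be fuzzy rather than binary. A sketch in Python (the distributions here are invented purely for illustration):

```python
import math

def entropy_bits(probs):
    # Shannon entropy of a distribution over the uncertainty space;
    # a fuzzy generalisation of counting equally-possible points.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Flat space: four equally plausible points, log2(4) = 2 bits of uncertainty.
before = [0.25, 0.25, 0.25, 0.25]
# Fuzzy exclusion: two points become implausible rather than impossible.
after = [0.45, 0.45, 0.05, 0.05]

gain = entropy_bits(before) - entropy_bits(after)
print(gain)  # about 0.53 bits gained, despite excluding nothing outright
```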
riley@toot.cat wrote (#16):

@evening That is true.
riley@toot.cat wrote (#17):

@Uilebheist The German version is, they died on a masculine day. (German has one day of the week that does not end in -g, der Mittwoch, but all days of the week are masculine.)
riley@toot.cat wrote (#18):

@zombiecide Well, what did your pondering find out about the combination of — it's purely hypothetical, with absolutely no connection to anything in the real world, because nobody would ever do something silly like that — the possibility of applying trust heuristics to a bunch of anonymous people writing in a wikiwiki about the sort of stuff that one might look up in an encyclopædia?
zombiecide@polyglot.city wrote (#19):

@riley Funnily enough, in the time between those first, completely unquantifiable musings and now, many people's trust in a wikiwiki encyclopædia increased a lot: partly, maybe, because of the transparency of its process, and partly, maybe, due to familiarity, with discussions in the media, at school and at workplaces about when and how and what for to trust.
larsmb@mastodon.online wrote (#20):

@riley I blame my undercaffeination; you *did* include that via the "if you can't tell" part.

My apologies for a redundant reply.
trimtab@mastodon.social wrote (#21):

@riley
I love this post, very thought provoking.

As a native English speaker, though, I have never once understood the idiom about broken clocks to mean what you say regarding gaining knowledge.

In my experience it is used to mean that someone or something is sometimes right, but not through any action they took: rather through luck, error, whatever. They are the broken clock.

I love your take though and the point as a whole.
rachelthornsub@famichiki.jp wrote (#22):

@MissConstrue @riley What Miss Construe said.
modulux@node.isonomia.net wrote (#23):

@riley That's a very useful angle on it. Where I think this gets interesting is that there's information which is, so to speak, self-certifying. Consider a proof, written in a form that's subject to a deterministic mechanised check. In many ways, it doesn't matter where you got it from: a Ouija board, a demon whispering, hard work, or an LLM. If the proof correctly typechecks, the theorem is true. Now if we consider programs as proofs...
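modulux's "deterministic mechanised check" can be made concrete with a tiny proof-assistant sketch (Lean 4; an illustrative example, not from the thread): if the file typechecks, the theorem holds regardless of who, or what, produced the term.

```lean
-- If this term typechecks against the stated proposition,
-- the proof is valid no matter its source.
theorem and_comm' (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```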