Had a lot of fun with my stats students today.

Uncategorized · 112 Posts · 62 Posters · 20 Views
  • lamecarlate@pouet.it

    @futurebird @Bumblefish I'm no stats student, so maybe I'm missing the basics (for lack of a better term; English is not my main language), but I think listA is the random one. The fact that listB has almost no triplets seems too good to be true.

    futurebird@sauropods.win · #95

    @lamecarlate @Bumblefish

    I've got some bad news. I've posted the solution with a CW on the original thread.

    • ingalovinde@embracing.space

      @AbyssalRook @futurebird I see two mistakes in your reasoning.
      One is technical: the events "the numbers at positions N, N+1 and N+2 are the same" for different values of N are _not_ independent of each other. (For example, if we know the statement is true for N=10, then the likelihood of it being true for N=11 is 1/6, not 1/36.)
      The other reflects a deeper problem with a lot of modern research that relies heavily on p-values: consider how many statements of this kind, containing the same amount of information, one could make. Unless you commit to a specific statement beforehand, before seeing the data, "this statement would only be true in 8% of cases for truly random data" does not really mean anything if it's just one out of 20 equally "interesting" statements one could make about the data (e.g. "how many triplets of incrementing numbers (modulo six) are there", "how many decrementing triplets are there", etc.), each only 8% likely. Of course it is expected that, for most random sequences, a few of these individually unlikely statements will be true.

      futurebird@sauropods.win · #96

      @IngaLovinde @AbyssalRook

      It's been really helpful for me to see how many people focused on the order of the numbers in the list, which I didn't think was very important, since the list is so short that that type of analysis might not be very useful.

      I used the random list to scramble the fake numbers twice. I should have scrambled them more.

      • ingalovinde@embracing.space

        abyssalrook@mstdn.social · #97

        @IngaLovinde I'm not following the first problem in the logic. The situation you're describing might matter if we were counting more and more instances of it happening, but the probability of it happening at least once (~94%) doesn't change at all, and it happening ONLY once might jiggle the ~8% estimate I had, but not move it significantly.

        • flockofcats@famichiki.jp · #98

          @Bumblefish @futurebird
          That was an interesting thread. Our brains are wired to think certain things are “random” when they're not, so when people try to create something that looks random, they often avoid repeated numbers, even though a truly random sequence would contain repeats with some expected frequency. Also, odd numbers are often overrepresented because they feel more random, e.g., 5973 vs 6084. This “looks random, but isn't” pattern often comes up when people fabricate scientific data 🤓
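That expected frequency of repeats is easy to check with a quick simulation (a sketch, not the actual classroom lists): in 100 die rolls there are 99 adjacent pairs, each matching with probability 1/6, so about 16.5 immediate repeats are expected per list.

```python
import random

# 99 adjacent pairs in 100 rolls; each matches with probability 1/6,
# so the expected number of immediate repeats is 99/6 ≈ 16.5.
random.seed(42)
rolls = [random.randint(1, 6) for _ in range(100)]
repeats = sum(rolls[i] == rolls[i + 1] for i in range(99))
print(repeats)  # typically scattered around 16.5
```

A hand-crafted "random" list usually lands far below that count, which is one of the tells.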

          • abyssalrook@mstdn.social

            abyssalrook@mstdn.social · #99

            @IngaLovinde As for the latter, that is entirely true from a research perspective, but I picked the 3-of-a-kind pattern because I assumed the non-random list was entirely human constructed, and that particular pattern is one that sticks out to us the most. Someone making a list by hand is more likely to see "6-6-6" as less random than "6-1-2" or "3-4-5".

            I did not clock 'Which is random?' as one being a dice roll and the other being a shuffled deck of prescribed cards.

            • futurebird@sauropods.win

              ListA was created by making a list of 16 or 17 of each number. The Stdev **of the frequencies** is much lower than what you will find on random lists of similar size.

              ListB was made by rolling dice.
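A small simulation makes the gap concrete (a sketch; the crafted counts below follow the "16 or 17 of each number" description, not the actual lists): the standard deviation of the six face counts in the crafted list sits around 0.47, while genuinely random rolls spread their counts far more.

```python
import random
import statistics

# Crafted list as described: four faces appear 17 times, two appear 16 times.
crafted_counts = [17, 17, 17, 17, 16, 16]
print(statistics.pstdev(crafted_counts))  # ≈ 0.47

def count_stdev(n_rolls=100, sides=6):
    """Stdev of the face-count frequencies for one batch of random rolls."""
    rolls = [random.randint(1, sides) for _ in range(n_rolls)]
    counts = [rolls.count(v) for v in range(1, sides + 1)]
    return statistics.pstdev(counts)

random.seed(0)
sims = [count_stdev() for _ in range(1000)]
print(statistics.median(sims))  # typically in the 3-4 range, far above 0.47
```

Each random count is roughly Binomial(100, 1/6), with standard deviation near 3.7, which is why the near-uniform crafted frequencies stand out.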

              fsologureng@chilemasto.casa · #100

              @futurebird listA has the subsequence 1,1,1,6,1,4 repeated twice at a very short distance, which, while possible, is extremely improbable. That's how I found out it was crafted.
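A repeated run like that is easy to hunt for mechanically. Here is a minimal sketch (the demo sequence is made up for illustration, not the actual listA):

```python
from collections import Counter

def repeated_runs(seq, k=6):
    """Return every length-k window that occurs more than once in seq."""
    windows = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    return {w: c for w, c in windows.items() if c > 1}

# Made-up sequence containing 1,1,1,6,1,4 twice:
demo = [1, 1, 1, 6, 1, 4, 2, 1, 1, 1, 6, 1, 4]
print(repeated_runs(demo))  # {(1, 1, 1, 6, 1, 4): 2}
```

For a truly random list of 100 rolls, a specific length-6 window recurring is rare, so any hit is strong evidence of copy-and-paste construction.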

              • futurebird@sauropods.win

                There is something very creepy about the way LLMs will cheerfully give lists of "random" numbers. But they aren't random in frequency, and as my students pointed out, "it's probably from some webpage about how to generate random numbers".

                But even then, why is the frequency so unnaturally regular? Is that an artifact from mixing lists of real random numbers together?

                demfighter@mas.to · #101

                @futurebird In essence, an LLM is nothing more than a glorified, dumbed-down search engine.

                Instead of producing a set of hyperlinks like a normal search engine would, the algorithm takes excerpts from the sources with the highest "relevance" value. The output is formatted to look like pseudo-speech for no apparent reason.

                The end result is never better than the traditional search results, which may or may not be useful. The only thing the LLMs are good at is wasting electricity.

                • abyssalrook@mstdn.social

                  ingalovinde@embracing.space · #102

                  @AbyssalRook okay let's calculate it:
                  Let a_n be the probability that the sequence of length n does not contain triplets of identical numbers, and does not end with two same numbers; b_n, the same, but ends with two same numbers.
                  Then a_1 = 1, a_2 = 5/6, b_2 = 1/6; a_(n+1) = a_n * 5/6 + b_n * 5/6; b_(n+1) = a_n * 1/6.
                  Or, expanding b_n, we get a_(n+2) = a_(n+1) * 5/6 + a_n * 5/36.
                  Plugging these numbers into Wolfram Alpha (`LinearRecurrence[{5/6, 5/36}, {1, 5/6}, 100]`), we obtain a_100 ~= 0.0762866 and a_99 ~= 0.0781878, and therefore the probability that a sequence of 100 random numbers contains no triplet of the same number is a_100 + a_99/6 ~= 0.0893 = 8.93%.

                  By contrast, the probability that out of 98 random (and independent) triplets none will consist of three same numbers is (35/36)^98 ~= 6.32%.

                  That's a pretty large difference, and not just a jiggle.

                  (I understand that this is not the number you were looking at, but it's the easiest way to illustrate that there is a significant difference between answering questions about triplets of repeating numbers among 98 independent random triplets and among the 98 sub-triplets of a sequence of 100 independent random numbers.)
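The recurrence above can be checked directly without Wolfram Alpha; a minimal sketch with exact fractions, tracking the same a/b split of states:

```python
from fractions import Fraction

def p_no_triplet(n, sides=6):
    """Probability that n independent rolls contain no run of three equal values."""
    p = Fraction(1, sides)
    # a: no triplet yet and the last two rolls differ
    # b: no triplet yet and the last two rolls are equal
    a, b = Fraction(1), Fraction(0)  # length-1 sequence
    for _ in range(n - 1):
        a, b = (a + b) * (1 - p), a * p
    return a + b

print(float(p_no_triplet(100)))       # ≈ 0.0893, matching a_100 + a_99/6
print(float(Fraction(35, 36) ** 98))  # ≈ 0.0632, the "98 independent triplets" figure
```

The two printed values reproduce the 8.93% vs 6.32% contrast in the post: overlapping sub-triplets are not independent, and treating them as such noticeably understates the probability.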

                  • meuwese@mastodon.social

                    @ai6yr @ohmu @futurebird wait so... is that the ultimate question? "What number will an LLM always include when generating random numbers?"

                    ai6yr@m.ai6yr.org · #103

                    @meuwese @ohmu @futurebird Apparently humans have willed that into existence, yes. LOL. (err... Douglas Adams, precisely)

                    • futurebird@sauropods.win

                      @ramsey @Bumblefish

                      Only one of these lists could *plausibly* be from rolling dice.

                      ldpm@wandering.shop · #104

                      @futurebird @ramsey @Bumblefish this is not remotely my area of expertise but I am interested in the answer. My guess would be that the list that looks more evenly distributed is the fake one, and therefore List A is the "actually random" one because it has more seemingly outlying subsets, like a whole bunch of 1s in rapid succession.

                      There are tons of ways to unevenly distribute but relatively few ways to evenly distribute, so the one that seems less even is more likely to be true

                      • ldpm@wandering.shop

                        ldpm@wandering.shop · #105

                        @futurebird @ramsey @Bumblefish also I suspect maybe a Monty Hall kind of thing where you generated a bunch of random lists, and then selected the one that looked least random to you to trick your students.

                        I'd love to know what the actual answer is and what you were hoping to teach your students!

                        • futurebird@sauropods.win

                          The LLM is like a little box of computer horrors that we peer into from time to time.

                          I'm sorry but the whole interface is just so silly.

                          You ask for random numbers with sentences and it pretends to give them to you? What are we doooooing?

                          raffzahn@mastodon.bayern · #106

                          @futurebird

                          "What are we doooooing?"

                          Well, we've taken the babbling of a baby, supercharged it with a huge library of words annotated by probability of sequence, and now management is jumping around like parents bragging about what a genius their 11-month-old is. All because WE try to find meaning in the perceived word sequence.

                          Same management that brags about 1400% lower prices :))

                          • dpiponi@mathstodon.xyz

                            @futurebird It's very weird.

                            In principle, if you take an LLM, you should be able to get it to generate random numbers in a way that reflects the numbers that appear in the corpus it was trained on. If you have the raw model you can probably do that.

                            But if you ask ChatGPT (or at least if I do) it starts talking about how numbers taken from around us typically follow Benford's law so their first digits have a logarithmic distribution. When it then spits out some random numbers it's no longer sampling random numbers from the entire corpus but a sample that's probably heavily biased towards numbers that appear in articles about Benford's law. I.e. what people have previously said about these numbers, rather than the actual numbers.
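For reference, Benford's law says the leading digit d of many naturally occurring numbers appears with probability log10(1 + 1/d); a quick sketch of that distribution shows why a model parroting Benford articles produces very non-uniform "random" digits:

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
for d, p in benford.items():
    print(d, f"{p:.1%}")
# Digit 1 leads about 30.1% of the time; digit 9 only about 4.6%.
```

Uniform random digits would give each leading digit 1-9 a flat 11.1% share, so a Benford-shaped sample is easy to distinguish from a fair generator.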

                            raffzahn@mastodon.bayern · #107

                            @dpiponi @futurebird

                            Which in turn is what LLMs do. They give an averaged output, not a reasoned one.

                            In addition, the inherent laws of measurement and control mean that any output reached will never meet the intended one. Thus LLM output will never increase knowledge, but will migrate toward zero.

                            • futurebird@sauropods.win

                              petabites@mastodon.world · #108

                              @futurebird

                              and how about those "random" passwords generated by AI 😬

                              https://zeroes.ca/@kimcrawley/116099905667994600

                              * over and over, again. #PasswordReuse #VibeSlop

                              • petabites@mastodon.world

                                futurebird@sauropods.win · #109

                                @petabites

                                This is what inspired the whole lesson. I had to show them this.

                                • ldpm@wandering.shop

                                  futurebird@sauropods.win · #110

                                  @ldpm @ramsey @Bumblefish

                                  I put the answer in the original thread with a CW. This was about frequency.

                                  • futurebird@sauropods.win

                                    lamecarlate@pouet.it · #111

                                    @futurebird @Bumblefish Yep, I read it… My bad. I used instinct, guts, not mathematics like the other answers. I should have 😅

                                    • ldpm@wandering.shop

                                      @futurebird I know how to find the SD and I will use the php-stats library every day of the week and twice on Sunday. I would much rather be able to depend on well supported community code. (At least until it is all replaced by ai slop)

                                      futurebird@sauropods.win · #112

                                      @ldpm

                                      I don't mind using libraries, but it's fun to write my own versions of things just so I know how they work.

                                      When we do projects where we share code, I encourage them to use libraries more often. I'm just a grumpy old lady about it sometimes.

                                      • em0nm4stodon@infosec.exchange shared this topic