"I just found out that it's been hallucinating numbers this entire time."

Uncategorized · techagenticai · 157 Posts · 132 Posters
ghostonthehalfshell@masto.ai #27

@dropbear @Natasha_Jay

SHOCKED I TELL YOU
xethos@mastodon.xethos.net #28

@Natasha_Jay "Here's what a functional company's numbers would look like"

Unfortunately, these are not *your* company's numbers
foolishowl@social.coop #29

@Natasha_Jay This offers a glimmer of hope that the managerial class may self-destruct without doing too much more damage to the rest of us.
ghostonthehalfshell@masto.ai #30

@dropbear @Natasha_Jay

By the way, always delighted to see what is, to my understanding, a classic prank Aussies play on tourists.
ferrix@mastodon.online #31

@Natasha_Jay @wendynather the "making up plausible shit" machine once again made up plausible shit, continually surprising everyone
craigduncan@mastodon.au #32

@Natasha_Jay

Source is here for anyone interested in it:

(www.reddit.com)
maccruiskeen@social.linux.pizza #33 (in reply to ferrix@mastodon.online)

@ferrix @Natasha_Jay @wendynather "Making up plausible shit" is a human consultant's job! AI is going to replace us!
cptsuperlative@toot.cat #34

@Natasha_Jay

No one should be surprised.

It is mathematically impossible to stop an LLM from “hallucinating” because “hallucinating” is what LLMs are doing 100% of the time.

It’s only human beings who distinguish (ideally) between correct and incorrect synthetic text.

And yet, it’s like forbidden candy to a child. Even well-educated, thoughtful people so desperately want to believe that this tech “works”.
ferrix@mastodon.online #35 (in reply to maccruiskeen@social.linux.pizza)

@maccruiskeen @Natasha_Jay @wendynather MUPSaaS
nantucketlit@mastodon.social #36

@Natasha_Jay "To an artificial mind, all reality is virtual."
npars01@mstdn.social #37 (in reply to 0x0ddc0ffee@infosec.exchange):

    @Natasha_Jay I suspect that one of the few applications where a heavily hallucinating LLM can outperform a human would be in replacing board members, C-suite executives, and their direct reports. I propose a longitudinal study with a control group of high-level executives using real data, and an experimental group using hallucinated, or maybe even totally random, data filtered into plausible ranges, using executive compensation deltas as the metric.

@0x0ddc0ffee @Natasha_Jay

Remember who funds AI.

https://www.al-monitor.com/originals/2024/05/saudi-prince-alwaleed-bin-talal-invests-elon-musks-24b-ai-startup

https://www.washingtonpost.com/technology/2025/05/13/trump-tech-execs-riyadh/

(www.forbes.com)

(www.forbes.com)

Disaster Capitalists -- they want another bubble bursting that sends another generation of wealth upwards to the 1%.

Global financial crashes & mass dissatisfaction provide the makings for fascist movements & profitable imperialist wars of extraction.

https://www.taxresearch.org.uk/Blog/2025/09/03/austerity-is-the-midwife-of-fascism/

(www.euractiv.com)

What will it take to beat the far right? -- "The far right feeds on economic despair and exclusion. Only policies that deliver security, dignity and belonging can starve it." (www.ips-journal.eu)
jkrotkov@mastodon.social #38

@Natasha_Jay

Hallucinating numbers, you say?

It must be learning from the current American administration...
countholdem@mastodon.social #39

@Natasha_Jay #BoycottAI
only_exception@mstdn.plus #40

@Natasha_Jay someone is getting fired
stefanol@mstdn.ca #41

@Natasha_Jay 😬
indigoviolet@tech.lgbt #42 (in reply to drahardja@sfba.social):

    @Natasha_Jay This part in the original post is fantastic:

    “The worst part I raised concerns about needing validation in November and got told I was slowing down innovation.”

    hxxps://www.reddit.com/r/analytics/comments/1r4dsq2/we_just_found_out_our_ai_has_been_making_up/

@drahardja @Natasha_Jay I believe it’s supposed to be h*tt*ps
iveyline@mastodon.social #43

@Natasha_Jay It seems to me that AI tools have been released prematurely to generate revenue from the massive investments AI tech companies are making. This type of hallucination could cause serious damage to any business, organisation or person using it. Who is accountable in the end? Given the history of big tech's social media platforms and the harm they are causing, you can bet they'll accept no responsibility.
jmcclure@sciences.social #44

@Natasha_Jay

"... just inventing plausible sounding [answers]"

This shit is so tiring - that is literally all any AI is even *meant* to do. They are not even designed to give correct answers to questions, but just examples of what a plausible answer could sound like.

(edit: sorry, I know I'm likely preaching to the choir here, but it's just so fucking tiring seeing people surprised by this crap.)
ai6yr@m.ai6yr.org #45

@Natasha_Jay Bwahahahahahaha
erikjonker@mastodon.org #46

@Natasha_Jay ... and you really want to blame the technology for this? If there was no process for checking the facts that were used, it is simply a bad implementation; everyone knows you have to check for, or do something about, hallucinations with GenAI.
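
For readers wondering what the validation step mentioned in the last post could look like in practice, here is a minimal sketch. It assumes the LLM reports its figures as JSON and that the ground-truth numbers can be recomputed from the raw records with pandas; the function, column, and metric names (validate_llm_metrics, revenue, total_revenue, avg_order_value) and the 1% tolerance are illustrative assumptions, not details from the thread or the Reddit post.

# Minimal sketch: recompute the figures from the source data and flag any
# LLM-reported number that drifts too far. Names and thresholds are assumptions.
import json
import pandas as pd

TOLERANCE = 0.01  # accept at most 1% relative deviation (assumed policy)

def validate_llm_metrics(raw_csv_path: str, llm_report_json: str) -> dict:
    """Compare metrics claimed by an LLM against values recomputed from the data."""
    df = pd.read_csv(raw_csv_path)          # ground-truth records
    claimed = json.loads(llm_report_json)   # e.g. {"total_revenue": 1200000, "avg_order_value": 87.5}

    # Recompute the same metrics directly from the data (hypothetical column names).
    actual = {
        "total_revenue": float(df["revenue"].sum()),
        "avg_order_value": float(df["revenue"].mean()),
    }

    results = {}
    for name, truth in actual.items():
        if name not in claimed:
            results[name] = "missing from LLM report"
            continue
        rel_err = abs(claimed[name] - truth) / max(abs(truth), 1e-9)
        results[name] = "ok" if rel_err <= TOLERANCE else f"mismatch: claimed {claimed[name]}, actual {truth}"
    return results

Anything flagged as a mismatch or missing would go back to a human reviewer before it reaches a dashboard or a board deck.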