"I just found out that it's been hallucinating numbers this entire time."

Uncategorized · techagenticai
157 Posts · 132 Posters · 9 Views
  • quantillion@mstdn.io

    @Natasha_Jay
    Well, AI for professionals and experts is a tool for an expert who is in charge and responsible.
    The kind of AI use described here is *general-public* AI, i.e. where the user has no idea how correct it is and shouldn't even have to care, as long as it's reasonably plausible.
    Professionals, experts, and businesses can NEVER blame the "AI" for hallucinations they take as truth. 🤷

    toriver@mas.to (#98)

    @Quantillion @Natasha_Jay No, an LLM is a toddler that has read a lot of books but doesn't understand any of them and just likes words that sit next to other words. You need to be very precise and provide a lot of detail in your questions to get anything close to a correct answer, and the next time you ask the same thing, the answer will probably be different.

    But yes, the user bears responsibility as the adult in the relationship.

    • cgudrian@social.tchncs.de (#99)

      @Natasha_Jay Sounds plausible.

      • post_reader@wehavecookies.social (#100)

        @Natasha_Jay
        Heh, they're using randomization to make the chatbot look more "intelligent".
        See this:
        https://towardsdatascience.com/llms-are-randomized-algorithms/
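The randomization mentioned above is, at its core, temperature sampling over next-token probabilities. A minimal sketch, not any real model's sampler; the function name, logits, and token strings here are made up for illustration:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax with temperature, then sample one token.
    Higher temperature flattens the distribution, so repeated
    calls on the same prompt state vary more."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exp = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    # random.choices draws one token according to the probabilities
    return random.choices(list(probs), weights=list(probs.values()))[0]

# The same prompt state can yield a different continuation on each call.
logits = {"42": 2.0, "41": 1.5, "about 40": 1.0}
print(sample_next_token(logits, temperature=0.8))
```

This is why asking the same question twice can produce different numbers: the model samples from a distribution rather than looking anything up.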

        • canleaf@mastodon.social (#101)

          @Natasha_Jay Vibe work is not work.

          • davidr@hachyderm.io

            @Epic_Null @Natasha_Jay That's bad, but honestly: switching to a new system without ever double-checking anything?

            Everyone involved should be fired, including the #AI

            drchaos@sauropods.win (#102)

            @davidr @Epic_Null @Natasha_Jay Fire the bloody management. They keep pushing to "use more AI". If you don't, you're considered not to be a team player, to be obstructive, to hinder the company, and all these things.

            • taurus@thicc.horse

              @Epic_Null @Natasha_Jay @davidr Yeah, it's a system failure.
              The failure is so bad that you need to investigate how such a bad decision could ever have been made, and you need to change your process.

              overtondoors@infosec.exchange (#103)

              @taurus @Epic_Null @Natasha_Jay @davidr but... But... That would lead right up to the board of directors and shareholders. These people are by definition faultless. The eminent purpose of a corporation is to extract wealth without consequences reaching this select group of shitheads.

              • float13@masto.hackers.town (#104)

                @Natasha_Jay

                • soulshine@mastodon.social (#105)

                  @Natasha_Jay

                  I don't get it. How did people start trusting AI the moment our fascist techbro overlords ordered us to?
                  Most of our friends now ask ChatGPT for all their important life decisions.
                  It takes extremely obvious fuckups like the Flock Super Bowl ad to make people pause for a second. Usually, we gobble up whatever the oligarchs ram down our throats.

                  • nerde@beige.party

                    @Kierkegaanks @Natasha_Jay
                    This proves AI can replace CFOs and CEOs!

                    rmhogervorst@friendsofdesoto.social (#106)

                    @Nerde @Kierkegaanks @Natasha_Jay I often think the only people AI can actually replace are CEOs: waxing lyrical about vision, constructing strategies without actual content, no concern for actual truth.

                      • andreas_tengicki@hessen.social (#107)

                      @Nerde @Kierkegaanks @Natasha_Jay Those are the easiest people to replace in most bigger companies.

                      • bnlandor@mastodon.social

                        @Natasha_Jay How does anyone think LLMs base anything on facts or data? They are plausibility machines, designed to flood the zone.

                        rozeboosje@masto.ai (#108)

                        @bnlandor @Natasha_Jay

                        Facts, no. But data, of course. Tons and tons of data, with no ability whatsoever to determine the quality of those data. LLMs learn how *these* kinds of data lead to *those* kinds of output, and that is what they do. They have no way of knowing whether output makes sense, whether it's correct or not, whether it's accurate or not. But they WILL spew out their output with an air of total confidence.

                          • bnlandor@mastodon.social (#109)

                          @rozeboosje @Natasha_Jay The difference between "(actual) data", a.k.a. facts, and "types of data" is doing the heavy lifting here. Any data an LLM learns from is a placeholder for the shape of the data to use, so it can randomize it freely.

                          That's the very reason LLMs cannot count the number of vowels in a word. They "know" the expected answer is a low integer (type of data), but have no clue about the actual value (data).
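The vowel-counting point is easy to demonstrate: counting characters is trivial for deterministic code, whereas an LLM operates on tokens rather than individual letters, so it never "sees" the characters it is asked to count. A minimal sketch (the function name is illustrative):

```python
def count_vowels(word: str) -> int:
    """Deterministically count vowels in a word.
    Trivial for code that sees characters; an LLM sees tokens
    and can only guess that the answer is a small integer."""
    return sum(1 for ch in word.lower() if ch in "aeiou")

print(count_vowels("hallucination"))  # → 6
```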

                          • nausipoule@mamot.fr (#110)

                            @Natasha_Jay "I asked the automatic parrot who makes narrative stories to do my strategic decisions. Guess what, it produced narrative stories.

                            We are still investigating why the automatic parrot made to generate narrative stories does in fact generate narrative stories."

                              • jernej__s@infosec.exchange (#111)

                              @toriver @Quantillion @Natasha_Jay Just say "Are you sure?" after it generates an answer, and it'll immediately generate the opposite answer.

                                • machinelordzero@mastodon.social (#112)

                                @Natasha_Jay "hallucinating" is such a bad term for "making things up".

                                  • coderjason@mastodon.social (#113)

                                  @Natasha_Jay Well, they got what they deserved. "What do you mean, you didn't read it?"

                                  I'm cheering for all the sceptics who said, "let's wait and see how all this #AI stuff pans out." I love using our new dev tools; they're nice. But they aren't what #marketing teams are claiming, nor what fanboys are promising. We now have what we were promised ages ago: #features have finally been delivered. Delayed, but here now.

                                  Anyway, please continue ...

                                    • poslovitch@wikis.world (#114)

                                    @Natasha_Jay I can't find the thread on Reddit; I'd have loved to read some of the comments to see whether they praise AI nonetheless.

                                      • nini@oldbytes.space (#115)

                                      @rmhogervorst A decent amount of middle and upper management, too.

                                      • ktneely@infosec.exchange

                                        @lxskllr @GreatBigTable Meh, they were probably fabricating data for the board long before generative AI hit the scene. The only difference is that now they have a scapegoat.

                                        gerardthornley@hachyderm.io (#116)

                                        @ktneely @lxskllr @GreatBigTable
                                        I think AI is technically a scrape goat. 😀

                                          • azurearmageddon@mastodon.online (#117)

                                          @Natasha_Jay On one hand, yes, a lot of technically-minded people saw this coming a couple of light-years away. On the other hand, I would love for someone paid by this company to keep digging and write an utterly scathing report detailing the nature and extent of the misrepresentations made about the product, and to sue the vendor for relying on a misunderstanding of the product's capabilities to make sales (however successful it may be).
