"I just found out that it's been hallucinating numbers this entire time."

Uncategorized · techagenticai · 157 Posts · 132 Posters
• maniaclives@mastodon.world wrote:

    @sodiboo @drahardja @indigoviolet the reason I know of for breaking the link (by using hxxp) is when you don’t want to contribute referrer data to the linked site. Reddit doesn’t need to know we’re talking about them.

taurus@thicc.horse replied (#90):

@sodiboo @drahardja @indigoviolet @maniaclives I usually just put a space at an obvious location; otherwise readers might spend forever looking for the mistake.

• nickboss@defcon.social wrote:

      @Natasha_Jay

      The only use of ai my company has successfully implemented is having it write emails to dead leads.

quantillion@mstdn.io replied (#91):

      @NickBoss @Natasha_Jay
      Oh jeez, is THAT where all my SPAM is coming from?!!?

• taurus@thicc.horse wrote:

@Natasha_Jay how did they catch it by accident? Isn't checking whether the data actually works one of the first steps you do with it?
So they could have just hired some random person who throws dice, and even that would be a more sound business decision, because at least then you have someone who takes accountability?
All these people worked with the data and not once did it click that it doesn't align with their previous data?

        Something about AI makes people turn off their brain

goedelchen@mastodontech.de replied (#92):

        @taurus @Natasha_Jay Wait, "plausible sounding percentages" is not enough checking?

• fosstastic@mastodon.social wrote (#93):

@Natasha_Jay This really is a major issue with #AI.

Even if you use things like #Gemini "Deep Research", it still loves to make up things it couldn't research properly. For instance, I once tried to use it for researching axle ratios of cars, and every time I repeated the same request it came up with different numbers.

          (It can come up with decent results though for topics where lots of scientific papers are available, like life cycle emissions of vehicles with different propulsion types.)

• drahardja@sfba.social wrote:

            @stragu @Natasha_Jay No idea.

felipe@social.treehouse.systems replied (#94):

@stragu @Natasha_Jay @drahardja someone says it's AI generated, maybe that?

• taurus@thicc.horse replied to goedelchen@mastodontech.de's #92 above (#95):

@Natasha_Jay @goedelchen uh, no, perhaps one should actually check

you know... the things a business has done since... ever? Since the invention of business you keep an eye on the operation.

• tankgrrl@hachyderm.io wrote (#96):

@Natasha_Jay "Flounder, you can't spend your whole life worrying about your mistakes! You fucked up. You trusted us! Hey, make the best of it!"

• taurus@thicc.horse followed up on their own #95 (#97):

@Natasha_Jay @goedelchen ah, I think I answered too seriously
may I advise using a /s when saying things the average stupid person would write non-jokingly

• quantillion@mstdn.io wrote:

                    @Natasha_Jay
                    Well, AI for professionals & experts is a tool for the expert who is in charge and responsible.
                    The kind of use of AI described here is for *general public* AI, i.e. where the user has no idea how correct it is & shouldn't even have to care as long as it is reasonably plausible.
                    Professionals, experts & businesses can NEVER blame the "AI" for the hallucinations they take as truth. 🤷

toriver@mas.to replied (#98):

@Quantillion @Natasha_Jay No, an LLM is a toddler that has read a lot of books but doesn't understand any of them and just likes words that are next to other words. You need to be very precise and provide a lot of detail in your questions to get it to answer anything close to correct, and the next time you ask the same thing the answer is probably different.

But yes, the user bears responsibility as the adult in the relationship.

• cgudrian@social.tchncs.de wrote (#99):

                      @Natasha_Jay Sounds plausible.

• post_reader@wehavecookies.social wrote (#100):

@Natasha_Jay
Hehe, they are using randomization to make the chatbot look more "intelligent".
See this:
https://towardsdatascience.com/llms-are-randomized-algorithms/
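To make the "randomized algorithms" point concrete, here is a minimal sketch of temperature sampling, the standard way LLM decoders pick the next token. The candidate tokens and logit scores below are invented purely for illustration; this is not any vendor's actual decoding code.

```python
# Minimal sketch: why repeated identical prompts can yield different numbers.
# An LLM produces a probability distribution over next tokens; the decoder
# then *samples* from that distribution instead of always taking the top one.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates when the model is asked for a figure it
# never reliably saw in training: several numbers look roughly equally
# "plausible" to it.
candidates = ["3.42", "3.73", "3.55", "4.10"]
logits = [2.1, 2.0, 1.9, 1.2]   # made-up scores, deliberately close together

probs = softmax(logits, temperature=0.8)
for run in range(3):
    answer = random.choices(candidates, weights=probs, k=1)[0]
    print(f"run {run + 1}: axle ratio = {answer}")
# Three runs, three confidently stated, mutually inconsistent numbers.
```

Because the decoder samples rather than always emitting the single most likely token, two identical prompts can legitimately return different, equally confident-looking figures, which matches the axle-ratio behaviour described upthread.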

• canleaf@mastodon.social wrote (#101):

                          @Natasha_Jay Vibe work is not work.

• davidr@hachyderm.io wrote:

                            @Epic_Null @Natasha_Jay That's bad, but honestly--switching to a new system without ever double checking anything?

                            Everyone involved should be fired, including the #AI

drchaos@sauropods.win replied (#102):

                            @davidr @Epic_Null @Natasha_Jay Fire the bloody management. They keep pushing for "use more AI". If you don't, you are considered to not be a team player, be obstructive, hinder the company and all these things.

• taurus@thicc.horse wrote:

                              @Epic_Null @Natasha_Jay @davidr yeah it's a system failure.
                              the failure is so bad you need to investigate how such a bad decision could have ever been made and you need to change your process

overtondoors@infosec.exchange replied (#103):

                              @taurus @Epic_Null @Natasha_Jay @davidr but... But... That would lead right up to the board of directors and shareholders. These people are by definition faultless. The eminent purpose of a corporation is to extract wealth without consequences reaching this select group of shitheads.

• float13@masto.hackers.town wrote (#104):

                                @Natasha_Jay

• soulshine@mastodon.social wrote (#105):

                                  @Natasha_Jay

I don't get it. How did people immediately trust AI as soon as our fascist techbro overlords ordered us to?
Most of our friends ask ChatGPT for all their important life decisions now.
It takes extremely obvious fuckups like the Flock Super Bowl ad to make people pause for a second. Usually, we gobble up whatever the oligarchs ram down our throats.

• nerde@beige.party wrote:

                                    @Kierkegaanks @Natasha_Jay
                                    This proves AI can replace CFOs and CEOs!

rmhogervorst@friendsofdesoto.social replied (#106):

@Nerde @Kierkegaanks @Natasha_Jay I often think the only people AI can actually replace are CEOs. Waxing about vision, constructing strategies without actual content. No concern for actual truth.

• andreas_tengicki@hessen.social also replied to nerde@beige.party (#107):

@Nerde @Kierkegaanks @Natasha_Jay Those are the easiest people to replace in most bigger companies.

• bnlandor@mastodon.social wrote:

@Natasha_Jay How does anyone think LLMs base anything on facts or data? They are plausibility machines, designed to flood the zone.

rozeboosje@masto.ai replied (#108):

                                        @bnlandor @Natasha_Jay

                                        Facts, no. But data, of course. Tons and tons of data, with no ability whatsoever to determine the quality of those data. LLMs learn how *these* kinds of data lead to *those* kinds of output, and that is what they do. They have no way of knowing whether output makes sense, whether it's correct or not, whether it's accurate or not. But they WILL spew out their output with an air of total confidence.

• bnlandor@mastodon.social replied to rozeboosje's #108 above (#109):

@rozeboosje @Natasha_Jay the difference between "(actual) data", a.k.a. facts, and "types of data" is doing the heavy lifting here. Any data it learns from is a placeholder for the shape of data to use, so it can randomize it freely.

That's the very reason LLMs cannot count the number of vowels in a word. They "know" the expected answer is a low integer (type of data), but have no clue about the actual value (data).
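A small sketch of why the model never even sees the letters it would need to count: it operates on token IDs, not characters. This assumes the open-source tiktoken tokenizer and its cl100k_base encoding purely as an example; the word is arbitrary and nothing here is specific to any particular chatbot.

```python
# Minimal sketch of why character-level tasks trip LLMs up: the model is fed
# opaque integer token IDs, not individual letters. Uses the open-source
# `tiktoken` tokenizer (cl100k_base encoding) as an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "onomatopoeia"

token_ids = enc.encode(word)                    # what the model actually receives
pieces = [enc.decode([t]) for t in token_ids]   # the multi-letter chunks those IDs stand for

print("characters we see:   ", list(word))
print("what the model sees: ", token_ids)
print("token pieces:        ", pieces)
print("actual vowel count:  ", sum(c in "aeiou" for c in word))
```

From the model's side only the integer IDs exist, so "how many vowels" has to be answered from statistics about what such answers usually look like (a plausible small integer) rather than by inspecting characters.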
