"AI can make mistakes, always check the results"

Uncategorized · 50 Posts · 39 Posters
jenniferplusplus@hachyderm.io wrote:

"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

matty@blahaj.zone wrote (#26):

@jenniferplusplus@hachyderm.io Also, I feel it undermines an LLM being actually useful if I have to manually search things up to verify them.

kerravonsen@mastodon.au wrote:

@flippac @emily_s @jenniferplusplus Fiiiiine, there are also hardware errors; but doesn't that again come back to the human who designed the hardware?

flippac@types.pl wrote (#27):

@kerravonsen @emily_s @jenniferplusplus Not always: sometimes it's being used outside the design spec; sometimes that's because the design spec wasn't communicated clearly, but not always; etc.

"When someone says 'computer error' rather than something more specific, they're probably full of it" I'm fine with. But one of the realities of computing machines, as opposed to the mathematical abstraction of computing, is that like all machines they have a non-zero failure rate, even if it's pretty damn tiny.

Now, the amount of shite practice out there re error tolerance/resilience? Sure, we can talk about that (or skip it, because neither of us are newbies here). But bitflips absolutely happen in the wild, especially if someone didn't realise what it really took to keep their machine cool enough.

kerravonsen@mastodon.au wrote:

@flippac @emily_s @jenniferplusplus
See also the Year 2038 problem: https://en.wikipedia.org/wiki/Year_2038_problem -- is that a computer error or a programmer error?

flippac@types.pl wrote (#28):

@kerravonsen @emily_s @jenniferplusplus BCD existed: if I'm old enough to talk about FDIV, I certainly remember the long buildup to Y2K (including everyone running into it while computing about the future).
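[Editor's note: for anyone who hasn't met the Year 2038 problem discussed above: Unix time counts seconds since 1970-01-01 UTC, and a signed 32-bit `time_t` runs out of room in January 2038. A quick sketch in plain Python (not from anyone in the thread) pins down the exact moment:]

```python
from datetime import datetime, timedelta, timezone

# Unix time is seconds since 1970-01-01 00:00:00 UTC. A signed
# 32-bit time_t can hold at most 2**31 - 1 of those seconds;
# one second later, the counter wraps around to a 1901 date.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
last_valid = EPOCH + timedelta(seconds=2**31 - 1)

print(last_valid.isoformat())  # 2038-01-19T03:14:07+00:00
```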

flippac@types.pl wrote (#29):

@kerravonsen @emily_s @jenniferplusplus The Epochalypse specifically is worse, mind: it's an entirely reasonable (initially implicit-spec) "holy shit, we did not build this to work for that long and you did it anyway" problem that originated when the relevant software wasn't a piece of critical infrastructure.

For banks and the like, Y2K was expected long-term maintenance.

The epochalypse is, realistically, user error.

mighty_orbot@retro.pizza wrote:

@jenniferplusplus Saying “AI can make mistakes” is exactly like saying “An adjustable rate mortgage can increase the interest rate at any time.” It’s not a question of “if”, but “how soon”.

n_dimension@infosec.exchange wrote (#30):

@mighty_orbot @jenniferplusplus

I would really love to live in your world.

Humans around me fuck up all the time. Most of the time they won't even apologise when they are sprung on their "hallucination".

And they don't come with a warning sticker.

crystal_fish_caves@mstdn.party wrote:

@jenniferplusplus right?! What else would you buy if right on the label it said "this may not be what we say it is"??

So it may not be correct information, and you don't know which part. You are using it to avoid doing the legwork yourself. Do you take what it gave you, fingers crossed the wrong bits are not too bad, or do the legwork to figure out what is wrong, defeating the purpose? And how do you know your source is correct?

#Ai continuing to learn will keep reintroducing bogusness exponentially!?

gbargoud@masto.nyc wrote (#31):

@Crystal_Fish_Caves @jenniferplusplus

This does remind me of this fucking weirdness when buying a house: Title insurance - Wikipedia (en.wikipedia.org)

A lot of the US does not have the government keep track of who owns what land, so when you buy a place, you need to also buy insurance that says that you are actually buying it from someone able to sell it.

As far as I can tell, every other country just has a department that you can ask "hey, is this the owner" and trust the answer.

jenniferplusplus@hachyderm.io wrote (#32):

@kerravonsen hey, just to be clear, you're doing it right now. You're saying the computer is permitted to be wrong. The consequences will land on whoever was able to avoid them, and they will deserve it for not getting out of the way.

daniel_blake@mastodon.top wrote (#33):

@jenniferplusplus

I think that being liable for the mistakes of an AI that you use is only fair... They who live by the sword, etc.

soulsource@mastodon.gamedev.place wrote (#34):

@Daniel_Blake @jenniferplusplus The problems start if you aren't using the AI because you want to, but because you got ordered to use it.

Cory Doctorow has written a lot about what he calls Reverse Centaurs: persons having to work for a machine instead of persons using a machine. For instance:
https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington

pidaroso@mastodon.social wrote (#35):

@jenniferplusplus s

bredroll@mas.to wrote (#36):

@jenniferplusplus i agree, but I also think that LLMs being unreliable is part of the business model: if it gave acceptable answers the first time, you'd only ask one question; if it messes up slightly, you type more stuff, you rephrase the prompt or rewrite the spec, all of which are more tokens that your org will actually pay for. It's like built-in #enshittification from the start.

waimeafalls@ohai.social wrote (#37):

@jenniferplusplus AI appears to "learn from its mistakes" and amplify them...

keithjones@social.vivaldi.net wrote (#38):

@jenniferplusplus
AI is a nifty tool, but blindly trusting its output is foolish. AI should not be treated as an unquestionable authority, which I've personally seen happen in the workplace. The novelty of AI makes it enjoyable for now, yet companies rushing to replace human experience and expertise with AI will soon see quality erode and trust vanish altogether. When that happens, these companies will learn that once quality and trust are lost, winning them back is far harder than maintaining them.

madengineering@mastodon.cloud wrote (#39):

@jenniferplusplus AI is the intern on seven tabs of acid. He can no longer tell the difference between truth and fiction, and this will lead to lots of mistakes, most of which will leave you staring at your monitor in confusion.
You must at a minimum verify the work, make sure it corresponds to reality, and get ready to wtf.

janxdevil@sfba.social wrote (#40):

@jenniferplusplus Note well: whether “you” are actually liable for the errors in the output produced by AI in response to your prompting depends entirely on whether “you” are someone privileged with impunity for your own errors in judgment, or someone accountable for forced errors outside your own control.

kerravonsen@mastodon.au wrote (#41):

@jenniferplusplus I am quite confused as to how you concluded that I said that, when I've been pointing out that it is human error.

worik@mastodon.social wrote (#42):

@jenniferplusplus

LLMs do not make mistakes on their own; you make mistakes using them.

> "AI can make mistakes, always check the results"

> I fucking loathe this phrase and everything that goes into it.

Why? It is good advice, and important when using LLMs.

I use LLMs every day in my coding practice, and they do make errors (thank you, compiler).

LLMs are a tool, and must be wielded. When you use them, you are responsible for the results.

luc0x61@mastodon.gamedev.place wrote (#43):

@jenniferplusplus There's a misunderstanding: an "AI can" is like a "worms can", that's the subject. Now it all makes sense.

kerravonsen@mastodon.au wrote:

@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side-order of either documentation error or "user didn't bother to read the documentation")

srvanderplas@datavis.social wrote (#44):

@kerravonsen @emily_s @jenniferplusplus or a gamma ray and a bit flip. But that should probably be caught.

kerravonsen@mastodon.au wrote (#45):

@jenniferplusplus The computer is wrongly permitted to be wrong. I thought I was agreeing with you.
