"AI can make mistakes, always check the results"

Uncategorized · 50 Posts · 39 Posters
jenniferplusplus@hachyderm.io (original post)

"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not even the person building, installing, or using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720

keithjones@social.vivaldi.net (#38)

@jenniferplusplus
AI is a nifty tool, but blindly trusting its output is foolish. AI should not be treated as an unquestionable authority, which I've personally seen happen in the workplace. The novelty of AI makes it enjoyable for now, yet companies rushing to replace human experience and expertise with AI will soon see quality erode and trust vanish. When that happens, these companies will learn that once quality and trust are lost, winning them back is far harder than maintaining them.

madengineering@mastodon.cloud (#39)

@jenniferplusplus AI is the intern on seven tabs of acid. He can no longer tell the difference between truth and fiction, and this will lead to lots of mistakes, most of which will leave you staring at your monitor in confusion.
You must, at a minimum, verify the work, make sure it corresponds to reality, and get ready to wtf.

janxdevil@sfba.social (#40)

@jenniferplusplus Note well: whether "you" are actually liable for the errors in the output produced by AI in response to your prompting depends entirely on whether "you" are someone privileged with impunity for your own errors in judgment, or instead someone accountable for forced errors outside your own control.

jenniferplusplus@hachyderm.io

@kerravonsen hey just to be clear, you're doing it right now. You're saying the computer is permitted to be wrong. The consequences will land on whoever was able to avoid them, and they will deserve it for not getting out of the way.

kerravonsen@mastodon.au (#41)

@jenniferplusplus I am quite confused as to how you concluded that I said that, when I've been pointing out that it is human error.

worik@mastodon.social (#42)

@jenniferplusplus

LLMs do not make mistakes on their own; you make mistakes using them.

> "AI can make mistakes, always check the results"

> I fucking loathe this phrase and everything that goes into it.

Why? It is good advice, and important when using LLMs.

I use LLMs every day in my coding practice, and they do make errors (thank you, compiler).

LLMs are a tool, and must be wielded. When you use them, you are responsible for the results.

luc0x61@mastodon.gamedev.place (#43)

@jenniferplusplus There's a misunderstanding: an "AI can" is like a "worms can"; that's the subject. Now it all makes sense.

kerravonsen@mastodon.au

@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side order of either documentation error or "user didn't bother to read the documentation")

srvanderplas@datavis.social (#44)

@kerravonsen @emily_s @jenniferplusplus Or a gamma ray and a bit flip. But that should probably be caught.

kerravonsen@mastodon.au (#45)

@jenniferplusplus The computer is wrongly permitted to be wrong. I thought I was agreeing with you.

spacelifeform@infosec.exchange (#46)

@jenniferplusplus

AI *WILL* make mistakes. Do not use.

bolomkxxviii@mastodon.social (#47)

@jenniferplusplus
They want us to pay for a service they won't stand behind. That should tell you everything you need to know.

gbargoud@masto.nyc

@Crystal_Fish_Caves @jenniferplusplus

This does remind me of this fucking weirdness when buying a house:

Title insurance - Wikipedia (en.wikipedia.org)

A lot of the US does not have the government keep track of who owns what land, so when you buy a place, you also need to buy insurance that says you are actually buying it from someone able to sell it.

As far as I can tell, every other country just has a department you can ask "hey, is this the owner?" and trust the answer.

azonenberg@ioc.exchange (#48)

@gbargoud @Crystal_Fish_Caves @jenniferplusplus If the American insurance industry can find a way to require insurance for something, they will.

hasen@hachyderm.io (#49)

@jenniferplusplus fr fr

foriamcj@infosec.exchange (#50)

@jenniferplusplus

> "What it actually says is 'AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not'."

Agreed... but also: this is by design.

"They" are intentionally designing a system that will (both intentionally and negligently) be used to inflict harms, while also removing any "accountability" for the harms they inflict.

A normal, reasonable person sees that old slide deck from IBM, "computers cannot be trusted to make decisions because computers can never be held accountable", as a dystopian warning.

Tech bros see it as an opportunity to profit from "Creating the Torment Nexus" while insulating themselves from any consequences for their own actions.
