"AI can make mistakes, always check the results"

#1 jenniferplusplus@hachyderm.io wrote:

    "AI can make mistakes, always check the results"

    I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

    You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

    What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

    Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
    https://thepit.social/@peter/116205452673914720

#2 ozzelot@mstdn.social wrote:

      @jenniferplusplus I believe it does not even make mistakes in the conventional sense, as mistakes require an ability to pursue truth.

#3 emily_s@mastodon.me.uk wrote:

@jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere close to as well.

Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?

#4 pikesley@mastodon.me.uk wrote:

          @ozzelot @jenniferplusplus it's all "hallucination", sometimes it's incidentally correct

#5 crystal_fish_caves@mstdn.party wrote:

@jenniferplusplus right?! What else would you buy if, right on the label, it said "this may not be what we say it is"??

So some of the information may be wrong, and you don't know which part. You're using it precisely to avoid doing the legwork yourself. So do you
take what it gave you, fingers crossed the wrong bits aren't too bad,
or
do the legwork to figure out what's wrong, defeating the purpose?
AND how do you know your source is correct?

#Ai continuing to learn will keep reintroducing bogusness exponentially!?

#6 misusecase@twit.social wrote:

              @emily_s @jenniferplusplus We totally memory-holed all that stuff about machine learning algorithms (really the same thing as AI, but the branding was different back then) and all the hype about how they’d make unbiased decisions. How did that turn out?

              Oh yeah. Garbage in, garbage out.

#7 jenniferplusplus@hachyderm.io wrote:

@Crystal_Fish_Caves what would I buy? Very little.

But a lot more people than we like to think are gambling addicts. This hits the same psychological exploit as trading card packs, blind boxes, and loot crates. And a lot of the people who are the most vigorous proponents are effectively playing with someone else's money.

#8 emily_s@mastodon.me.uk wrote:

@MisuseCase @jenniferplusplus this isn't even that. This was companies setting up their systems so that when the computer says no, that's it. They claim they can't do anything about it. Somehow they got people to forget that someone programmed that computer to do that. It's not inevitable, it's not carved into the fabric of the universe; it's a few magnetic fields on a disk of rust that a human made and encoded. It can be changed. They just didn't want to, and got away with it.

#9 mighty_orbot@retro.pizza wrote:

                    @jenniferplusplus Saying “AI can make mistakes” is exactly like saying “An adjustable rate mortgage can increase the interest rate at any time.” It’s not a question of “if”, but “how soon is it possible?”

#10 danschnau@mastodon.social wrote:

@jenniferplusplus yeah, it's a weak-ass "CYA" for the AI vendors

#11 xs4me2@mastodon.social wrote:

                        @jenniferplusplus

                        True…

#12 lritter@mastodon.gamedev.place wrote:

                          @jenniferplusplus scam culture

#13 maddad@mastodon.world wrote:

                            @jenniferplusplus

                            It's probably safer and easier to just do the job yourself...

#14 acoollady@theatl.social wrote:

@jenniferplusplus They sure came up with an ingenious solution to the trolley problem, tho: hide the switch-thrower behind a wall and blame the victims for being on the wrong tracks

#15 lemmus@social.vivaldi.net wrote:

                                @jenniferplusplus SMBC Comics had a take on that: https://www.smbc-comics.com/comic/blame

#16 drdrowland@fediscience.org wrote:

@pikesley @ozzelot @jenniferplusplus

And also, they're not people, so they don't hallucinate either. Chatbots produce noise, and the VC firms want that to be our fault.

#17 hypostase@bsd.network wrote:

@jenniferplusplus it's the all-care, no-responsibility clauses of software licences, on speed.
Peak billionaire-hoarder techbro, really. Not new, just distilled stench.

#18 kerravonsen@mastodon.au wrote:

                                      @emily_s @jenniferplusplus
                                      As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
                                      * programmer error
                                      * documentation error
                                      * user error (with a side-order of either documentation error or "user didn't bother to read the documentation")

#19 kerravonsen@mastodon.au wrote:

                                        @emily_s @MisuseCase @jenniferplusplus

                                        I wouldn't actually blame computers for that; it's just one more iteration of the bureaucratic mindset: The Rules say so, and The Rules can't be changed.

#20 flippac@types.pl wrote:

                                          @kerravonsen @emily_s @jenniferplusplus While Intel were clearly at fault, I think people on the receiving end of the Pentium FDIV bug could reasonably describe that as a computer error

                                          (there are certainly hardware failures of a pernicious nature)
