
"AI can make mistakes, always check the results"

Uncategorized · 50 Posts · 39 Posters
  • jenniferplusplus@hachyderm.io

    "AI can make mistakes, always check the results"

    I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

    You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".

    What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

    Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
    https://thepit.social/@peter/116205452673914720

    crystal_fish_caves@mstdn.party
    #5

    @jenniferplusplus right?! What else would you buy if right on the label it said "this may not be what we say it is"??

    So it may not be correct information, and you don't know which part. You are using it so you don't have to do the legwork yourself. Do you
    take what it gave you, fingers crossed the wrong bits are not too bad,
    or
    do the legwork to figure out what is wrong, defeating the purpose?
    AND how do you know your source is correct?

    #Ai continuing to learn will keep reintroducing bogusness exponentially!?

    • emily_s@mastodon.me.uk

      @jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere close to as well.

      Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?

      misusecase@twit.social
      #6

      @emily_s @jenniferplusplus We totally memory-holed all that stuff about machine learning algorithms (really the same thing as AI, but the branding was different back then) and all the hype about how they’d make unbiased decisions. How did that turn out?

      Oh yeah. Garbage in, garbage out.

      • jenniferplusplus@hachyderm.io
        #7

        @Crystal_Fish_Caves what would I buy? Very little.

        But a lot more people than we like to think are gambling addicts. This hits the same psychological exploit as trading card packs, blind boxes, and loot crates. And a lot of the people who are the most vigorous proponents are effectively playing with someone else's money.

        • emily_s@mastodon.me.uk
          #8

          @MisuseCase @jenniferplusplus this isn't even that. This was companies setting up their systems so that when the computer says no, that's it. They claim they can't do anything about it. Somehow they got people to forget that someone programmed that computer to do that. It's not inevitable, it's not carved into the fabric of the universe, it's a few magnetic fields on a disk of rust that a human made and encoded. It can be changed. They just didn't want to, and they got away with it.

          • mighty_orbot@retro.pizza
            #9

            @jenniferplusplus Saying “AI can make mistakes” is exactly like saying “An adjustable rate mortgage can increase the interest rate at any time.” It’s not a question of “if”, but “how soon is it possible?”

            • danschnau@mastodon.social
              #10

              @jenniferplusplus yeah it's a weak ass "CYA" for the AI vendors

              • xs4me2@mastodon.social
                #11

                @jenniferplusplus

                True…

                • lritter@mastodon.gamedev.place
                  #12

                  @jenniferplusplus scam culture

                  • maddad@mastodon.world
                    #13

                    @jenniferplusplus

                    It's probably safer and easier to just do the job yourself...

                    • acoollady@theatl.social
                      #14

                      @jenniferplusplus They sure came up with an ingenious solution to the trolley problem, though: hide the switch thrower behind a wall and blame the victims for being on the wrong tracks.

                      • lemmus@social.vivaldi.net
                        #15

                        @jenniferplusplus SMBC Comics had a take on that: https://www.smbc-comics.com/comic/blame

                        • pikesley@mastodon.me.uk

                          @ozzelot @jenniferplusplus it's all "hallucination", sometimes it's incidentally correct

                          drdrowland@fediscience.org
                          #16

                          @pikesley @ozzelot @jenniferplusplus

                          and also they're not people so they don't hallucinate either. chatbots produce noise and the vc firms want that to be our fault.

                          • hypostase@bsd.network
                            #17

                            @jenniferplusplus it's the all care, no responsibility clauses of software licences on speed.
                            Peak billionaire-hoarder techbro, really, not new, just distilled stench.

                              • kerravonsen@mastodon.au
                                #18

                              @emily_s @jenniferplusplus
                              As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
                              * programmer error
                              * documentation error
                              * user error (with a side-order of either documentation error or "user didn't bother to read the documentation")

                                • kerravonsen@mastodon.au
                                  #19

                                @emily_s @MisuseCase @jenniferplusplus

                                I wouldn't actually blame computers for that; it's just one more iteration of the bureaucratic mindset: The Rules say so, and The Rules can't be changed.

                                  • flippac@types.pl
                                    #20

                                  @kerravonsen @emily_s @jenniferplusplus While Intel were clearly at fault, I think people on the receiving end of the Pentium FDIV bug could reasonably describe that as a computer error

                                  (there are certainly hardware failures of a pernicious nature)
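
                                    A minimal sketch of the classic FDIV test case in Python, for anyone who hasn't seen it (the "buggy" value below is not computed here; it is the widely reported result from flawed 1994 Pentiums, quoted for illustration):

                                    # The famous failing division behind the Pentium FDIV bug
                                    x, y = 4195835.0, 3145727.0
                                    correct = x / y               # ~1.333820449136241 on working hardware
                                    buggy = 1.3337390689          # approximate result reported from flawed chips
                                    print(correct)
                                    print(abs(correct - buggy))   # discrepancy shows up in the 5th significant digit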

                                    • tehstu@hachyderm.io
                                      #21

                                    @jenniferplusplus Yes! Thanks for articulating this, I couldn't put my finger on what annoyed me about it.

                                      • kerravonsen@mastodon.au
                                        #22

                                      @flippac @emily_s @jenniferplusplus Fiiiiine, there are also hardware errors; but doesn't that again come back to the human who designed the hardware?

                                        • nickrauchen@c.im
                                          #23

                                        @jenniferplusplus

                                        You stated: <<What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not". Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on.>>

                                          Way back in the early 2000s, there was a system called "Dragon Dictate". The goal was to eliminate #human #transcriptionists with automated speech-to-text (sound familiar?). The system had to be trained on your voice and vocabulary. Once properly trained, it could do a pretty good job, I'll guess 95-98%. It was better suited to output that was stereotyped (mostly the same) and structured (such as radiology reports and operative notes).

                                          Regardless of how the note/report was generated, the professional who spoke the words had an obligation to at least scan the output and sign it (yes, with an ink pen!). Once signed, it became part of the "legal medical record", open to misinterpretation, copying, lawsuits, etc. etc.

                                        Once Dragon Dictate became routine (and they fired all the transcriptionists) I started to notice this little #disclaimer at the bottom:

                                        "If portions of this note are confusing or indecipherable please feel free to call me with questions or concerns." Sounds a lot like #AI to me! I polite way to summarize this is:

                                        👉 They were trying to force me to be their copy-editor. 👈

                                        It cast the entire content in doubt.

                                        Consider for a moment the difference between saying "The scan does not show cancer." and "The scan does show cancer." That "not" is doing a lot of work, and is very easy to miss when you're talking fast and never intend to read your own note ever again.

                                        More subtle is the grammatical error in the first sentence. "This note was #dictated using Dragon text to speech recognition software." Either they changed their product name to "Dragon Text", in which case the capitalization is off. Or they transposed words and it should read "speech to text" or "speech recognition" with no text.

                                        👉 In other words, they didn't even proof-read their own disclaimer! 😱

                                        #MedicalRecords #Medicine #SpeechToText #Liability #Risk #SignalToNoise

                                          • cresssalad@mastodon.social
                                            #24

                                          @jenniferplusplus

                                            And if the LLM is so wrong, and I agree they are wrong a lot (also annoyingly right, then suddenly massively wrong),

                                            what does this say about the datasets they are trained on and the training methodology used to build the model?
