"AI can make mistakes, always check the results"
-
@kerravonsen hey just to be clear, you're doing it right now. You're saying the computer is permitted to be wrong. The consequences will land on whoever was able to avoid them, and they will deserve it for not getting out of the way
@jenniferplusplus I am quite confused as to how you concluded that I said that, when I've been pointing out that it is human error
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720
-
LLMs do not make mistakes on their own; you make mistakes using them.
> "AI can make mistakes, always check the results"
> I fucking loathe this phrase and everything that goes into it.
Why? It is good advice and important when using LLMs.
I use LLMs every day in my coding practice, and they do make errors (thank you, compiler).
LLMs are a tool, and must be wielded. When you use them, you are responsible for the results.
-
@jenniferplusplus There's a misunderstanding, an "AI can" is like a "worms can", that's the subject. Now it all makes sense.
-
@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side-order of either documentation error or "user didn't bother to read the documentation")
@kerravonsen @emily_s @jenniferplusplus or a gamma ray and bit flip. But that should probably be caught.
-
@jenniferplusplus The computer is wrongly permitted to be wrong. I thought I was agreeing with you.
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720AI *WILL* make mistakes. Do not use.
-
@jenniferplusplus
They want us to pay for a service they won't stand behind. That should tell you everything you need to know.
-
@Crystal_Fish_Caves @jenniferplusplus
This does remind me of this fucking weirdness when buying a house:
In much of the US, the government does not keep track of who owns what land, so when you buy a place, you also need to buy insurance that says you are actually buying it from someone able to sell it.
As far as I can tell every other country just has a department that you can ask "hey is this the owner" and trust the answer.
@gbargoud @Crystal_Fish_Caves @jenniferplusplus if the American insurance industry can find a way to require insurance for something, they will
-
@jenniferplusplus fr fr
-
> What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Agreed... but also... this is by design.
"They" are intentionally designing a system that will (both intentionally & negligently) be used to inflict harms.. while also removing any "accountability" for the harms they inflict
A normal reasonable person sees that old slide deck from IBM about how:
"computers cannot be trusted to make decisions because computers can never be held accountable" as a dystopian warning.
Tech Bros see it as:
"an opportunity to profit from 'Creating the Torment Nexus' while insulating themselves from any consequences for their own actions"