"AI can make mistakes, always check the results"
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720
-
@jenniferplusplus SMBC Comics had a take on that: https://www.smbc-comics.com/comic/blame
-
@ozzelot @jenniferplusplus it's all "hallucination", sometimes it's incidentally correct
@pikesley @ozzelot @jenniferplusplus
and also they're not people so they don't hallucinate either. chatbots produce noise and the vc firms want that to be our fault.
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720@jenniferplusplus it's the all care, no responsibility clauses of software licences on speed.
Peak billionaire-hoarder techbro, really, not new, just distilled stench. -
@jenniferplusplus this. The fact that we allowed companies to get away with "computer says no" for so long led to this point. If we'd beaten them around the head a decade or two back with "and who owns the computer?! Who programmed it?! A human is responsible for this somewhere", then this technology would not have taken off anywhere close to as well.
Can you imagine the liability insurance OpenAI would have to buy if you could sue them for incorrect results?
@emily_s @jenniferplusplus
As a computer programmer, yes. There is no such thing as a computer error. It is one or more of:
* programmer error
* documentation error
* user error (with a side-order of either documentation error or "user didn't bother to read the documentation")
-
@MisuseCase @jenniferplusplus this isn't even that. This was companies setting up their systems so that when the computer says no, that's it. They claim they can't do anything about it. Somehow they got people to forget that someone programmed that computer to do that. It's not inevitable, it's not carved into the fabric of the universe, it's a few magnetic fields on a disk of rust that a human made and encoded. It can be changed. They just didn't want to, and got away with it.
@emily_s @MisuseCase @jenniferplusplus
I wouldn't actually blame computers for that; it's just one more iteration of the bureaucratic mindset: The Rules say so, and The Rules can't be changed.
-
@kerravonsen @emily_s @jenniferplusplus While Intel were clearly at fault, I think people on the receiving end of the Pentium FDIV bug could reasonably describe that as a computer error
(there are certainly hardware failures of a pernicious nature)
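For anyone who doesn't remember the bug, a minimal C sketch of the classic test case (the numbers are Thomas Nicely's well-known 1994 example, not something from this thread):

#include <stdio.h>

/* The classic Pentium FDIV test case (Nicely, 1994). A correct FPU
 * yields 1.333820449...; a flawed Pentium famously returned about
 * 1.333739068, wrong from the fifth significant digit onward. */
int main(void) {
    double x = 4195835.0, y = 3145727.0;
    double q = x / y;
    printf("quotient: %.9f\n", q);        /* expect 1.333820449 */
    printf("residual: %g\n", x - q * y);  /* ~0 on a correct FPU; 256 on a flawed one */
    return 0;
}

No software anywhere in the stack was at fault; the divide instruction itself returned the wrong answer.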
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720@jenniferplusplus Yes! Thanks for articulating this, I couldn't put my finger on what annoyed me about it.
-
@flippac @emily_s @jenniferplusplus Fiiiiine, there are also hardware errors; but doesn't that again come back to the human who designed the hardware?
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720You stated: <<What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not". Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on.>>
Way back in the early 2000s, there was a system called "Dragon Dictate". The goal was to eliminate #human #transcriptionists with automated speech-to-text (sound familiar?). The system had to be trained on your voice and vocabulary. Once properly trained it could do a pretty good job, I'll guess 95-98%. It was better suited to output that was stereotyped (mostly the same) and structured (such as radiology reports and operative notes).
Regardless of how the note/report was generated, the professional who spoke the words had an obligation to at least scan the output and sign it (yes, with an ink pen!). Once signed it became part of the "legal medical record", open to misinterpretation, copying, lawsuits, etc. etc.
Once Dragon Dictate became routine (and they fired all the transcriptionists) I started to notice this little #disclaimer at the bottom:
"If portions of this note are confusing or indecipherable please feel free to call me with questions or concerns." Sounds a lot like #AI to me! I polite way to summarize this is:
They were trying to force me to be their copy-editor.
It cast the entire content in doubt.
Consider for a moment the difference between saying "The scan does not show cancer." and "The scan does show cancer." That "not" is doing a lot of work, and is very easy to miss when you're talking fast and never intend to read your own note ever again.
More subtle is the grammatical error in the disclaimer's first sentence: "This note was #dictated using Dragon text to speech recognition software." Either they changed their product name to "Dragon Text", in which case the capitalization is off, or they transposed words and it should read "speech to text" (or "speech recognition", with no "text").
In other words, they didn't even proof-read their own disclaimer!
#MedicalRecords #Medicine #SpeechToText #Liability #Risk #SignalToNoise
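To put that 95-98% guess in perspective, a quick back-of-envelope sketch in C (assumptions mine: a 300-word report, and reading the figure as word-level accuracy):

#include <stdio.h>

/* Expected transcription errors in a report of n words at word
 * accuracy p is n * (1 - p). The 300-word length and the reading
 * of "95-98%" as per-word accuracy are assumptions, not from the post. */
int main(void) {
    const int words = 300;
    const double accuracies[] = {0.95, 0.98};
    for (int i = 0; i < 2; i++) {
        double errs = words * (1.0 - accuracies[i]);
        printf("%.0f%% accuracy -> ~%.0f errors per %d-word note\n",
               accuracies[i] * 100.0, errs, words);
    }
    return 0;
}

Even at the optimistic end, that's roughly six errors in every note, any one of which could be that load-bearing "not".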
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720And if the LLM is so wrong, and I agree they are wrong a lot, also annoyingly right then suddenly massively wrong.
What does this say about the datasets they are trained on and the training methodology used to build the model.
-
@flippac @emily_s @jenniferplusplus
See also the Year 2038 problem. https://en.wikipedia.org/wiki/Year_2038_problem -- is that a computer error or a programmer error?
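For context, the 2038 failure mode in miniature; a sketch assuming the classic signed 32-bit time_t (the unsigned cast just keeps the wraparound well-defined in C):

#include <stdio.h>
#include <stdint.h>

/* The Year 2038 problem: a signed 32-bit time_t counts seconds since
 * 1970-01-01 UTC and tops out at 2^31 - 1 = 2147483647, which is
 * 2038-01-19 03:14:07 UTC. One second later it wraps negative. */
int main(void) {
    int32_t t = INT32_MAX;              /* last representable second */
    printf("last second: %d\n", t);     /* 2147483647 */
    t = (int32_t)((uint32_t)t + 1u);    /* wraps to INT32_MIN ... */
    printf("one tick on: %d\n", t);     /* ... which decodes as 1901-12-13 */
    return 0;
}
-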
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720@jenniferplusplus@hachyderm.io Also, I feel it just undermines LLM being actually useful if I had to manually search it up to verify it.
-
@kerravonsen @emily_s @jenniferplusplus Not always: sometimes it's being used outside the design spec, sometimes that's because the design spec wasn't communicated clearly but not always, etc etc.
"When someone says 'computer error' rather than something more specific they're probably full of it" I'm fine with, but one of the realities of computing machines as opposed to the mathematical abstraction of computing is that like all machines they have a non-zero failure rate - even if it's pretty damn tiny.
Now, the amount of shite practice out there re error tolerance/resilience? Sure, we can talk about that (or skip it, because neither of us are newbies here). But bitflips absolutely happen in the wild, especially if someone didn't realise what it really took to keep their machine cool enough.
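A toy illustration of that last point (my sketch, not flippac's): a single flipped bit changes a stored value silently, and only added redundancy, here a one-bit parity check, makes the corruption detectable:

#include <stdio.h>
#include <stdint.h>

/* One flipped bit silently changes a stored value; only explicit
 * redundancy (here, a toy one-bit parity check) can flag it. */
static uint32_t parity(uint32_t v) {   /* 1 iff an odd number of bits are set */
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

int main(void) {
    uint32_t stored = 1000000u;
    uint32_t check  = parity(stored);      /* saved alongside the data */

    stored ^= 1u << 21;                    /* a single cosmic-ray-style bit flip */
    printf("value is now %u\n", stored);   /* 3097152: quietly wrong */
    printf("parity check: %s\n",
           parity(stored) == check ? "ok" : "mismatch, corruption detected");
    return 0;
}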
-
@kerravonsen @emily_s @jenniferplusplus BCD existed: if I'm old enough to talk about FDIV I certainly remember the long buildup to Y2K (including everyone running into it while computing about the future)
-
@kerravonsen @emily_s @jenniferplusplus The Epochalypse specifically is worse, mind: it's an entirely reasonable (initially implicit-spec) "holy shit we did not build this to work for that long and you did it anyway" problem that originated when the relevant software wasn't a piece of critical infrastructure.
For banks and the like, Y2K was expected long-term maintenance.
The Epochalypse is, realistically, user error.
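For anyone who missed the Y2K era, the two-digit-year failure mode in miniature (a sketch; real systems typically stored these as BCD or character fields in fixed record layouts, but the arithmetic problem is the same):

#include <stdio.h>

/* Y2K in one line of arithmetic: with only two year digits stored,
 * "2000" is indistinguishable from "1900" and date math goes negative. */
int main(void) {
    int born = 85;                     /* 1985, stored as two digits */
    int now  = 0;                      /* 2000, stored the same way  */
    printf("age: %d\n", now - born);   /* -85, instead of 15 */
    return 0;
}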
-
@jenniferplusplus Saying “AI can make mistakes” is exactly like saying “An adjustable rate mortgage can increase the interest rate at any time.” It’s not a question of “if”, but “how soon is it possible?”
@mighty_orbot @jenniferplusplus
I would really love to live in your world.
Humans around me fuck up all the time.
Most of the time they won't even apologise when they are sprung on their "hallucination". And they don't come with a warning sticker.
-
@jenniferplusplus right?! What else would you buy if right on the label it said "this may not be what we say it is"??
So it may not be correct information, and you don't know which part. You are using it to avoid doing the legwork yourself. So do you:
Take what it gave you, fingers crossed the wrong bits are not too bad,
or
do the legwork to figure out what is wrong, defeating the purpose?
AND how do you know your source is correct? #AI continuing to learn will keep reintroducing bogusness exponentially!?
@Crystal_Fish_Caves @jenniferplusplus
This does remind me of this fucking weirdness when buying a house:
A lot of the US does not have the government keep track of who owns what land so when you buy a place, you need to also buy insurance that says that you are actually buying it from someone able to sell it.
As far as I can tell every other country just has a department that you can ask "hey is this the owner" and trust the answer.
-
@kerravonsen hey just to be clear, you're doing it right now. You're saying the computer is permitted to be wrong. The consequences will land on whoever was able to avoid them, and they will deserve it for not getting out of the way.
-
"AI can make mistakes, always check the results"
I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.
You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results".
What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".
Except "you" is generally not even the person building, installing, or even using the AI. It's the person the AI is used on:
https://thepit.social/@peter/116205452673914720I think that being liable for the mistakes an AI that you use is only fair... They who live by the sword etc.
-
@Daniel_Blake @jenniferplusplus The problems start if you aren't using the AI because you want to, but because you got ordered to use it.
Cory Doctorow has written a lot about what he calls Reverse Centaurs - persons having to work for a machine instead of persons using a machine. For instance:
https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington