I suspect the confident-but-wrong behavior of AI agents will lead to major problems as adoption becomes widespread.
These tools confidently create mediocre or incorrect outputs which you might trust if you don’t scrutinize them.
Then once caught they respond “You’re absolutely right” without even learning what they did wrong.
The other day I was explaining that it is a Dunning–Kruger machine that knows nothing about your areas of expertise but mysteriously knows everything about all other areas. I was trying to be facetious.
Instead, the guy I was talking to was like “yeah it’s so smart about everything!”
-
@carnage4life Sorry if this is too confrontational but you’ve been talking about and kinda promoting this stuff for quite a while now, and you post this now?
@lari AI agents are a tool not a panacea.
They are good at certain tasks and bad at others, either way their output needs to be reviewed not trusted blindly.
-
@carnage4life A few years back I described them as "that guy on the conference call speaking confidently, plausibly, and erroneously."
It is frustrating how little the interaction has changed since then.
-
@carnage4life AI is not "they" but "it." AI cannot "learn" because it cannot "know" - AI is an aggregator built on information stolen from people.
-
@carnage4life Nope. The only "tool" is the person who promotes AI.
-
@Axomamma LOL. Thanks for the laugh this afternoon.
-
@carnage4life The "confident but wrong" behavior, plus the way AI responds to the user ("Oh wow, that is a fantastic idea!" "Oh, you're so right!"), really hammered home that this software was so clearly written by bros.
I had to use it for work for a while, and the obsequious, servile nature of the responses is REALLY creepy.
-
@carnage4life An anthropomorphized, better search engine that can type faster than you can. In the past you found posts on the Internet where people said slightly wrong, partially correct things about the topic you were looking for. If you knew more than the author, you knew they were wrong; if not, you had to validate that what they said was true, because you didn't know better.
-
@carnage4life Yes, and the effect worsens the further your query is from the model training data:
https://techhub.social/@mattjhayes/116402395762959078
-
@carnage4life How might they answer:
"How many other times have you given the wrong information about this?"
"How often do you give answers that were wrong?"
"How often do you incorporate the correct answer to not make the mistake again?"