"I just found out that it's been hallucinating numbers this entire time."
-
@Natasha_Jay Sounds plausible.
-
@Natasha_Jay
Heh, they are using randomization to make the chatbot look more "intelligent"
See this:
https://towardsdatascience.com/llms-are-randomized-algorithms/
-
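(The "randomization" mentioned above usually refers to temperature sampling: the model's raw scores are turned into probabilities and a token is drawn at random. A minimal sketch, assuming standard softmax sampling; the function name is my own, not from the linked article.)

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one index from raw logits using temperature scaling.

    temperature < 1 sharpens the distribution (more deterministic);
    temperature > 1 flattens it (more 'creative'/random).
    """
    # Scale logits, then softmax (subtract the max for numerical stability).
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# At a very low temperature the highest logit wins essentially every time;
# at higher temperatures the other tokens get picked too.
logits = [2.0, 1.0, 0.5]
rng = random.Random(0)  # seeded for reproducibility
picks = [sample_with_temperature(logits, temperature=0.05, rng=rng) for _ in range(100)]
```

Same prompt, different seed (or nonzero temperature) means a different answer, which is the "not deterministic" behaviour people keep rediscovering.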
@Natasha_Jay Vibe work is not work.
-
@Epic_Null @Natasha_Jay That's bad, but honestly: switching to a new system without ever double-checking anything?
Everyone involved should be fired, including the #AI
@davidr @Epic_Null @Natasha_Jay Fire the bloody management. They keep pushing to "use more AI". If you don't, you're considered not a team player: obstructive, hindering the company, and all that.
-
@Epic_Null @Natasha_Jay @davidr yeah it's a system failure.
the failure is so bad you need to investigate how such a bad decision could ever have been made, and you need to change your process
-
@taurus @Epic_Null @Natasha_Jay @davidr But... but... that would lead right up to the board of directors and shareholders. These people are by definition faultless. The eminent purpose of a corporation is to extract wealth without consequences reaching this select group of shitheads.
-
I don't get it. How did people immediately trust AI as soon as our fascist techbro overlords ordered us to?
Most of our friends ask chat GPT for all their important life decisions now.
It takes extremely obvious fuckups like the Flock Superbowl ad to make people pause for a second. Usually, we gobble up whatever the oligarchs ram down our throats.
-
@Kierkegaanks @Natasha_Jay
This proves AI can replace CFOs and CEOs!
-
@Nerde @Kierkegaanks @Natasha_Jay Those are the easiest people to replace in most bigger companies.
-
@Natasha_Jay How does anyone think LLMs base anything on facts or data? They are plausibility machines, designed to flood the zone.
-
Facts, no. But data, of course. Tons and tons of data, with no ability whatsoever to determine the quality of those data. LLMs learn how *these* kinds of data lead to *those* kinds of output, and that is what they do. They have no way of knowing whether output makes sense, whether it's correct or not, whether it's accurate or not. But they WILL spew out their output with an air of total confidence.
-
@rozeboosje @Natasha_Jay the difference between "(actual) data", aka facts, and "types of data" doing the heavy lifting here. Any data it learns from is a placeholder for the shape of data to use, so it can randomize it freely.
That's the very reason LLMs cannot count the number of vowels in a word. They "know" the expected answer is a low integer (type of data), but have no clue about the actual value (data).
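(For contrast with the post above: outside an LLM, counting vowels is a one-line deterministic computation. A plain Python illustration, not anything from the thread.)

```python
def count_vowels(word: str) -> int:
    """Deterministically count the vowels a, e, i, o, u in a word."""
    return sum(1 for ch in word.lower() if ch in "aeiou")

# Unlike an LLM's guess at a "low integer", the result is exact
# and reproducible every time.
count_vowels("plausibility")  # → 5
```

An LLM sees the word as tokens rather than letters, so it predicts a plausible-looking count instead of computing one.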
-
@Natasha_Jay "I asked the automatic parrot that makes narrative stories to make my strategic decisions. Guess what, it produced narrative stories.
We are still investigating why the automatic parrot made to generate narrative stories does in fact generate narrative stories."
-
@Quantillion @Natasha_Jay No, an LLM is a toddler that has read a lot of books but doesn't understand any of them and just likes words that sit next to other words. You need to be very precise and provide a lot of detail in your questions to get an answer that's anywhere close to correct, and the next time you ask the same thing the answer will probably be different.
But yes, the user bears responsibility as the adult in the relationship.
@toriver @Quantillion @Natasha_Jay Just say "Are you sure?" after it generates an answer, and it'll immediately generate the opposite answer.
-
@Natasha_Jay "hallucinating" is such a bad term for "making things up".
-
@Natasha_Jay Well, they got what they deserved. "What do you mean, you didn't read it?"
I'm cheering for all the sceptics who said, "let's wait and see how all this #AI stuff pans out." I love using our new dev tools; they're nice. But they aren't what #marketing teams are claiming, nor what fanboys are promising. We now have what we were promised ages ago: the #features have finally been delivered. Delayed, but here now.
Anyway, please continue ...
-
@Natasha_Jay I can't find the thread on Reddit, I'd have loved to read some of the comments to see if they praise AI nonetheless
-
@Kierkegaanks @Natasha_Jay I often think the only people AI can actually replace are CEOs . Waxing about vision, constructing strategies without actual content. No concern for actual truth.
@rmhogervorst Decent amount of middle and upper management too.
-
@lxskllr @GreatBigTable Meh, they've probably been fabricating data for the board since long before generative AI hit the scene. The only difference is that now they have a scapegoat.
@ktneely @lxskllr @GreatBigTable
I think AI is technically a scrape goat.
-
@Natasha_Jay On one hand, yes, a lot of technically-minded people saw this coming from a couple odd lightyears away. On the other hand, I would love for someone paid by this company to keep digging and write an utterly scathing report detailing the nature and extent of the product's misrepresentations, and then sue the vendor for sales that relied on a misunderstanding of the product's capabilities (however successful those sales may have been).
-
@Natasha_Jay Does anyone feel sympathy for these people? Because I don’t. 🫤
