lol, "if only someone had warned us about this sort of thing?!"
-
@Lazarou
I wish they'd stop calling it "#AI". It isn't. Not even close. I went to college to study #AI decades ago.
What they are calling "AI" today is nothing more than "deep database scrubbing".
It *assumes* an answer is correct based simply on the number of results it finds supporting that conclusion.
#GIGO: Garbage In; Garbage Out.
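To make that claim concrete - and to be clear, this is a caricature of the behaviour being described, not a description of how an LLM actually works internally - here is a toy Python sketch where the "answer" is simply whichever claim appears most often in the scraped input, so garbage in really does mean garbage out:

```python
# Caricature of "the most-supported answer wins": garbage in, garbage out.
from collections import Counter

scraped_claims = [
    "the capital of Australia is Sydney",    # wrong, but repeated a lot online
    "the capital of Australia is Sydney",
    "the capital of Australia is Sydney",
    "the capital of Australia is Canberra",  # right, but outnumbered
]

answer, votes = Counter(scraped_claims).most_common(1)[0]
print(f'"{answer}" wins with {votes} votes')  # the popular wrong answer wins
```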
@MugsysRapSheet @Lazarou AI has always been a marketing term. What was called "machine learning" back when you were probably studying bears little resemblance to large language models, but it all gets lumped in the same bucket.
Even when criticizing it, we're encouraged to use terms like "hallucinations" that anthropomorphise the systems, instead of using more correct terms like "statistical error".
-
-
@Lazarou I'm sorry but this is hilarious




-
@Lazarou God I hope this post is true
-
@Lazarou
So… what's the problem? It functions exactly as designed!
-
@Lazarou "We ran into some trouble setting up an office in West Fobispa. Turned out there was no such place. The whole state of Warmington was a hallucination. That was half of our profits the last fiscal year!"
-
@su_liam such incredible faith they put into a machine that they wouldn't put into another human being
-
-
I know this probably makes me a terrible person but I honestly can't _wait_ for the lawsuits.
Added bonus laughs if they use AI to write their legal briefs.
-
@Lazarou A few times in my paid-for AI I've seen not only false data but also bad maths - for example, an inability to convert between base 2 and base 10, or to tell small and capital letters apart in (throughput) calculations - KB vs Kb. Basically it's corrupting the web with millions of "small" errors that humans mostly recognise.
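A minimal Python sketch of the two mix-ups being described (numbers invented for illustration): base-2/base-10 conversion, and the factor-of-eight gap between Kb (kilobits) and KB (kilobytes) in a throughput calculation.

```python
# Illustrative only - the two mix-ups mentioned above.

# 1) Base-2 vs base-10 conversion: this is exact, there is nothing to guess.
assert int("1010", 2) == 10           # binary 1010 is decimal 10
assert format(10, "b") == "1010"      # and back again

# 2) Kb (kilobits) vs KB (kilobytes): a factor-of-eight trap in throughput maths.
link_speed_kbit_per_s = 8_000                      # a link rated at 8,000 Kb/s
correct_kbyte_per_s = link_speed_kbit_per_s / 8    # 1 byte = 8 bits -> 1,000 KB/s
wrong_kbyte_per_s = link_speed_kbit_per_s          # reading Kb as KB -> 8,000 "KB/s"
print(f"correct: {correct_kbyte_per_s:.0f} KB/s, wrong: {wrong_kbyte_per_s:.0f} KB/s")
```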
-
@glent@aus.social @MugsysRapSheet @Lazarou No, that would be the "normalized answer" and not necessarily correct at all. Just "average".
-
-
@Lazarou "double checking" and "accident".
EPIC.
Seems like today, double checking is some kind of failure.
-
@Lazarou meanwhile my ceo
@agasramirez @Lazarou all CEOs went to the same "cool kids" CEO school.
-
-
-
@agasramirez oh wow, I've heard talk of this attitude but to actually see it in action!
@Lazarou As a CEO (which I am not), I would be happy if there were still people in my company who are not totally relying on LLMs.
@agasramirez -
@Lazarou I have my doubts about this post.
I do not doubt that AI can and will produce fake insights. A very easy way to get them is to ask an AI questions whose answers are not in the numbers you're feeding it. Or your questions are heavily biased towards things you want to hear ("Show me how campaign X increased sales").
But unless you're completely ignorant of your own business, you will notice it quickly. Especially if it goes on for 3 months. I deal with a lot of managers and every single one of them looks through raw data. Every system (with or without AI) is faulty, and they go over every bit that influences their salary-relevant metrics with a fine-tooth comb.
-
@Lazarou I don't get how LLMs are needed for analytics? This field is basically about counting numbers, statistics magic and visualization. Isn't it?
I mean, asking an LLM what kind of statistical sorcery would be needed to measure this or that, and asking it for a small Python script, will probably work (if you double-check everything). But what did they do to end up in a situation where an LLM makes up data for months?
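For context, the sort of small Python script meant here might look like the sketch below - the file name and column names (date, campaign, revenue) are hypothetical - where the analytics is just reading, grouping and summing your own numbers, with nothing generated by a model:

```python
# A plain-numbers analytics sketch: no LLM involved, just pandas on your own data.
# The file "sales.csv" and its columns ("date", "campaign", "revenue") are hypothetical.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["date"])

# Monthly revenue per campaign - counted from the data, not made up.
monthly = (
    df.groupby([df["date"].dt.to_period("M"), "campaign"])["revenue"]
      .sum()
      .unstack(fill_value=0)
)
print(monthly)
```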

