@elilla There's a great book by Stefania Tutino called A Fake Saint and the True Church about the forgery of a saint out of letters between Naples and Rome in the 17th C. No AIs were necessary, just lots and lots of letters. As my favourite linguist points out, there's no way to guarantee the veracity of discourse at the level of discourse itself. Never has been. AI didn't change that.
onekind@beige.party
Posts
-
generative so-called "AI" is now being used to transcribe and translate Latin manuscripts.
-
Canadian immigration appears less evil to me but also a heck of a lot more incompetent and confusing to deal with
@skinnylatte Okay so I had to apply for a work visa in Canada when I'd just been diagnosed with bipolar and suddenly a whole bunch of grants I'd applied for came through. BAD TIMES. Holy *shit* that process is confusing. It took me literal weeks, maybe even months, of working through it step by step every day, 9-5. Obviously the cognitive issues didn't help, but neither did my legal training, which won't let me sign or agree to anything unless I understand it 100%. The good news, though, is that if you understand their logic — 'we are going to scrutinise your application to make sure you're not filling an actual job from which an actual Canadian has recently been sacked' — it eventually makes sense.
-
My wife was trying to explain the concept of the ‘Manosphere’ to her mom.
@skinnylatte Previously we just called them preachers.
-
The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.
@pedromj @riley First, you're assuming that a RAG functions the same way as an LLM. In fact it uses a mix of stochastic and deterministic analysis.
Second, a yes or no answer from a human is also 'fuzzy' in the sense that describing a query in language is never entirely precise, for exactly the reasons I discussed in my previous toot, so the answer given is always 'this is my best guess based on my contingent understanding of your imperfectly phrased question.'
Re your conclusion, I already described the document set as an artificially constructed external reality, which satisfies your objection.
-
The notion of a broken clock being sometimes right is based on a gross misunderstanding of what information is.
@riley Riley, are you aware that linguistics in the 60s established that language use conveys meaning by reference to other language, with no guaranteed relation to some external reality? So all words bear the same relationship to reality that a stopped clock has to actual time.
I mention this because LLMs are not designed to provide information about the world; they're designed to generate discourse — language use (their output) that is validly constructed by reference to other language use (their training dataset). It's not fair to judge an LLM for being a lousy search engine.
But if you spin up a RAG like NotebookLM, give it a reality to refer to (a set of documents), and then ask it a question (e.g. 'is XYZ in the document set?'), it turns out LLMs can do a pretty good job of accurately answering yes or no.
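To make the 'artificially constructed external reality' idea concrete, here's a toy sketch of the retrieval half of a RAG pipeline. This is a hypothetical illustration, not NotebookLM's actual implementation: it stands in keyword overlap for real embedding search, and it answers 'yes' only when supporting passages exist in the supplied document set.

```python
# Toy RAG-style grounding sketch (hypothetical, for illustration only):
# answer a yes/no question by checking the external 'reality' (a set of
# documents) rather than anything like an LLM's training data.

def tokenize(text):
    """Lowercase the text and strip trailing punctuation from each word."""
    return {w.strip(".,?!'\"").lower() for w in text.split()}

def retrieve(question, documents, min_overlap=2):
    """Return documents sharing at least `min_overlap` words with the
    question -- a crude stand-in for real embedding-based retrieval."""
    q = tokenize(question)
    return [d for d in documents if len(q & tokenize(d)) >= min_overlap]

def grounded_answer(question, documents):
    """Say 'yes' only if supporting passages were retrieved from the
    document set; otherwise say 'no'."""
    hits = retrieve(question, documents)
    return ("yes", hits) if hits else ("no", [])

docs = [
    "The contract was signed in Naples in 1653.",
    "Shipping records list three vessels leaving Rome.",
]
answer, evidence = grounded_answer("Was the contract signed in Naples?", docs)
print(answer)  # -> yes
```

The design point is the one made above: the answer is always constructed by reference to other language (here, the document set), so 'yes' means 'supported by these texts', not 'true of the world'.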
-
I spent a good 30sec trying to puzzle out what kind of kink garment this depicts.