generative so-called "AI" is now being used to transcribe and translate Latin manuscripts.
guess the results:
https://blacksky.community/profile/did:plc:vpepa7tkir7iaia3risvyjc6/post/3mhntbkyadc24
Dan Conway (@magisterconway.bsky.social)
Update: it gets so much worse. All of Bede's letters have no basis in the original Latin at all. This one reads like a modern email! (Original Latin, real translation, fake "translation")
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla "signal-shaped noise" is a very apt description. Thank you.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla A BANGER POST
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla I am continually arguing the accuracy-debt angle. I have made significant strides in some areas of building AI 'bulkheads'.
Low-background steel is a good analogy I use. There is before, and there is after.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla There's a great book by Stefania Tutino called A Fake Saint and the True Church, about the forgery of a saint out of letters between Naples and Rome in the 17th century. No AIs were necessary, just lots and lots of letters. As my favourite linguist points out, there's no way to guarantee the veracity of discourse at the level of discourse itself. Never has been. AI didn't change that.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla transcription / translation is one of the areas where I see a good use for LLMs at the moment. But, only as a first-pass.
I use Speech Note to do a first pass at transcribing audio from talks and such that I will write about. But I also go back and watch the talk and clean up the transcript -- I'm not blindly trusting the output, I'm just trying to speed up the act of typing it out and saving some wear and tear on my hands.
An LLM-generated translation or transcription that is not verified is, IMO, generally a dangerous thing. It might be fine for local use to try to get the gist of something, but no organization should be publishing those types of things without verification.
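The first-pass-then-verify workflow described above can be made concrete with a plain diff: nothing gets published until a human has reviewed every change between the machine output and the corrected version. A minimal sketch using Python's standard difflib (the transcript text here is illustrative, not real tool output):

```python
import difflib

# Raw first-pass machine transcript (illustrative lines, not real ASR output).
machine = [
    "speaker said the modle converged after ten epochs",
    "questions were taken at the end",
]
# Human-corrected version, produced by re-watching the talk.
corrected = [
    "speaker said the model converged after ten epochs",
    "questions were taken at the end",
]

# Show every change the reviewer made, so nothing ships unverified.
diff = list(difflib.unified_diff(machine, corrected,
                                 fromfile="machine", tofile="corrected",
                                 lineterm=""))
print("\n".join(diff))
```

The point of keeping both versions is that the diff itself is the audit trail: it records exactly where the model was wrong, rather than silently overwriting the machine output.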
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla Signal-shaped noise is a great term, thank you for that one.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla "Signal-shaped noise" is an utterly brilliant characterization of what "gen AI" produces.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla@transmom.love I'm a data engineer. I've been saying for years if not decades: "Bad data is worse than no data". And, generally, when people hear that, they agree with me.
When I point out that genAI produces bad data, the turnaround to "oh, but, so useful", "early days", etc, is quick and disheartening.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla @jacel As someone who likes using (but not remotely relying on) automated transcription and notetaking that way... as far as I'm concerned, if anyone's *training* on that stuff, then they deserve exactly what they'll get. And if whatever big corporation is *putting that stuff in training sets*, then they need to quit shitting where they eat.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla SIGNAL-SHAPED NOISE
-
@elilla I experimented with using ChatGPT to do OCR on old scanned assembly code listings.
Columnar text has always been a huge challenge for OCR, and I had already tried Tesseract and given up on it.
At first I thought the results from ChatGPT were a revolutionary leap in the state of the art.
Then I looked closer - it had reworded the comments and headers. It even changed the code in places, swapping out entire mnemonics and parameters.
Like any good sloperator I tried to prompt my way around this, which was met by effusive apologies and assurances that it would, going forward, be sure to never do that again.
Which of course, it immediately did.
I suspect there's only the most tenuous thread of context between a "multi-modal" LLM's text and image capabilities - they're basically just two models duct-taped together.
I find this particularly disturbing because someone doing a simple editorial pass, looking for spelling or grammar errors, may not notice anything wrong: the content appears fundamentally correct but was actually altered.
I would rather wade through a sea of Tesseract's obvious typos than have to take on the much higher cognitive burden of making sure grammatically correct sentences weren't invented wholesale.
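The columnar structure that trips up OCR is, once you do have a faithful character-level transcription, trivial to exploit deterministically: fixed column offsets can be split mechanically, with no opportunity for anything to reword a mnemonic. A minimal sketch (the column offsets and the listing line are hypothetical, for a made-up assembler format):

```python
# Split a fixed-width assembly listing into its columns mechanically.
# Offsets are hypothetical -- real listing formats vary by assembler.
FIELDS = [("addr", 0, 6), ("bytes", 6, 16), ("label", 16, 26),
          ("mnemonic", 26, 34), ("operands", 34, 46), ("comment", 46, None)]

def split_listing_line(line):
    """Slice one listing line into named fields at fixed offsets."""
    row = {}
    for name, start, end in FIELDS:
        row[name] = line[start:end].strip() if end else line[start:].strip()
    return row

line = "0100  A9 00     START     LDA     #$00        ; clear accumulator"
row = split_listing_line(line)
print(row["mnemonic"], row["operands"], "--", row["comment"])
```

Unlike an LLM "transcription", a slicer like this can only ever fail loudly (wrong offsets produce visible garbage), never by quietly substituting a plausible-looking different instruction.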
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla Earlier today I reflected on how AI-generated closed captions on local news here in Sweden are too exact. When a human writes them in Sweden, they remove filler words and repeated words. When those suddenly appear in the captions, it takes more cognitive effort to read what people are saying.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla Wrong information is so not better than nothing.

-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
Thing is, our myths and literature have been telling us this for millennia!
*All* the oracle stories involve an oracle saying something ambiguous, which the protagonist dangerously misinterprets. It will always be mushy, you'll always choose the wrong interpretation, and it will always be your fault. In that sense, saying "you have to check the AI result" is a threat, meaning the AI is free to make mistakes, but you will be held liable.
This is not positive information; it is almost *negative* information in that we still don't know the truth, but are tempted into dangerous fantasies of misinterpretation.
We've even turned the whole mess into a cautionary tale with the "ibis redibis" story of the oracle at Dodona, a caution heeded nowadays by almost nobody.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
@elilla I have direct experience of this. There's a handwritten letter from my grandfather dated around 1914 that turned up in a box of stuff. It's in cursive, and younger people are less familiar with cursive, so a family member put it through ChatGPT. The result was, as you'd expect, vaguely similar to what was written, with some alarming inaccuracies. And it missed the actual point he was writing about.
I'm old enough to read cursive and I've had some recent experience making out other old writing in much worse hand, so I could read it quite well. A couple of words were hard to decipher but not impossible.
So my conclusion was that the AI transcription was worse than useless.
-
"AI" users are like, "I know this is imprecise but as a convenience these transcriptions are better than nothing"
then 70 years from now we'll still be struggling to debunk these entirely hallucinated transcriptions of thousands of manuscripts that were pissed into the pool of human knowledge.
some things are worse than nothing. "signal-shaped noise" is worse than nothing.
-
