Machine translations are often brought up as a gotcha whenever I criticize LLMs.

  • gargron@mastodon.social (original post)

    Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: Machine translations existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine translated subtitles? It's absolute garbage.

  • galaxis@mastodon.infra.de (#4)

    @Gargron Machine-translated UIs are an even worse crime. LLMs don't have the slightest idea of the context of some random button, and (looking at Microsoft's German UI translations recently) seem to pick the worst possible word to drop into it.

    • gargron@mastodon.social (#5)

      I have the impression that primarily anglophone people don't read as much translated literature, because so much good literature already exists in their language, so this issue may not be as familiar within that demographic. As someone who did not grow up anglophone, I can tell you there is a world of difference between a good and a bad translation even when done by humans. Machine translations are not even on the scale.

      • gargron@mastodon.social (#6)

        From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world but would also take machine translations away with them (which is a false dichotomy anyway), I would absolutely still press it.

        • df@s.dfaria.eu (#7)

          @Gargron But it seems that LLMs are here to stay. This time, it doesn't seem to be just a passing fad. There is a lot of investment involved.

          • abucci@buc.ci (#8)
            @df@s.dfaria.eu @Gargron Investment by people who only four years ago were telling us blockchain was inevitable and here to stay, and who have been telling us for twenty years that fully autonomous self-driving cars were right around the corner. At some point you have to pull your head out of the fog and recognize that all of this is nothing more than marketing. Of course they want to convince you that this time it's different, this time X or Y is here to stay, because they become very wealthy if most of us believe that. That doesn't make it true.
            • df@s.dfaria.eu (#9)

              @abucci The investment I was referring to was not the recent venture capital trend. What I had in mind was investment in artificial intelligence research, which has been a major academic and scientific endeavor since the 1950s, with pioneers such as Alan Turing laying the groundwork.

              • df@s.dfaria.eu (#10)

                @abucci Current models are part of a long history of research in machine learning, computer science, and computational linguistics. Of course, there is marketing around it, but reducing the entire field to a passing marketing fad ignores decades of serious scientific work.

                • abucci@buc.ci (#11)
                  @df@s.dfaria.eu @abucci I have a PhD in computer science and did research under the umbrella of AI. I have a decent sense of the state of the art. I stand by my earlier post. You are correct that there is intellectual investment in AI as well, but my own view is that a bunch of the so-called research is in fact fake, or if not fake then of dubious quality. Meanwhile, all the other diverse subareas of computer science continue just fine without LLMs and are able to produce results with provable guarantees, something current LLMs cannot do and may never be able to do. So, again, there is a fog of hype here that we need to pull our heads out of to see clearly.

                  Regarding my claim that some of the research is fake: arXiv recently stopped accepting submissions to their computer science category because it was being overwhelmed with slop submissions and what amount to corporate whitepapers that would never hold up if submitted to a proper scientific journal. Nature Publishing Group, a formerly prestigious scientific publisher, has been terrible about promoting low-quality corporate marketing that pretends to be science. The money and marketing penetrate there too.
                  • df@s.dfaria.eu (#12)

                    @abucci Right, I see your point. But what solution are you proposing? Banning the use of AI? Surely there must be other ways? We could try to educate people on the ethical and responsible use of AI...

                    • abucci@buc.ci (#13)
                      @df@s.dfaria.eu Demanding that a person analyzing a situation should also immediately provide a solution does not make sense. Why are you making this demand? I am not a policymaker, nor a dictator. We should make this decision collectively, in a way that's fair and reasonable, while taking full account of the facts as we know them. One way of taking full account of the facts, and therefore making better decisions, is clearing away hype, mania, illusion, con artistry, etc., which is something I attempt to do.

                      What's the problem with banning things? We've banned asbestos. We've banned smoking cigarettes in certain locations. We ban dangerous things all the time. We ban unethical things too, for instance Ponzi schemes (in theory). If AI is irredeemably bad, why shouldn't we ban it?

                      In any case, if you're interested in "ethical use of AI", how do you suggest it is possible to ethically use this technology? It's been built on stolen material and the labor of underpaid content taggers who now have PTSD from their work, and is repeatedly being promoted with lies. How is it ethical to use a technology that is causing people's electric bills to double or triple to pay for data centers, and that is causing water crises in an increasing number of towns across the US? And that's before other deep issues, such as its being implicated in numerous assaults, murders, and suicides. As far as I can tell, AI as it's currently constituted is an ugly and destructive technology down to its very core, and it's hard to see how it can be used ethically unless one perverts the meaning of the word "ethics" so far that it becomes meaningless.
                      • df@s.dfaria.eu (#14)

                        @abucci I’m not making any demands. What I’m pushing back against is the tone of this recent wave of Mastodon posts, which seem to jump very quickly to blanket prohibition as the supposed “solution” to AI.

                        Of course AI has risks and real problems. No serious person denies that. But proposing to simply ban AI is neither realistic nor particularly helpful. AI isn’t a single product or substance that can be neatly removed from society; it’s a broad set of techniques already embedded across science, medicine, infrastructure, and everyday software. Calling to ban it is like trying to stop the wind with your hands.

                        The asbestos analogy doesn’t hold either. Asbestos is intrinsically harmful: its normal use causes serious health damage. AI is not that kind of thing. Treating them as equivalent is a false analogy. AI is much closer to a tool. Like a knife, it can be used harmfully or constructively. The ethical question is not whether the tool exists, but how it is built, governed, and used.

                        You’re also presenting claims about “AI” being built on stolen material and exploited labor as if they applied to the entire field. Some of those criticisms are valid in specific cases, especially regarding certain corporate practices. But generalizing them to all AI development ignores the existence of university research, open-source projects, and systems trained on licensed or public datasets.

                        What’s striking is that your argument completely ignores the positive applications that already exist: AI assisting medical diagnosis, enabling accessibility tools for disabled users, improving translation between languages, accelerating scientific research, helping analyze complex datasets, or supporting education. You may think some of those benefits are overstated, but pretending they don’t exist at all weakens your argument rather than strengthening it.

                         If we want a serious ethical discussion, the relevant question isn’t “should we ban AI?” but which uses should be restricted or prohibited, and under what rules the rest should operate. That’s precisely the direction policymakers are taking, for example with the European Union AI Act (@EUCommission), which regulates AI according to risk levels and bans specific harmful uses rather than treating the entire technology as inherently illegitimate.

                        • abucci@buc.ci (#15)
                          @df@s.dfaria.eu
                          > What I’m pushing back against is the tone of this recent wave of Mastodon posts, which seem to jump very quickly to blanket prohibition as the supposed “solution” to AI.
                          I made no such statement, and reading "tone" on the internet has been shown over and over again to be nearly impossible. It is quite odd for you to vent your frustrations at me over a perceived pattern on Mastodon, especially on a post of mine that is not exhibiting what you're frustrated about.

                          > But proposing to simply ban AI is neither realistic nor particularly helpful.
                          Incorrect on both counts. As I suggested in my previous post, we have successfully banned products before that were found to be unacceptable. What's neither realistic nor helpful is this style of fatalism, arguing that something that's quite possible and has been done before is somehow not possible or realistic anymore.

                          > AI is much closer to a tool. Like a knife, it can be used harmfully or constructively.
                          The AI at issue, LLM and image-based generative AI, represents a political project. That political project could be ended. "It's just a tool" is the cover story used by people unwilling to acknowledge the politics of it, or those who have an interest in furthering it without copping to it. It's the same form as the "guns don't kill people, people kill people" argument we hear in the US every time someone shoots up a school, and is fallacious and ahistorical.

                          > You’re also presenting claims about “AI” being built on stolen material and exploited labor as if they applied to the entire field.
                          You did catch the fact that I have a background in AI and do not need to be told things about my own field? I qualified one portion of my post with generative AI/LLMs. If you need me to label every single claim with the precise piece of technology I'm referring to, I can do that, but it seems ludicrous to me; clearly we are not discussing expert systems or case-based reasoning. Mastodon is not aflutter with posts about banning partial-order planning. The discourse is about generative AI/LLMs, so for brevity I left off that qualifier in most places. I feel you're moving the goalposts in your attempt to argue with my non-argument.

                          > What’s striking is that your argument completely ignores
                          I am taking this to be trending into bad faith territory. I've made no argument and am not looking for one, but you seem very keen on having one anyway. I've pointed out known issues with generative AI/LLMs, made an aesthetic statement ("AI is ugly"), made a factual statement supported by evidence ("AI is destructive"), and stated I have difficulty seeing how this particular form of AI could be used ethically (a statement about my own shortcomings!). It's fine if you do not care about how I feel about generative AI, but it'd help everyone if you read the words as saying what they were written to say instead of reading something different and seemingly agenda driven into them.

                          > enabling accessibility tools for disabled users
                          It is deeply offensive to use disabled people as a pawn in an online dispute, and I will not take part in this. When I said that I thought AI is ugly, one of the many observations that led me to this conclusion is that people frequently say and do ugly things in support of it.

                          > improving translation between languages
                          Can be done without the current generation of LLMs or even AI, and better for non-English language pairs.

                          > accelerating scientific research
                          AI, and specifically LLMs, do not accelerate scientific research. This is hype. Nor is it desirable to do so, regardless of the method used. "Slow science" is something worth looking into if you haven't before.

                          > helping analyze complex datasets
                          Hallucinations harm data analysis.

                          > supporting education.
                          Available evidence strongly suggests use of digital technology in the classroom has significantly harmed education (e.g. Jared Horvath's testimony to the US Congress and his other work details these harms, which are serious and widespread). Available evidence also suggests that LLM technology in particular is having an even worse effect on education and learning.

                          I believe that you are arguing with someone else, not me, and I find the direction you're going with your own arguments to be disturbing, so I am ending this interaction here.