Machine Translation Vs. Human Translation: How far we’ve come!

In high school I took a Spanish class that required me to read a news article once a week and write an intelligent, two-paragraph response, all neatly organized in a notebook to be turned in at the end of each quarter for a major grade, equivalent to a test.

And while it was a noble, well-intentioned attempt at getting us high school kiddies some regular Spanish writing practice, I admit that even I, despite being a paragon of excellence back in high school (or so they thought), could not resist the urge to rely on my friend Mr. Google Translate for help with some of the writing process.

In other words: in went the English (avoiding any tricky idioms), out came the Spanish. I tweaked a few obvious errors here and there, but really the hardest part of the writing process was simply copying it all down by hand into the notebook. There, I’ve said it! I’ve confessed my heinous crimes! I was nothing but a fraud, a cheat, a third-rate Spanish swindler if you will!

But when it came time for the quarterly grading, I proudly handed over my notebook, chock-full of machine-translated Spanish. Surely I would pay the price when it was returned, right? My teacher would clearly see how unnatural, how obviously machine-translated my words were, and would mark me down accordingly. Right?

Wrong. I got my notebook back the next week. Grade: A.

Now, you might take this example and say my Spanish teacher was simply inept. That she didn’t read the entries very carefully, valuing quantity of words over any sort of accuracy, and perhaps that’s true. Perhaps had she been a native Spanish speaker (she wasn’t) she would have more easily noticed sentences that were grammatically correct but used unnatural language. But perhaps she simply saw the machine translation and actually thought, “wow, this may have a few errors but it’s still very good” (for high school, at least). Has machine translation really come that far?

* * *

I stumbled across a paper today titled Can Computer Translation Replace Human Translation? by Karen Schairer, PhD, assistant professor at Northern Arizona University. The paper looks at Spanish translation—I unfortunately couldn’t find anything equivalent that used Japanese, so this will have to do.

In terms of internet years, the paper is ancient—it was published in 1996—but I was intrigued by the question posed in its title and decided to take a quick gander at its results and conclusion.

In the paper you’ve got three pieces of English-to-Spanish translation software, all with pretty laughable 90s-sounding names like Spanish Amigo, Spanish Assistant, and Spanish Scholar. They also come with equally laughable marketing copy:

Inability to read and write a foreign language doesn’t mean you can’t get a piece of the international trade bonanza. Thanks to our automated language translation programs, you can do business in English, French, Spanish, German, and Italian—without having to learn the language.

If I didn’t know better, I’d swear I was reading satire. They actually used the word bonanza.

And in the paper, the translation output of these three programs is tested against a human translator’s English-to-Spanish translation. So for each English sentence tested, you get four Spanish sentences in total: three machine-translated ones and one human-translated one. Each of the four is then rated for translation accuracy on a 5-point scale by native Spanish speakers (some with Spanish as their dominant language, some not).

So how’d the 1996 software do?

The first example for which they give translations is the English sentence, “During the past year, do you believe the level of crime in your neighborhood has increased, decreased, or remained about the same?”

Human Translation: “Durante el año pasado, ¿cree Ud. que el nivel o índice de criminalidad en su vecindad ha aumentado, ha disminuido, o ha quedado al mismo nivel?” (accuracy score: 5/5)

Spanish Amigo: “En el año pasado lo hace toca ese crimen en su barrio ha aumentado, menguante, o quedó acerca del mismo cuando estaba antes de.” (accuracy score: 1.9/5)

Dr. Schairer goes on in the paper to laugh at how bad Spanish Amigo’s translation turned out, and to note that it wasn’t even worth printing the other programs’ translations, as that would be a complete waste of everyone’s time (I’m paraphrasing here). Even if you don’t know Spanish, the accuracy score will tell you the relative shittiness of the Spanish translation.

Let’s look at another example:

English: “All your answers will be confidential”

Human Translation: “Todas sus respuestas serán confidenciales.” (accuracy score: 5/5)

Spanish Assistant: “Todo (lo) que (usted) contesta será confidencial.” (accuracy score: 3.3/5)

Spanish Amigo: “Todo (lo que) usted contesta serán confidencial.” (accuracy score: 2.1/5)

Spanish Scholar: “Todo (lo que) tú contestas serás confidencial.” (accuracy score: 1.8/5)

Again, Dr. Schairer lays a proper smackdown on the poor 90s translation software: “[The English sentence] was so short and uncomplicated, determining which changes to make in post-editing took as long as simply translating the entire sentence directly from English to Spanish.”

Dayum, that is some serious machine-translation insult going on right there. I can tell she’s not a big fan of machine translation.

The author summarizes her findings and then concludes:

All sentences translated by the three computer programs required post-editing. Not one of the 69 translations received above 3.9 for accuracy. The human translator estimated that writing original translations from the English was as fast or faster than post-editing in all but six cases. In many cases, the translator had to refer to the English to determine what the Spanish should say. […]

Current technology as represented by [these programs] cannot yet replace qualified human translators. The challenge presented by the seemingly unpredictable nature of human languages is still best overcome by human beings.

And there you have it. Or at least, there you had it, 16 years ago. Because as soon as I finished the paper I knew I had to retest those sentences. The author used some crappy BS Spanish translation software back in 1996. What would happen if she were to repeat the study using sleek and refined Google Translate? Who would be laughing at crappy language software from the 90s then?
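
(For the record, I just pasted the sentences into the Google Translate web page. But if you wanted to script the whole retest, a minimal sketch along the lines below would do it. It assumes the third-party googletrans Python package, an unofficial wrapper around the same web service, and only lists the two example sentences rather than the paper’s full set of 69.)

    # A sketch of re-running the paper's English test sentences through
    # Google Translate. Assumes the unofficial third-party "googletrans"
    # package (pip install googletrans), which wraps the Google Translate
    # web service -- not something I actually ran for this post.
    from googletrans import Translator

    SENTENCES = [
        "During the past year, do you believe the level of crime in your "
        "neighborhood has increased, decreased, or remained about the same?",
        "All your answers will be confidential.",
        # ...the paper's remaining test sentences would go here
    ]

    translator = Translator()
    for english in SENTENCES:
        # translate() returns an object whose .text holds the Spanish output
        spanish = translator.translate(english, src="en", dest="es").text
        print("EN:", english)
        print("ES:", spanish)
        print()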

Let’s look at the first sentence again, its human translation, and then Google’s whack at it. Unfortunately, I am not qualified to hand out accuracy scores for Google’s version.

English: “During the past year, do you believe the level of crime in your neighborhood has increased, decreased, or remained about the same?”

Human Translation: “Durante el año pasado, ¿cree Ud. que el nivel o índice de criminalidad en su vecindad ha aumentado, ha disminuido, o ha quedado al mismo nivel?”

Google Translation: “Durante el año pasado, ¿cree que el nivel de delincuencia en su barrio ha aumentado, disminuido o permanecido igual?”

Not too shabby! Sure, it’s not a perfect match compared to the human translation, but compared to the unintelligible rubbish Spanish Amigo managed to shit out a decade and a half ago, it’s damn close. The structure of the sentence basically mirrors the human version, with only a few words perhaps not as natural as they could have been.

Let’s move on to the “short and uncomplicated” sentence:

English: “All your answers will be confidential”

Human Translation: “Todas sus respuestas serán confidenciales.”

Google Translation: “Todas sus respuestas serán confidenciales.” (accuracy score: 5/5, bitches)

Oh snap son, would you look at that. I may not be qualified to gauge the accuracy of Spanish sentences, but I’m pretty sure that’s the same damn thing. Sorry, professional translators: simply being bilingual will no longer fetch you any cushy translation jobs.

Now of course, there are two things to remember here. First, these are pretty cut-and-dried sentences that don’t require much interpretation, with pretty much no slang or idiomatic expressions present at all. Second, Spanish and English are relatively close languages, and I’m sure Google’s database has a nice big fat amount of parallel text to work from between those two languages, compared to, say, Thai.

I just thought it was interesting to see how far machine translation has come, since it’s still painted as pretty useless for most things other than just barely gisting a meaning out of some text. And perhaps we should give more credit to my poor Spanish teacher, because when Google Translate starts spitting out accurate translations that mirror what a real translator would write, determining what was human-written and what wasn’t gets harder and harder.

Will machine translation replace humans? Not in this lifetime. But after 16 years, it sure did get a lot better.


One response to “Machine Translation Vs. Human Translation: How far we’ve come!”

  1. I can’t see machines taking over the jobs of human translators in the near future, as they have done with so many other professions (remember telephone operators?).
    These machine translators are OK when all you need is a quick understanding of some rather simple text, but if you are running a business, or otherwise depend on the accuracy of a translation, using professional translation services is the only way to go.
