AI can’t even give a decent translation …

August 24, 2023 in Columnists, News by RBN Staff

 

The Shallowness of Google Translate

The program uses state-of-the-art AI techniques, but simple tests show that it’s a long way from real understanding.

By Douglas Hofstadter


 

One Sunday, at one of our weekly salsa sessions, my friend Frank brought along a Danish guest. I knew Frank spoke Danish well, because his mother was Danish, and he had lived in Denmark as a child. As for his friend, her English was fluent, as is standard for Scandinavians. However, to my surprise, during the evening’s chitchat it emerged that the two friends habitually exchanged emails using Google Translate. Frank would write a message in English, then run it through Google Translate to produce a new text in Danish; conversely, she would write a message in Danish, then let Google Translate anglicize it. How odd! Why would two intelligent people, each of whom spoke the other’s language well, do this? My own experiences with machine-translation software had always led me to be highly skeptical of it. But my skepticism was clearly not shared by these two. Indeed, many thoughtful people are quite enamored of translation programs, finding little to criticize in them. This baffles me.


As a language lover and an impassioned translator, as a cognitive scientist and a lifelong admirer of the human mind’s subtlety, I have followed the attempts to mechanize translation for decades. When I first got interested in the subject, in the mid-1970s, I ran across a letter written in 1947 by the mathematician Warren Weaver, an early machine-translation advocate, to Norbert Wiener, a key figure in cybernetics, in which Weaver made this curious claim, today quite famous:

When I look at an article in Russian, I say, “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”

Some years later he offered a different viewpoint: “No reasonable person thinks that a machine translation can ever achieve elegance and style. Pushkin need not shudder.” Whew! Having devoted one unforgettably intense year of my life to translating Alexander Pushkin’s sparkling novel in verse, Eugene Onegin, into my native tongue (that is, having radically reworked that great Russian work into an English-language novel in verse), I find this remark of Weaver’s far more congenial than his earlier remark, which reveals a strangely simplistic view of language. Nonetheless, his 1947 view of translation as decoding became a credo that has long driven the field of machine translation.


Since those days, “translation engines” have gradually improved, and recently the use of so-called deep neural nets has even suggested to some observers (see “The Great A.I. Awakening” by Gideon Lewis-Kraus in The New York Times Magazine, and “Machine Translation: Beyond Babel” by Lane Greene in The Economist) that human translators may be an endangered species. In this scenario, human translators would become, within a few years, mere quality controllers and glitch fixers rather than producers of fresh new text.

Such a development would cause a soul-shattering upheaval in my mental life. Although I fully understand the fascination of trying to get machines to translate well, I am not in the least eager to see human translators replaced by inanimate machines. Indeed, the idea frightens and revolts me. To my mind, translation is an incredibly subtle art that draws constantly on one’s many years of life experience, and on one’s creative imagination. If, some “fine” day, human translators were to become relics of the past, my respect for the human mind would be profoundly shaken, and the shock would leave me reeling with terrible confusion and immense, permanent sadness.

Each time I read an article claiming that the guild of human translators will soon be forced to bow down before the terrible, swift sword of some new technology, I feel the need to check the claims out myself, partly out of a sense of terror that this nightmare just might be around the corner, more hopefully out of a desire to reassure myself that it’s not just around the corner, and finally, out of my long-standing belief that it’s important to combat exaggerated claims about artificial intelligence. And so after reading about how the old idea of artificial neural networks, recently adopted by a branch of Google called Google Brain and now enhanced by “deep learning,” has resulted in a new kind of software that has allegedly revolutionized machine translation, I decided I had to check out the latest incarnation of Google Translate. Was it a game changer, as Deep Blue and AlphaGo were for the venerable games of chess and Go?


I learned that although the older version of Google Translate could handle a very large repertoire of languages, its new deep-learning incarnation at the time worked for just nine languages. (It has since expanded to 96.) Accordingly, I limited my explorations to English, French, German, and Chinese.

Before showing my findings, though, I should point out that an ambiguity in the adjective deep is being exploited here. When one hears that Google bought a company called DeepMind whose products have “deep neural networks” enhanced by “deep learning,” one cannot help taking the word deep to mean “profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning of deep in this context comes simply from the fact that these neural networks have more layers (12, say) than older networks, which might have only two or three. But does that sort of depth imply that whatever such a network does must be profound? Hardly. This is verbal spinmeistery.
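To make the layer-counting point concrete, here is a minimal sketch in toy Python (invented sizes and random weights, bearing no relation to Google's actual architecture), in which a "deep" network differs from an old-style one only in how many identical layers are stacked:

```python
import math
import random

# Toy illustration only: "depth" here is nothing but the number of
# stacked layers. Each layer multiplies its input by a weight matrix
# and applies a nonlinearity; a "deep" net just repeats this more times.

random.seed(0)

def make_layer(n_in, n_out):
    # a random weight matrix: n_out rows of n_in weights (invented values)
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x):
    # matrix-vector product followed by a tanh nonlinearity
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in layer]

shallow = [make_layer(4, 4) for _ in range(2)]    # an old-style net: 2 layers
deep    = [make_layer(4, 4) for _ in range(12)]   # a "deep" net: 12 layers

x = [0.5, -0.2, 0.1, 0.9]
for net in (shallow, deep):
    h = x
    for layer in net:
        h = forward(layer, h)  # the only difference is how many times this runs

print(len(shallow), len(deep))  # prints "2 12"
```

Nothing "profound" distinguishes the two nets above; the second simply applies the same mechanical step a dozen times instead of twice.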

I am very wary of Google Translate, especially given all the hype surrounding it. But despite my distaste, I recognize some astonishing facts about this bête noire of mine. It is accessible for free to anyone on Earth, and will convert text in any of roughly 100 languages into text in any of the others. That is humbling. If I am proud to call myself “pi-lingual” (meaning the sum of all my fractional languages is a bit more than 3, which is my lighthearted way of answering the question “How many languages do you speak?”), then how much prouder should Google Translate be, as it could call itself “bai-lingual” (bai being Mandarin for “100”). To a mere pi-lingual, bai-lingualism is most impressive. Moreover, if I copy and paste a page of text in Language A into Google Translate, only moments will elapse before I get back a page filled with words in Language B. And this is happening all the time on screens all over the planet, in dozens of languages.

The practical utility of Google Translate and similar technologies is undeniable, and probably a good thing overall, but there is still something deeply lacking in the approach, which is conveyed by a single word: understanding. Machine translation has never focused on understanding language. Instead, the field has always tried to “decode”—to get away with not worrying about what understanding and meaning are. Could it in fact be that understanding isn’t needed in order to translate well? Could an entity, human or machine, do high-quality translation without paying attention to what language is all about? To shed some light on this question, I turn now to the experiments I did.

I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:

In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.

The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:

Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.

The program fell into my trap, not realizing, as any human reader would, that I was describing a couple, stressing that for each item he had, she had a similar one. For example, the deep-learning engine used the word sa for both “his car” and “her car,” so you can’t tell anything about either car owner’s gender. Likewise, it used the genderless plural ses both for “his towels” and “her towels,” and in the last case of the two libraries, his and hers, it got thrown by the final s in “hers” and somehow decided that that s represented a plural (“les siennes”). Google Translate’s French sentence missed the whole point.

Next I translated the challenge phrase into French myself, in a way that did preserve the intended meaning. Here’s my French version:

Chez eux, ils ont tout en double. Il y a sa voiture à elle et sa voiture à lui, ses serviettes à elle et ses serviettes à lui, sa bibliothèque à elle et sa bibliothèque à lui.

The phrase “sa voiture à elle” spells out the idea of “her car,” and similarly, “sa voiture à lui” can only be heard as meaning “his car.” At this point, I figured it would be trivial for Google Translate to carry my French translation back into English and get the English right on the money, but I was dead wrong. Here’s what it gave me:

At home, they have everything in double. There is his own car and his own car, his own towels and his own towels, his own library and his own library.

What?! Even with the input sentence screaming out the owners’ genders as loudly as possible, the translating machine ignored the screams and made everything masculine. Why did it throw the sentence’s most crucial information away?

We humans know all sorts of things about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other intangibles that lead to such quirks as a married couple having towels embroidered his and hers. Google Translate isn’t familiar with such situations. Google Translate isn’t familiar with situations, period. It’s familiar solely with strings composed of words composed of letters. It’s all about ultra-rapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things. Let me hasten to say that a computer program certainly could, in principle, know what language is for, and could have ideas and memories and experiences, and could put them to use, but that’s not what Google Translate was designed to do. Such an ambition wasn’t even on its designers’ radar screens.

Well, I chuckled at these poor shows, relieved to see that we aren’t, after all, so close to replacing human translators by automata. But I still felt I should check the engine out more closely. After all, one swallow does not thirst quench.

Indeed, what about this freshly coined phrase, “One swallow does not thirst quench” (alluding, of course, to “One swallow does not a summer make”)? I couldn’t resist trying it out; here’s what Google Translate flipped back at me: “Une hirondelle n’aspire pas la soif.” This is a grammatical French sentence, but it’s pretty hard to fathom. First it names a certain bird (une hirondelle—“a swallow”), then it says this bird is “not inhaling” or “not sucking” (“n’aspire pas”), and finally it reveals that the neither-inhaled-nor-sucked item is thirst (“la soif”). Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. “Il sortait simplement avec un tas de taureau.” “He just went out with a pile of bulls.” “Il vient de sortir avec un tas de taureaux.” Please pardon my French—or rather, Google Translate’s pseudo-French.

From the frying pan of French, let’s jump into the fire of German. Of late I’ve been engrossed in the book Sie nannten sich der Wiener Kreis (“They Called Themselves the Vienna Circle”), by the Austrian mathematician Karl Sigmund. It describes a group of idealistic Viennese intellectuals in the 1920s and ’30s who had a major impact on philosophy and science during the rest of the century. I chose a short passage from Sigmund’s book and gave it to Google Translate. Here it is, first in German, followed by my own translation, and then Google Translate’s version. (By the way, I checked my translation with two native speakers of German, including Karl Sigmund, so I think you can assume it is accurate.)

Sigmund:

Nach dem verlorenen Krieg sahen es viele deutschnationale Professoren, inzwischen die Mehrheit in der Fakultät, gewissermaßen als ihre Pflicht an, die Hochschulen vor den “Ungeraden” zu bewahren; am schutzlosesten waren junge Wissenschaftler vor ihrer Habilitation. Und Wissenschaftlerinnen kamen sowieso nicht in frage; über wenig war man sich einiger.

Hofstadter:

After the defeat, many professors with Pan-Germanistic leanings, who by that time constituted the majority of the faculty, considered it pretty much their duty to protect the institutions of higher learning from “undesirables.” The most likely to be dismissed were young scholars who had not yet earned the right to teach university classes. As for female scholars, well, they had no place in the system at all; nothing was clearer than that.

Google Translate:

After the lost war, many German-National professors, meanwhile the majority in the faculty, saw themselves as their duty to keep the universities from the “odd”; Young scientists were most vulnerable before their habilitation. And scientists did not question anyway; There were few of them.

The words in Google Translate’s output are all English words (even if, for unclear reasons, a couple are inappropriately capitalized). So far, so good! But soon it grows wobbly, and the further down you go, the wobblier it gets.

I’ll focus first on “the ‘odd.’” This corresponds to the German “die Ungeraden,” which here means “politically undesirable people.” Google Translate, however, had a reason—a very simple statistical reason—for choosing the word odd. Namely, in its huge bilingual database, the word ungerade was almost always translated as “odd.” Although the engine didn’t realize why this was the case, I can tell you why. It’s because ungerade—which literally means “un-straight” or “uneven”—is nearly always defined as “not divisible by two.” By contrast, my choice of “undesirables” to render Ungeraden had nothing to do with the statistics of words, but came from my understanding of the situation—from my zeroing in on a notion not explicitly mentioned in the text and certainly not listed as a translation of ungerade in any of my German dictionaries.
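A toy sketch of that purely statistical choice might look like the following. The words and counts below are entirely hypothetical, invented for illustration; the point is only that a frequency-based lookup picks the most common rendering no matter what the surrounding situation is:

```python
from collections import Counter

# Hypothetical corpus counts (invented numbers, not Google's real data):
# in a bilingual corpus, "ungerade" is overwhelmingly paired with "odd".
corpus_counts = {
    "ungerade": Counter({"odd": 9714, "uneven": 312, "not divisible by two": 88}),
}

def translate_word(word):
    # pick whichever English rendering co-occurred most often --
    # there is no notion of the situation the word is being used in
    return corpus_counts[word].most_common(1)[0][0]

print(translate_word("ungerade"))  # prints "odd"
```

Such a lookup can never arrive at “undesirables,” because that rendering exists only in an understanding of the historical situation, not in any table of word pairings.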

Let’s move on to the German Habilitation, denoting a university status resembling tenure. The English cognate word habilitation exists but it is super rare, and certainly doesn’t bring to mind tenure or anything like it. That’s why I briefly explained the idea rather than just quoting the obscure word, because that mechanical gesture would not get anything across to anglophonic readers. Of course Google Translate would never do anything like this, because it has no model of its readers’ knowledge.

The last two sentences really bring out how crucial understanding is for translation. The 15-letter German noun Wissenschaftler means either “scientist” or “scholar.” (I opted for the latter, because in this context it was referring to intellectuals in general. Google Translate didn’t get that subtlety.) The related 17-letter noun Wissenschaftlerin, found in the closing sentence in its plural form Wissenschaftlerinnen, is a consequence of the gendered-ness of German nouns. Whereas the “short” noun is grammatically masculine and thus suggests a male scholar, the longer noun is feminine and applies to females only. I wrote “female scholar” to get the idea across. Google Translate, however, did not understand that the feminizing suffix “-in” was the central focus of attention in the final sentence. Because it didn’t realize that females were being singled out, the engine merely reused the word scientist, thus missing the sentence’s entire point. As in the earlier French case, Google Translate didn’t have the foggiest idea that the sole purpose of the German sentence was to shine a spotlight on a contrast between males and females.

Aside from that blunder, the rest of the final sentence is a disaster. Take its first half. Is “scientists did not question anyway” really a translation of “Wissenschaftlerinnen kamen sowieso nicht in frage”? It doesn’t mean what the original means—it’s not even in the same ballpark. It just consists of English words haphazardly triggered by the German words. Is that all it takes for a piece of output to deserve the label translation?

The sentence’s second half is equally erroneous. The last six German words mean, literally, “over little was one more united,” or, more flowingly, “There was little about which people were more in agreement,” yet Google Translate managed to turn that perfectly clear idea into “There were few of them.” We baffled humans might ask “Few of what?” but to the mechanical listener, such a question would be meaningless. Google Translate doesn’t have ideas behind the scenes, so it couldn’t even begin to answer the simple-seeming query. The translation engine was not imagining large or small amounts or numbers of things. It was just throwing symbols around, without any notion that they might symbolize something.

It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are. It’s almost irresistible for people to presume that a piece of software that deals so fluently with words must surely know what they mean. This classic illusion associated with artificial-intelligence programs is called the ELIZA effect, because one of the first programs to pull the wool over people’s eyes with its seeming understanding of English, back in the ’60s, was a vacuous phrase manipulator called ELIZA, which pretended to be a psychotherapist, and as such, gave many people who interacted with it the eerie sensation that it deeply understood their innermost feelings.

For decades, sophisticated people—even some artificial-intelligence researchers—have fallen for the ELIZA effect. To make sure that my readers steer clear of this trap, let me quote some phrases from a few paragraphs up—namely, “Google Translate did not understand,” “it did not realize,” and “Google Translate didn’t have the foggiest idea.” Paradoxically, these phrases, despite harping on the lack of understanding, almost suggest that Google Translate might at least sometimes be capable of understanding what a word or a phrase or a sentence means, or is about. But that isn’t the case. Google Translate is all about bypassing or circumventing the act of understanding language.

To me, the word translation exudes a mysterious and evocative aura. It denotes a profoundly human art form that graciously carries clear ideas in Language A into clear ideas in Language B, and the bridging act should not only maintain clarity but also give a sense for the flavor, quirks, and idiosyncrasies of the writing style of the original author. Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it—to “press it out”—in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

I am not, in short, moving straight from words and phrases in Language A to words and phrases in Language B. Instead, I am unconsciously conjuring up images, scenes, and ideas, dredging up experiences I myself have had (or have read about, or seen in movies, or heard from friends), and only when this nonverbal, imagistic, experiential, mental “halo” has been realized—only when the elusive bubble of meaning is floating in my brain—do I start the process of formulating words and phrases in the target language, and then revising, revising, and revising. This process, mediated via meaning, may sound sluggish, and indeed, in comparison with Google Translate’s two or three seconds a page, it certainly is—but it is what any serious human translator does. This is the kind of thing I imagine when I hear an evocative phrase like deep mind.

That said, I turn now to Chinese, a language that gave the deep-learning software a far rougher ride than the two European languages did. For my test material, I drew from the touching memoir Women Sa (“We Three”), written by the Chinese playwright and translator Yang Jiang, who recently died at 104. Her book recounts the intertwined lives of herself; her husband, Qian Zhongshu (also a novelist and translator); and their daughter. It is not written in an especially arcane manner, but it uses an educated, lively Chinese. I chose a short passage and let Google Translate loose on it. Here are the results, along with my own translation (again vetted by native speakers of Chinese):

Yang:

锺书到清华工作一年后,调任毛选翻译委员会的工作,住在城里,周末回校。 他仍兼管研究生。

毛选翻译委员会的领导是徐永煐同志。介绍锺书做这份工作的是清华同学乔冠华同志。

事定之日,晚饭后,有一位旧友特雇黄包车从城里赶来祝贺。客去后,锺书惶恐地对我说:

他以为我要做“南书房行走”了。这件事不是好做的,不求有功,但求无过。

Hofstadter:

After Zhongshu had worked at Tsinghua University for a year, he was transferred to the committee that was translating selected works of Chairman Mao. He lived in the city, but each weekend he would return to school. He also was still supervising his graduate students.

The leader of the translation committee of Mao’s works was Comrade Xu Yongying, and the person who had arranged for Zhongshu to do this work was his old Tsinghua schoolmate, Comrade Qiao Guanhua.

On the day this appointment was decided, after dinner, an old friend specially hired a rickshaw and came all the way from the city just to congratulate Zhongshu. After our guest had left, Zhongshu turned to me uneasily and said:

“He thought I was going to become a ‘South Study special aide.’ This kind of work is not easy. You can’t hope for glory; all you can hope for is to do it without errors.”

Google Translate:

After a year of work at Tsinghua, he was transferred to the Mao Translating Committee to live in the city and back to school on weekends. He is still a graduate student.

The leadership of the Mao Tse Translation Committee is Comrade Xu Yongjian. Introduction to the book to do this work is Tsinghua students Qiao Guanhua comrades.

On the day of the event, after dinner, an old friend hired a rickshaw from the city to congratulate. Guest to go, the book of fear in the book said to me:

He thought I had to do “South study walking.” This is not a good thing to do, not for meritorious service, but for nothing.

I’ll briefly point out a few oddities. First of all, Google Translate never refers to Zhongshu by name, although his name (“锺书”) occurs three times in the original. The first time, the engine uses the pronoun he; the second time around, it says “the book”; the third time, it says “the book of fear in the book.” Go figure!

A second oddity is that the first paragraph clearly says that Zhongshu is supervising graduate students, whereas Google Translate turns him into a graduate student.

A third oddity is that in the phrase Mao Tse Translation Committee, one-third of Chairman Mao Tse Tung’s name fell off the train.

A fourth oddity is that the name “Yongying” was replaced by “Yongjian.”

A fifth oddity is that “after our guest had left” was reduced to “guest to go.”

A sixth oddity is that the last sentence makes no sense at all.

Well, these six oddities are already quite a bit of humble pie for Google Translate to swallow, but let’s forgive and forget. Instead, I’ll focus on just one confusing phrase I ran into—a five-character phrase in quotation marks in the last paragraph (“南书房行走”). Character for character, it might be rendered as “south book room go walk,” but that jumble is clearly unacceptable, especially because the context requires it to be a noun. Google Translate invented “South study walking,” which is not helpful.

Now, I admit that the Chinese phrase was utterly opaque to me. Although literally it looked like it meant something about moving about on foot in a study on the south side of some building, I knew that couldn’t be right; it made no sense in the context. To translate it, I had to find out about something in Chinese culture that I was ignorant of. So where did I turn for help? To Google! (But not to Google Translate.) I typed in the Chinese characters, surrounded them with quote marks, then did a Google search for that exact literal string. Lickety-split, up came a bunch of webpages in Chinese, and then I painfully slogged my way through the opening paragraphs of the first couple of websites, trying to figure out what the phrase was all about.

I discovered the term dates back to the Qing dynasty (1644–1911) and refers to an intellectual assistant to the emperor, whose duty was to help the emperor (in the imperial palace’s south study) stylishly craft official statements. The two characters that seem to mean “go walk” actually form a chunk denoting an aide. And so, given that information supplied by Google Search, I came up with my phrase “South Study special aide.”

It’s too bad Google Translate couldn’t avail itself of the services of Google Search as I did, isn’t it? But then again, Google Translate can’t understand webpages, although it can translate them in the twinkling of an eye. Or can it? Below I exhibit the astounding piece of output text that Google Translate super swiftly spattered across my screen after being fed the opening of the website that I got my info from:

“South study walking” is not an official position, before the Qing era this is just a “messenger,” generally by the then imperial intellectuals Hanlin to serve as. South study in the Hanlin officials in the “select chencai only goods and excellent” into the value, called “South study walking.” Because of the close to the emperor, the emperor’s decision to have a certain influence. Yongzheng later set up “military aircraft,” the Minister of the military machine, full-time, although the study is still Hanlin into the value, but has no participation in government affairs. Scholars in the Qing Dynasty into the value of the South study proud. Many scholars and scholars in the early Qing Dynasty into the south through the study.

Is this actually in English? Of course we all agree that it’s made of English words (for the most part, anyway), but does that imply that it’s a passage in English? To my mind, because the above paragraph contains no meaning, it’s not in English; it’s just a jumble made of English ingredients—a random-word salad, an incoherent hodgepodge.

In case you’re curious, here’s my version of the same passage (it took me hours):

The nan-shufang-xingzou (“South Study special aide”) was not an official position, but in the early Qing dynasty it was a special role generally filled by whoever was the emperor’s current intellectual academician. The group of academicians who worked in the imperial palace’s south study would choose, among themselves, someone of great talent and good character to serve as ghostwriter for the emperor, and always to be at the emperor’s beck and call; that is why this role was called “South Study special aide.” The South Study aide, being so close to the emperor, was clearly in a position to influence the latter’s policy decisions. However, after Emperor Yongzheng established an official military ministry with a minister and various lower positions, the South Study aide, despite still being in the service of the emperor, no longer played a major role in governmental decision making. Nonetheless, Qing dynasty scholars were eager for the glory of working in the emperor’s south study, and during the early part of that dynasty, quite a few famous scholars served the emperor as South Study special aides.

Some readers may suspect that I, in order to bash Google Translate, cherry-picked passages on which it stumbled terribly, and that it actually does far better on the large majority of passages. Though that sounds plausible, it’s not the case. Nearly every paragraph I selected from books I’m currently reading gave rise to translation blunders of all shapes and sizes, including senseless and incomprehensible phrases, as above.

Of course I grant that Google Translate sometimes comes up with a series of output sentences that sound fine (although they may be misleading or utterly wrong). A whole paragraph or two may come out superbly, giving the illusion that Google Translate knows what it is doing, understands what it is “reading.” In such cases, Google Translate seems truly impressive—almost human! Praise is certainly due to its creators and their collective hard work. But at the same time, don’t forget what Google Translate did with these two Chinese passages, and with the earlier French and German passages. To understand such failures, one has to keep the ELIZA effect in mind. The bai-lingual engine isn’t reading anything—not in the normal human sense of the verb “to read.” It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.

A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more “big data” won’t bring you any closer to understanding, because understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today. So I would venture that bigger databases—even much bigger ones—won’t turn the trick.

Another natural question is whether Google Translate’s use of neural networks—a gesture toward imitating brains—is bringing us closer to genuine understanding of language by machines. This sounds plausible at first, but there’s still no attempt being made to go beyond the surface level of words and phrases. All sorts of statistical facts about the huge databases are embodied in the neural nets, but these statistics merely relate words to other words, not to ideas. There’s no attempt to create internal structures that could be thought of as ideas, images, memories, or experiences. Such mental etherealities are still far too elusive to deal with computationally, and so, as a substitute, fast and sophisticated statistical word-clustering algorithms are used. But the results of such techniques are no match for actually having ideas involved as one reads, understands, creates, modifies, and judges a piece of writing.
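What “statistics that merely relate words to other words” look like can be sketched in miniature. The three-sentence corpus below is invented, and real systems are vastly larger and more sophisticated, but the structural point carries over: each word is represented only by counts of its neighboring words, and similarity between such count vectors never involves anything that stands for an idea:

```python
import math

# Invented mini-corpus for illustration (nothing like real training data).
sentences = [
    "his car and her car",
    "her towels and his towels",
    "his library and her library",
]

words = sorted({w for s in sentences for w in s.split()})

def cooccurrence_vector(target):
    # count how often each vocabulary word appears adjacent to `target`
    counts = dict.fromkeys(words, 0)
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == target:
                for j in (i - 1, i + 1):
                    if 0 <= j < len(toks):
                        counts[toks[j]] += 1
    return [counts[w] for w in words]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "his" and "her" occur in nearly identical word contexts, so their
# vectors come out almost parallel (a value close to 1) -- the statistics
# treat them as near-interchangeable, with nothing anywhere standing for
# the gender distinction that the his-car/her-car example hinged on.
print(cosine(cooccurrence_vector("his"), cooccurrence_vector("her")))
```

In such a representation, the very distinction my first French example turned on is invisible by construction.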

Despite my negativism, Google Translate offers a service many people value highly: It effects quick-and-dirty conversions of meaningful passages written in Language A into not necessarily meaningful strings of words in Language B. As long as the text in Language B is somewhat comprehensible, many people feel perfectly satisfied with the end product. If they can “get the basic idea” of a passage in a language they don’t know, they’re happy. This isn’t what I personally think the word translation means, but to some people it’s a great service, and to them it qualifies as translation. Well, I can see what they want, and I understand that they’re happy. Lucky them!

I’ve recently seen bar graphs made by technophiles that claim to represent the “quality” of translations done by humans and by computers, and these graphs depict the latest translation engines as being within striking distance of human-level translation. To me, however, such quantification of the unquantifiable reeks of pseudoscience, or, if you prefer, of nerds trying to mathematize things whose intangible, subtle, artistic nature eludes them. To my mind, Google Translate’s output today ranges all the way from excellent to grotesque, but I can’t quantify my feelings about it. Think of my first example involving “his” and “her” items. The idealess program got nearly all the words right, but despite that slight success, it totally missed the point. How, in such a case, should one “quantify” the quality of the job? The use of scientific-looking bar graphs to represent translation quality is simply an abuse of the external trappings of science.

Let me return to that sad image of human translators, soon outdone and outmoded, gradually turning into nothing but quality controllers and text tweakers. That’s a recipe for mediocrity at best. A serious artist doesn’t start with a kitschy piece of error-ridden bilgewater and then patch it up here and there to produce a work of high art. That’s not the nature of art. And translation is an art.

In my writings throughout the years, I’ve always maintained that the human brain is a machine—a very complicated kind of machine—and I’ve vigorously opposed those who say that machines are intrinsically incapable of dealing with meaning. There is even a school of philosophers who claim computers could never “have semantics” because they’re made of “the wrong stuff” (silicon). To me, that’s facile nonsense. I won’t touch that debate here, but I wouldn’t want to leave readers with the impression that I believe intelligence and understanding to be forever inaccessible to computers. If in this essay I seem to come across as sounding that way, it’s because the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.

From my point of view, there is no fundamental reason that machines could not, in principle, someday think; be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful, and, as a corollary, able to translate admirably between languages. There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that’s not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind’s profundity fervently hopes.

When, one day, a translation engine writes an artistic novel in verse in English, using precise rhyming iambic tetrameter rich in wit, pathos, and sonic verve, then I’ll know it’s time for me to tip my hat and bow out.