Yes, I hate this buzzword as much as you do, at least as it's used in the present political climate. But it did capture your attention, and, like it or not, there actually is some meaning associated with the concept of "fake news" in a more traditional sense.
I believe we're dealing with several "fake news" items when it comes to translation, especially translation technology. I would like to talk about two of these items. The first is something I've discussed before at length, though my explanation must have been less than effective since it still dominates the thinking of many. The second item is something we might all be guilty of in some way.
Misconception #1: Working with Machine Translation Is the Same as Post-Editing
The first conceptual misunderstanding is that working with machine translation (MT) is essentially the same as post-editing. Most of us translators know this is not true, but not because we were told so or taught that way. It's because we know that MT really is only one of many resources (alongside translation memories, termbases, corpora, dictionaries, and other online and offline resources) that can be used in the translation process. We also know that most translation environment tools allow us to dynamically use (or not use) the content that comes from MT engines. Our proven experience stands in sharp contrast to the idea that post-editing (i.e., the correction of raw MT content) is the only way to use that technology.
Of course, we could say, "well, let others believe what they want to believe and let me do what I know is best for my business," but I think there's a problem with that kind of thinking. I've noticed how very difficult it is to talk about MT with anyone outside those who have some practical experience with it. That includes MT researchers and developers and, maybe more importantly, clients of ours who (are trying to) use MT. Typically, these individuals share the assumption that MT can be used by the translator only in a reactive way: the translator reacting to suggestions coming from the MT engine (i.e., post-editing). If that's the assumption, then the projects offered to translators will be structured so only that kind of work with MT is possible, and the research and development into working with MT will look only into that avenue.
And this is not because of evil intent. Wordsmiths like us understand the power of words and language. If I have a concept in mind (such as how to work with MT), and the only language I have to apply to it is that of post-editing, it's just very, very hard to change that. This is why we have to be patient, insistent, and strong in our communication that while there is this one way of working with MT output (in some cases, productively), in more cases than not there are other and better ways to work with that technology. Only then will we be sent a different kind of project and the research will look more deeply into other kinds of approaches.
Misconception #2: AI Emulates Functions of the Human Brain
This brings us to another topic, one where we ourselves might be helping to communicate something erroneous with unfortunate consequences. I'm talking about artificial intelligence (AI). There has been a lot of writing in this column and elsewhere about AI and its effects on the world of translation, not only via neural MT but, as we discussed a few months ago, via a whole host of other technologies that have an impact on translation and translation management processes.
Clearly, we need to talk about and understand AI. Not like an AI researcher or developer would, but so we can have a healthy estimation of how much it supports our work now and in the future. But we've been led astray on a path littered with our own words and our own imagination. Terms like "neural MT," "artificial intelligence," and "deep learning" all seem to suggest that these are processes that emulate functions of the human brain. And this is exactly what pop culture and news outlets also want us to believe.
The fact? It isn't true. How do I know? Because we don't understand our brains. We don't know how memories are stored. We don't know why some parts of the brain are responsible for some functions but can also be completely reconfigured. We don't even know whether brain activity is actually a matter of computation or a completely different kind of process. We don't know what causes moods, creativity, intelligence, wit, and emotions. And we certainly don't know what "mind" and "consciousness" are. We do know some impressive numbers (100 billion neurons, 100 trillion synapses, etc.), and lots of people are working very hard and making good progress on understanding more and more about the human (or really any) brain. But we're still very far from having a good grasp on this most elusive of realms.
So, is there no artificial intelligence? Well, yes, there is, but it's just that it doesn't work like the human brain. In fact, the term "artificial intelligence" is incomplete. We should always refer to its full and technically correct moniker, which is "narrow AI." (That already sounds a lot better, doesn't it?)
Narrow AI is the ability of a machine to noncreatively process large amounts of data and make predictions exclusively on the basis of that data. That's what we have today, and computers are incredibly good at it. Much better than we are.
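To make that definition concrete, here is a deliberately tiny sketch in Python: a bigram "model" that predicts the next word purely by counting what it has seen before. Everything in it is illustrative (the function names and the toy corpus are my own invention, and no real MT or AI system is remotely this simple), but the core idea is the same: prediction derived exclusively from data, with no understanding involved.

```python
from collections import Counter, defaultdict

# A toy illustration of "narrow AI": the program learns nothing about
# meaning. It only counts which word follows which in its training
# data and predicts the most frequent continuation.

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    follower_counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1
    return follower_counts

def predict_next(model, word):
    """Return the most frequent follower seen in the data, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None  # the model cannot go beyond its data
    return followers.most_common(1)[0][0]

model = train_bigrams(
    "the cat sat on the mat and the cat slept on the sofa"
)
print(predict_next(model, "the"))  # "cat" follows "the" most often
print(predict_next(model, "dog"))  # None: never seen, no prediction
```

Scaled up by many orders of magnitude and with far more sophisticated statistics, this data-in, prediction-out pattern is what today's systems do so well; what it is not is a model of how a human brain works.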
General AI (also referred to as "Artificial General Intelligence," or AGI), on the other hand, may never actually be achieved. We don't even know whether AGI will be built on the basis of narrow AI's current technology. If we ever reach true AGI, machines will be able to reason, use strategy, make judgments, learn, communicate in natural language, and integrate all of this toward common goals. (And, yes, also likely do a good job with translation and pretty much everything else.)
A few weeks ago I did a presentation for a class taught by a super-smart developer who also works for a large technology company. I explained the differences between narrow AI and AGI, emphasizing as I did here that we don't understand how our brain works and that it isn't a model for our current state of AI. At the end of my talk a number of questions were raised, to which my developer acquaintance responded by explaining that our current form of AI is modeled on the human brain. This was exactly the opposite of what I had just said, though I think he didn't realize it. If we've been taught a certain concept over and over and over again, it's not a matter of hearing the opposite once and being able to replace it easily. It takes a lot of patience and time.
Keep Working to Change Perceptions
Let's teach ourselves and others that today's artificial intelligence doesn't emulate the human brain (and it's entirely possible that it will never be able to do so). Let's keep on repeating to the rest of the world that there are many ways to use MT, sometimes better than those that are assumed by default. We might just be able to turn that "fake news" into real and helpful news.
Further Reading
Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2018).
Reese, Byron. The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity (Atria Books, 2018).
Jost Zetzsche is chair of ATA's Translation and Interpreting Resources Committee. He is the author of Translation Matters, a collection of 81 essays about translators and translation technology. Contact: jzetzsche@internationalwriters.com.
This column has two goals: to inform the community about technological advances and encourage the use and appreciation of technology among translation professionals.