Our life is frittered away by detail… simplify, simplify
What if the unstoppable regression of our culture, the increasing dumbing down of humanity, were due to a deliberate plot to fritter away, among other parts of our shared heritage, our linguistic wealth? We cannot ignore the fact that, as has been happening in other fields since the start of economic globalization, a few computer engineering giants stand to benefit hugely if human communication is simplified.
Greater linguistic impoverishment would make it much easier to successfully implement systems that can machine-translate most messages. Simplifying linguistic structure and content, both quantitatively and qualitatively, would yield a dual benefit: (a) the sequences to be translated would become increasingly simple, repetitive, controllable and easily translatable; (b) the decoders of those translated sequences, that is, the human readers or listeners of those messages, would end up lowering their expectations of linguistic quality to a minimum, becoming completely flexible about their thresholds of grammaticality and naturalness, much as is already happening, for example, with the use of English as a lingua franca. Creative intelligence and thought would no longer stand in the way of computerisation, because they would barely intervene in these routine linguistic acts. As a result, translation would no longer be a complex cognitive act and could be handed over to artificial intelligence.
This dystopian hypothesis is much more disturbing than the actual growth of machine translation systems, as it takes things much further and paints a picture of cultural desertification. Maybe we are taking things too far, but there are a number of clues pointing to that eventual outcome, as we shall now see.
The cognitive process of human translation
Human translation, lately also known as “traditional translation” (they’ll be calling it “artisan” any day now), is, unlike modern machine translation, as heterogeneous as writing styles themselves. The translation of a 20th-century philosophical text and that of a 21st-century smartphone manual are poles apart. The complexity of the cognitive process of translation is directly proportional to the complexity of the text being translated.
When deciding how to approach a text, the human translator has to cognitively process it on at least the following levels:
- semantic
- syntactic
- pragmatic and discursive
- phonic
- stylistic
The more elaborate the original text is, the more demanding the translator’s work will be. Although all of these levels should be analysed when translating any kind of text, the semantic and terminological elements sometimes take precedence to the detriment of the more formal factors (syntactic, pragmatic, phonic and stylistic), either because the original neglects them or because the end receiver of the text regards them as superfluous.
Very often, for example, content published on the internet adheres to rules of simplistic clarity, with a complete disdain for anything to do with euphony, lexical density, discursive cohesion, syntactic richness, etc. Some manuals advise writing for the internet in brief sentences, short paragraphs and basic syntax, with a large dose of visual clarity and deliberate lexical repetitions, all of which serves the aim of better positioning on search engines. The most important thing is that Google likes it. Google, precisely: the same company that in recent years has launched its ambitious machine translation program based on neural networks, the latest great innovation in this field, Google Neural Machine Translation (GNMT), set to replace the current paradigm of Statistical Machine Translation (SMT). Coincidence?
Computer assisted translation
One of the great advances towards the mass automation of translation came with computer assisted translation tools, the so-called “CAT tools”. With them, we translators learned to store translated sequences in a database along with their original versions so that the information could be reused in the future. On top of paying for the software out of our own pockets, we lost out financially because many companies demanded discounts for repetitions.
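To picture what these tools do at their simplest, here is a minimal sketch in Python, assuming a toy exact-match memory; real CAT tools also score partial (“fuzzy”) matches and store far richer metadata, and all names below are invented for illustration rather than taken from any vendor’s software.

```python
# Toy translation memory: pairs each source segment with its stored human
# translation and serves it back when an identical segment reappears.
# Class and method names are invented for illustration, not any vendor's API.

class TranslationMemory:
    def __init__(self):
        self._segments = {}  # source segment -> stored human translation

    def store(self, source, target):
        """Save a translated segment for later reuse."""
        self._segments[source] = target

    def lookup(self, source):
        """Return the stored translation for an identical segment, or None."""
        return self._segments.get(source)


tm = TranslationMemory()
tm.store("Press the power button.", "Pulse el botón de encendido.")

print(tm.lookup("Press the power button."))  # reused: "Pulse el botón de encendido."
print(tm.lookup("Hold the power button."))   # None: no exact match, a human must translate it
```

The reused segments served back by the memory are precisely the “repetitions” that clients then ask to pay at a discount.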
Another consequence of the widespread use of computer assisted translation was that texts were reduced to a series of disjointed sentences. The software engineers who invented these systems forgot about paragraphs, texts, context and discursive coherence. Meanwhile, many human translators stopped making use of procedures like anaphora, cataphora, ellipsis and deixis. In the most extreme cases, they ended up renouncing style altogether and getting dangerously close to the concept of machine translation. The underlying logic is the following: if the original text is repetitive and my client gives me a discount of so much per repetition, it is not in my interest to change what is repeated.
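A hypothetical calculation, with invented rates and word counts, makes that incentive concrete:

```python
# Invented figures purely to illustrate the repetition-discount logic
# described above; real rate grids vary by agency and language pair.
full_rate_cents = 10        # fee per newly translated word, in cents
repetition_rate_cents = 2   # discounted fee per word inside repeated segments

new_words = 6000
repeated_words = 4000

fee_cents = new_words * full_rate_cents + repeated_words * repetition_rate_cents
print(fee_cents / 100)  # 680.0, versus 1000.0 if every word were paid at the full rate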
The short-term mentality of human translators who translate like machines has made the situation worse, as CAT tool developers have lost interest in improving the translator’s work environment and have decided to incorporate machine translation modules into CAT software, with two mutually complementary objectives: (a) getting human translators used to handling machine translation within the methodological and cognitive process of translation; and (b) feeding machine translation systems with the fruits of human translation: translation memories.
What a great idea! The strategy is perfect. We human translators are funding and feeding machine translation data banks while watching our activity become more and more precarious.
In the third and final post I shall try to describe how I imagine the future for professional translators and their various specialities in light of the changes looming on the horizon.
© Marta Pino Moreno.