The Prophecy
Almost half of the professions we know today will have disappeared by 2050 as a result of task automation in developed economies. This Keynesian prediction, quantified in Carl Frey and Michael Osborne’s famous study of the jobs most susceptible to computerisation and corroborated by other economists, paints an uncertain future for the labour market: high unemployment and an increasingly prominent role for robots and computer systems.
The effects of computer automation on employment are not a new phenomenon but the logical consequence of a progressive mechanisation that began with the Industrial Revolution and its aim of cutting labour costs. We have gradually accepted that all routine manual tasks, and even some that are not quite so routine, will eventually be automated, because they can easily be reduced to a set of rules. What is new is that most routine cognitive tasks, and many non-routine cognitive tasks, can now also be done by machine.
How is this possible? How can you automate something that is not reducible to a set of rules? From a computing point of view the answer is, experts claim, simple: it lies in the existence of huge collections of complex digitised data, most of it drawn from the information traffic circulating around the internet. When this immense flow of information is processed, analysed and classified, computing systems “discover” the subtleties behind our most complex acts, their combinatorial tendencies, the “routines” that go unnoticed by the human eye, and so “learn” to carry out complex tasks.
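A minimal sketch, purely to make the idea tangible (a toy in Python, nothing like the actual systems the experts have in mind): no rule is ever written down; the program simply counts which word tends to follow which in a scrap of text, and the “routine” emerges from those frequencies.

```python
from collections import Counter, defaultdict

# Toy illustration, not a real system: learn which word tends to follow which
# from a small sample of text, with no explicit rule written by a programmer.
sample = (
    "the translator reads the text . "
    "the translator translates the text . "
    "the machine reads the text ."
).split()

follow = defaultdict(Counter)
for current_word, next_word in zip(sample, sample[1:]):
    follow[current_word][next_word] += 1

# The "routine" the counts reveal: what usually comes after "the".
print(follow["the"].most_common(3))
# [('text', 3), ('translator', 2), ('machine', 1)]
```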
Does this mean that everything can be boiled down to routine? Is every cognitive action susceptible to computer automation? Although it may seem that computer engineering aspires to achieve that ultimate aim one day, for the time being it is coming up against some fairly major obstacles, the so-called “bottlenecks” of automation: perception in unstructured environments, the handling of irregular objects, creative intelligence, social intelligence and so on. In its headlong rush towards total automation, computerisation once again encounters its nightmare: thought! How are we to formalise and automate the freest thought processes, if even neurobiologists are unable to define them or pinpoint them in the brain?
A paradigmatic example, and one of the most ambitious goals of the computerised processing of large quantities of data, is machine translation. Judging by the latest publications in this discipline, which grew out of computational linguistics, it is taken for granted that computers will end up translating in a way that is intelligible and acceptable at the level demanded by the majority of mortals. Experts think that with some light-handed human intervention in the final phase, the result could be comparable to that of a traditional translation. Along with the astonishment and the natural fear that this scenario prompts among those of us who make a living from the writings of Sartre and other authors, the internet is awash with exabytes of resigned conclusions on the issue, interspersed with advice on how to cope with the situation. And so we are urged to lose our fear. Translators, stop overreacting! Less translation and more “post-editing”: this is the future of your profession, predict the standard-bearers of the new machine translation era. Get used to seeing machine translation systems as your friends, just like your beloved computer-assisted translation programs. You can’t imagine life without them now, can you? Well, machine translation wants to make your life easier too. It can increase your productivity by up to 50%. And it is here to stay. The corollary of this triumphalist approach is this: dear business owners, don’t be chumps who keep hiring human translators. Invest in a good machine translation system and get yourself a native speaker who can iron out the minor slip-ups made by this valuable software. The savings will be immense. You won’t regret it!
This story has become a new religion. Putting an end to the myth of Babel in a globalised world is an old dream of economic and political power. Ever since the Second World War, substantial resources have been invested in developing this technology. Back in the sixties, the appearance of an effective machine translation system was deemed imminent. After several decades of optimistic testing and frustrating results, we have entered the 21st century with the absolute conviction that we shall soon be living in a world of machine translation.
Is that day really so close? Is it possible for a machine to achieve the same result as a human translator, or something similar, without even imitating the complex cognitive processes involved in this activity? Can a piece of software translate a text accurately, coherently and elegantly after having processed zettabytes of aligned pairs of sentences, translated (not always well) by humans (not always professional translators), according to a model built on the premise that this vast corpus will cover all the structures, all the ideas, all the diversity and the endless potential of language? Allow me at least to doubt it…
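For readers who have never looked under the bonnet, that premise can be caricatured in a few lines of Python. This is a deliberately naive sketch, emphatically not how real machine translation engines work, but it shows how far raw co-occurrence counts over aligned pairs go on their own, and what they leave out.

```python
from collections import Counter, defaultdict

# A caricature of the corpus-based premise (not a real MT engine): count which
# target word co-occurs most often with each source word across aligned pairs,
# then "translate" word by word from those counts alone.
aligned_pairs = [
    ("la casa", "the house"),
    ("la mesa", "the table"),
    ("una mesa", "a table"),
    ("una casa verde", "a green house"),
    ("el libro verde", "the green book"),
]

cooccurrence = defaultdict(Counter)
for source, target in aligned_pairs:
    for s_word in source.split():
        for t_word in target.split():
            cooccurrence[s_word][t_word] += 1

def translate(sentence: str) -> str:
    # For each source word, pick its most frequent co-occurring target word.
    return " ".join(cooccurrence[w].most_common(1)[0][0] for w in sentence.split())

print(translate("la casa verde"))
# "the house green": plausible word choices, but word order, idiom and context
# are exactly what the counts never see.
```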
That is, unless we humans get used to speaking and writing like machines, in which case anything goes, a hypothesis which, sadly, I think is more likely, for the reasons I shall try to explain in my second post on this issue.
© Marta Pino Moreno.