There is no doubt technology has changed our lives and careers. And the development of machine learning technologies has the capacity to continue improving how we live.
But could the development of machine learning (ML) come at the expense of human labour? The vast community of global translators can already feel its digital breath on the back of their necks.
The concept of machines replacing humans is nothing new. Advancements in industrial technology have been scything down workforces since the 18th century.
Now that the digital age is in full swing, computer technology has the potential to replace more roles reserved for qualified humans than anything that came before it.
Machine translation (MT) is a major talking point. Technology companies have been developing machine learning technologies in an effort to streamline translation services since the early 1990s.
Technology companies, of course, gloss over the inadequacies with marketing speak. In reality, MT is not much further forward now than it was 20 years ago.
Google recently announced that its automated translation tool, Google Translate, now incorporates 103 languages – a figure Google says covers the languages used by 99% of the online population.
But whilst Google Translate makes translation free and accessible, users quickly discover that the quality of its output is insufficient, particularly in business settings. Working with a translation agency therefore remains the only real choice for professionals.
Machine technology can substitute single words from one language to another, and most of the time the result will make sense. But interpreting language means handling whole sentences in the context in which they are intended to be received.
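The word-for-word approach described above can be sketched in a few lines of Python. The tiny dictionary and the French idiom below are purely illustrative; the point is how context-free substitution loses the meaning a human translator would preserve:

```python
# A toy word-for-word "translator" (illustrative only). Real MT systems
# are far more sophisticated, but this shows why substituting words one
# at a time breaks down on idiomatic language.

word_dict = {
    "il": "it", "pleut": "rains", "des": "of",
    "cordes": "ropes",  # "il pleut des cordes" really means "it's raining cats and dogs"
}

def word_for_word(sentence):
    """Substitute each word independently, keeping unknown words as-is."""
    return " ".join(word_dict.get(w, w) for w in sentence.lower().split())

print(word_for_word("Il pleut des cordes"))  # -> "it rains of ropes"
```

Each word is translated correctly in isolation, yet the sentence as a whole is nonsense – exactly the gap between substitution and interpretation.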
Interpretation companies have adopted translation software in an effort to increase output. Such software does have its uses with formulaic translations, but has limitations in specialist niches and localisation.
There are also usability issues. Translators have to switch between systems to obtain the files they need to work with, and there is still plenty of room for typical human error that technology companies have not addressed.
In the main, the development of translation tools has largely stagnated. Purely machine-driven learning failed decades ago. The modern approach is to have humans teach the machines.
The evolution of machine translation
Google Translate develops its use of languages within the internet community. Google gets real people – average internet users and professional linguists – to provide interpretations and verify existing translations.
In principle, the process seems sensible. Native speakers commenting on how they use language and interpret similar expressions into other languages has the potential to be accurate and cost-effective.
Yet there is a fundamental problem with this solution. Tim Adams writing for The Guardian explains:
“The translation, which approximates to the best “human” version of the sentence, looks like a triumph for what used to be called artificial intelligence and now is called, less ambitiously, machine learning. The computer can understand language, we are invited to think.”
But can machine learning really understand language or is it merely a smokescreen for corporations to understand how we use language?
The technicalities involved in programming a machine to learn languages were deemed insurmountable in the mid-20th century, prompting technology companies to find an alternative solution.
Using the computational power of machines for translation began during the Cold War, when American intelligence attempted to decode Russian. The project failed miserably.
IBM made a breakthrough in the 1990s. The technology company compared texts that had been translated on numerous occasions to work out the most probable meanings of words in context.
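IBM's statistical idea – estimating how likely one word is to translate to another from previously translated text – can be illustrated with a toy co-occurrence count. The three sentence pairs below are invented for illustration; real systems train on millions of translated segments:

```python
from collections import Counter

# Toy parallel corpus: (source sentence, translated sentence) pairs.
# Invented examples standing in for a large archive of past translations.
corpus = [
    ("the house", "la maison"),
    ("the car", "la voiture"),
    ("a house", "une maison"),
]

# Count how often each source word co-occurs with each target word.
cooc = Counter()
src_counts = Counter()
for src, tgt in corpus:
    for s in src.split():
        src_counts[s] += 1
        for t in tgt.split():
            cooc[(s, t)] += 1

def translation_prob(s, t):
    """Estimate P(target word t | source word s) from co-occurrence counts."""
    return cooc[(s, t)] / src_counts[s]

print(translation_prob("house", "maison"))  # -> 1.0: always co-occur
print(translation_prob("the", "voiture"))   # -> 0.5: only half the time
```

The more often two words appear together across translated pairs, the higher the estimated probability – the core intuition behind statistical MT.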
Google, Yahoo and Microsoft's Bing use this process to match websites with search terms. Last year, Google introduced RankBrain, which digs deeper into the contextual meaning of search terms.
Machine learning (ML) works to a degree. But technology companies have still not perfected translations in a way that businesses can communicate effectively and professionally to an international audience. Linguists have.
When IBM researcher Frederick Jelinek pioneered the new style of machine translation, he said: “Whenever I fire a linguist, the performance of our system improves.”
Twenty years later, the quality of machine translations is still not good enough to replace linguists in important fields such as law and marketing.
How does machine learning technology work?
The science and technical issues involved in machine learning are very complex. At present there are limitations that cause products to fall short of the quality required at a business level.
Computers use algorithms to detect patterns in language and save them as a model. Training examples signposted as “labelled data” feed one of two broad categories of learning: supervised or unsupervised.
Supervised algorithms learn from data that has been labelled through human interaction and apply what they have learned to new data. As we see with search engines and automated translation tools, this evolution is a slow process.
Unsupervised algorithms work on unlabelled data, exploring its structure to group items with similar attributes. This accounts for around 10–20% of machine learning applications.
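The supervised/unsupervised distinction can be sketched with two stripped-down examples. Both are hypothetical miniatures: a nearest-centroid classifier standing in for supervised learning, and a single similarity-based grouping step standing in for unsupervised clustering:

```python
# Supervised: labelled examples teach the model, which then labels new data.
labelled = [([1.0, 1.0], "short"), ([1.2, 0.9], "short"),
            ([8.0, 8.5], "long"), ([7.8, 9.0], "long")]

def centroid(points):
    """Average position of a set of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x):
    """Assign x the label whose class centroid is nearest."""
    labels = {lab for _, lab in labelled}
    cents = {lab: centroid([p for p, l in labelled if l == lab]) for lab in labels}
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda lab: dist(x, cents[lab]))

print(classify([1.1, 1.0]))  # -> "short"

# Unsupervised: no labels at all; group points purely by similarity to seeds.
unlabelled = [[1.0, 1.1], [0.9, 1.0], [8.1, 8.0], [7.9, 8.2]]
seeds = [unlabelled[0], unlabelled[2]]
dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
groups = [min(range(2), key=lambda k: dist(p, seeds[k])) for p in unlabelled]
print(groups)  # -> [0, 0, 1, 1]: structure emerges without any labels
```

The first half needs a human to supply the labels; the second finds structure on its own, which is exactly the trade-off described above.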
Machine learning is already prevalent online and for the most part does a fairly decent job, if not an entirely accurate one. Social media news feeds and voice-assisted mobile technology are both examples of machine learning.
Other examples include:
- Gaming companies analyse user behaviour to suggest upgrades that could increase player revenues
- Google's self-driving car maps road networks
- Email accounts use ML to detect spam
- Face detection on cameras was achieved through ML
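The spam-detection example from the list above can be sketched as a minimal naive-Bayes-style word scorer. The handful of training messages is invented for illustration; real filters train on large labelled mail corpora:

```python
import math
from collections import Counter

# Tiny labelled training set (invented for illustration).
spam = ["win free money now", "free prize claim now"]
ham = ["meeting agenda for monday", "lunch at noon"]

spam_words = Counter(w for m in spam for w in m.split())
ham_words = Counter(w for m in ham for w in m.split())

def spam_score(message):
    """Sum per-word log-odds of spam vs ham, with add-one smoothing
    so unseen words don't produce a division by zero."""
    s_total = sum(spam_words.values()) + len(spam_words)
    h_total = sum(ham_words.values()) + len(ham_words)
    score = 0.0
    for w in message.split():
        p_spam = (spam_words[w] + 1) / s_total
        p_ham = (ham_words[w] + 1) / h_total
        score += math.log(p_spam / p_ham)
    return score

# Messages full of "spammy" words score higher than ordinary ones.
print(spam_score("free money") > spam_score("meeting at noon"))  # -> True
```

Positive scores lean spam, negative scores lean legitimate mail – a crude version of the statistical filtering most email providers run.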
Developers behind machine learning initiatives recognise that customisation is still required for products to be accurate. This is certainly the case with MT technology.
Science may do a lot of the legwork to make translation tasks easier to perform and increase productivity, but scientists have not yet solved the conundrum of making the tools truly effective.
What next for translation machines?
It would make sense if translation machines were crafted by qualified linguists that followed specific rules and translated millions of words.
A project of this nature would lay the foundations for translation machines to work effectively, but it would also be extremely expensive. And despite yielding revenues in the region of $60bn a year, Google opted for the cheaper option.
The company primarily uses its search engine to translate huge volumes of text and a machine learning algorithm to detect patterns in language use.
Book scanning projects and professionally crafted documents have been incorporated to build a reliable database of words, sentences and contexts in an effort to improve the quality of interpretations.
Progress has been made, and whilst Google Translate has its uses in social circles, the automated program lacks the quality of interpretation needed by professionals.
Despite the critical flaws with machine learning algorithms, technology companies have not been deterred from launching products that assist with language translation.
Last year, Microsoft launched Skype Translator which uses voice recognition technology to take spoken words, convert them into text and translate the text into your chosen language.
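The pipeline described above – speech to text, then text to translation – can be sketched with stub stages. Every function body here is a placeholder of my own invention (this is not Microsoft's API); only the shape of the chain is the point:

```python
# Sketch of a speech-translation pipeline. Each stage is a hypothetical
# placeholder standing in for a real recognition or translation component.

def recognise_speech(audio):
    """Placeholder: a real system would run speech recognition on audio."""
    return audio["transcript"]  # pretend the audio object carries its transcript

def translate_text(text, target_lang):
    """Placeholder: a real system would call a trained translation model."""
    toy_table = {("hello", "fr"): "bonjour"}  # invented lookup, not a real model
    return toy_table.get((text, target_lang), text)

def speech_translator(audio, target_lang):
    """Chain the stages: audio -> recognised text -> translated text."""
    return translate_text(recognise_speech(audio), target_lang)

print(speech_translator({"transcript": "hello"}, "fr"))  # -> "bonjour"
```

A design consequence worth noting: errors compound across the chain, so a misrecognised word is translated wrongly too – one reason spoken translation lags even further behind text translation.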
The technology could revolutionise how we communicate with distant relatives who speak a foreign language and help newly arrived expats communicate with friends in a new country.
Whilst voice recognition has come along in leaps and bounds in the past half decade, technology companies are no nearer perfecting translation machines now than they were two decades ago.
Corporations that have international partners may also find some use for Skype’s voice technology to conduct trans-continental meetings. But there is still the issue of how well the translation will be understood.
Case in point: Google Translate.
A thread on Quora asking: “Can Google Translate be trusted as an accurate translator?” received a unanimous “No” from respondents. Many were professional translators.
Whereas machine learning can help corporations understand how their customers use the internet and recognise our speech patterns, the reliability of MT does not make the grade.
According to The New Stack, “experts warn that machine learning can’t solve two issues regardless of the predictive capacity of the new tools:
- Solving unique problems for a particular business use case, and
- Cleaning the data in the first place so that it is valuable in a machine learning workflow.”
There may be many valuable uses for machine learning, but at the moment, automated translation is not one of them.
Is there a future for translation machines?
The progress of translation technology has been slow and largely ineffective. Whether there is scope for machines to replace humans in the field of translation hangs in the balance. Opinion is divided.
Claims like Microsoft’s – that translation machines can be integrated into a standard localisation process – are half-true. You can integrate them. But does it work?
The development of translation machines is in transition. New translation platforms combine machine learning with manual edits performed by humans.
The latest machine learning tools enable translation systems to learn from interpretations programmed within their own ecosystem rather than universal interpretations, which invariably create conflicting results.
With this approach there may be scope for translation companies to make better use of machines and cut down on manual labour.
Franz Och, the lead technician on Google’s MT project, believes technology will break down language barriers. “It will allow anyone to communicate with anyone else,” he says.
Och may have a point, to a point. MT does open the doors of communication, but whether it will ever be good enough for professionals to use is questionable. It seems inevitable that human supervision will be required for a long time to come.
Nicholas Ostler, chairman of the Foundation for Endangered Languages, sees both sides of the argument. Fluent in 26 languages, Ostler believes language algorithms will eventually remove the need to learn foreign languages, but contends that the technology currently lacks efficiency.
Ostler says: “Even if you don’t like what it says, you can immediately make sense of what it gives you or compare it with what you know. It still needs constructive intelligence from the user. But the fact is that it is much better than it used to be and no doubt it will continue to improve.”
It seems likely that ML technology will eventually develop into a viable product that can accurately translate a sentence in context or capture the intended meaning of a marketing ad.
The future may be easy to predict. But nobody knows when it will happen.
Andreas Zollmann, who has a PhD in Language Technologies and has worked on Google Translate, has been researching MT for years and is not as optimistic.
“No researcher would expect it [MT] ever to become perfect,” he says. “Pronouns, say, are very difficult in some languages where the masculine and feminine don’t correspond to each other.”
Mass production achieves lower-quality results, and the idea that ever more data can be introduced to improve the systems is most likely based on a false premise.
Zollmann revealed “there isn’t much more data in the world we can use.” He believes the solution is to develop models based on the rules of languages. And to do that you will need the input of qualified linguists.
If machine translation platforms are ever to replace humans, the technology needs the knowledge of qualified translators to make it work effectively. Linguists therefore still have a future. For how long remains to be seen!
Tell us your thoughts about translation machines and free tools like Google Translate. Do you think machine learning technologies will eventually replace linguists and eradicate the need to learn different languages?