Things are changing fast in the world of translation technology. With each passing year, improvements in computational capacity, AI and data analysis expand what is possible in terms of both the speed and accuracy of machine translation. One of the latest in this line of new technologies is neural machine translation (NMT), a deep-learning system that reportedly reduces translation errors by an average of 60%.
This newest development in machine translation has grabbed the attention of tech giants like Google, which has already filed a patent for its own branded version of NMT. Still in its early stages, Google Neural Machine Translation (GNMT) currently works only with the Chinese-English language pair, with more language pairs coming down the pipeline.
But how does neural machine translation work, and what does it mean for the future of translation technology? Much like NMT itself, the answer to that question is a little complicated.
Neural Machine Translation Works in Mysterious Ways
Whereas previous forms of machine translation were rule-based (RBMT) or phrase-based (PBMT), neural machine translation makes the process look less like the work of a computer and more like that of a human.
NMT is designed to imitate the neurons of the human brain. Neurons make connections, learn new information, and assess input as a whole rather than part by part.
Unlike RBMT and PBMT, neural machine translation does not generate sentences in the target language piece by piece. Instead, NMT performs its analysis in two stages: encoding and decoding. In the encoding stage, source-language text is fed to the machine and sorted into a series of linguistic “vectors.” The decoding stage then transfers those vectors into the target language, with no separate generation step (you can read more here).
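To make the two-stage idea concrete, here is a deliberately tiny sketch in Python. Real NMT systems learn their vectors with deep neural networks over huge datasets; in this toy, the embeddings are hand-made, and the two-word vocabularies (`hello`/`world` mapping to `bonjour`/`monde`) are hypothetical examples chosen purely for illustration.

```python
# Toy illustration of the encode/decode flow described above.
# The "vectors" here are hand-crafted stand-ins for what a real
# NMT system would learn from data.

SOURCE_EMBEDDINGS = {          # source word -> "meaning" vector
    "hello": (0.9, 0.1),
    "world": (0.2, 0.8),
}
TARGET_EMBEDDINGS = {          # target word -> vector in the same space
    "bonjour": (0.9, 0.1),
    "monde": (0.2, 0.8),
}

def encode(sentence):
    """Encoding stage: map each source word to its vector."""
    return [SOURCE_EMBEDDINGS[word] for word in sentence.lower().split()]

def decode(vectors):
    """Decoding stage: map each vector to the nearest target word."""
    def closest(vec):
        return min(
            TARGET_EMBEDDINGS,
            key=lambda w: sum((a - b) ** 2
                              for a, b in zip(vec, TARGET_EMBEDDINGS[w])),
        )
    return " ".join(closest(vec) for vec in vectors)

print(decode(encode("hello world")))  # -> bonjour monde
```

The point of the sketch is the shape of the pipeline, not the translation quality: meaning passes through an intermediate vector representation, and the target sentence is recovered from those vectors rather than generated rule by rule.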
If this sounds opaque, that’s because neural machine translation is not very well understood, even within the tech industry. NMT has been described as a mysterious process, and its inner workings have been compared to a black box. Like the human brain it’s modeled after, exactly how NMT and other deep learning programs work is a bit baffling.
As the computing power NMT requires becomes more widely available and more research is performed, these gaps in understanding will hopefully be filled in. But the road to more widely available NMT may be a long one.
A Long Road Ahead
Exactly when neural machine translation will be viable enough for wider distribution can’t be determined. It takes time for systems this complex to learn new language pairs, and developers and researchers are still figuring out how to further improve NMT’s accuracy.
Beyond that, it will be a long time before NMT has the full capacity to replace human translators, according to experts. Before that happens, scientists will have to teach artificial intelligence machines not just to compute or perceive but to cognize: to truly think. Until then, machine translation, no matter how sophisticated, cannot match the accuracy of people.
In the meantime, we have the technology available today, which is continuously developing. Tracking the progress of NMT will be a telling indicator of where humans stand in creating fully capable, human-like AI. But for now, we’ll just have to wait and see.