Neural Machine Translation: what's under the hood (final)
Neural Machine Translation is gradually bridging the gap between human and machine translation. Despite the excitement around NMT, the technology isn’t perfect yet. The academic world and the large tech companies are in a race to improve the accuracy and output quality of NMT.
NMT on the rise
Many of the high-tech household names are involved in neural machine translation, and Google is probably the most prominent. In November 2016, Google announced Google Neural Machine Translation (GNMT), a major improvement to Google Translate. Only a few weeks ago, Google announced the roll-out of offline Neural Machine Translation (NMT) for 59 languages in its Google Translate apps for iOS and Android. Microsoft and Systran have also launched neural machine translation systems over the past two years, and Amazon launched its Amazon Translate service, allowing users to localize content for international consumers.
On the academic front, language industry website Slator reported that May 2018 marked another record in research output on neural machine translation, with 55 research papers, co-authored by many of the companies mentioned above.
Room for improvement
The encoder-decoder architecture of NMT has led to a huge improvement in translation quality and fluency. Nevertheless, long sentences remain one of the areas with room for improvement. This is largely because the classic NMT architecture encodes the entire input sequence into a single fixed-length vector (read our previous blog post to learn more about vectors). That fixed size limits how much information about the input the system can retain, which results in poor translations for very long sentences.
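The bottleneck is easy to see in a toy sketch. Here a hypothetical encoder simply averages the word vectors of a sentence (real encoders use recurrent or transformer layers, but the principle is the same): whether the sentence has 5 words or 50, the output is the same fixed-length vector, so longer sentences must squeeze more information into the same space.

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_dim = 8  # illustrative size; real systems use hundreds of dimensions

def encode(token_vectors):
    # A stand-in for the encoder: the WHOLE sentence is compressed
    # into one fixed-length vector, regardless of sentence length.
    return token_vectors.mean(axis=0)

short_sentence = rng.normal(size=(5, embedding_dim))   # 5 word vectors
long_sentence = rng.normal(size=(50, embedding_dim))   # 50 word vectors

# Both sentences end up as vectors of identical, fixed size.
print(encode(short_sentence).shape, encode(long_sentence).shape)
```

Ten times more words, but no more room to represent them: that is, in miniature, why translation quality degrades on very long inputs.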
One way of tackling this issue is by implementing attention mechanisms. Simply put, an attention mechanism enables the neural network to focus on the relevant parts of the input rather than the irrelevant parts when making a prediction. Attention mechanisms in neural networks are loosely inspired by how humans capture visual information or translate sentences. When translating a long sentence without any technology, we focus on a specific word or phrase at a time, no matter where it is located in the input sentence. Attention models recreate this behaviour for neural networks.
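The core idea can be sketched in a few lines of numpy. This is a minimal, generic dot-product attention (not the exact mechanism of any particular NMT system): each input position gets a score against the current query, the scores become weights via a softmax, and the output is a weighted sum in which relevant positions dominate.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def attention(query, keys, values):
    # Score every input position against the query (scaled dot product).
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = softmax(scores)   # higher weight = more "focus" on that position
    context = weights @ values  # weighted sum of the input representations
    return context, weights

# Toy example: 4 input positions with one-hot keys (hypothetical values).
d = 4
keys = np.eye(d)
values = np.arange(4 * d, dtype=float).reshape(4, d)
query = np.array([0.0, 10.0, 0.0, 0.0])  # query strongly matches position 1

context, weights = attention(query, keys, values)
# The weights peak at position 1, so the context vector is dominated
# by values[1] — the network "attends" to that part of the input.
```

The key property is that this focus works regardless of where the relevant word sits in the sentence, which is exactly what the fixed-length encoding struggles with.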
NMT requires a lot of processing power, which is still one of its main drawbacks: both training and translation are more computationally demanding than statistical machine translation. This may be one of the reasons the translation industry has not fully embraced NMT yet, and why the development of NMT is not moving as fast as it could. However, if processing power keeps doubling roughly every 18 months, as Moore's law suggests, new opportunities for NMT will open up in the near future.
Neural machine translation for technical documents
NMT is not perfect yet. The translation of large technical documents in particular, where translation quality is critical, remains challenging for NMT and still requires human post-editing. Technical manuals usually combine text with metadata, which is hard for machine learning systems to process. We, for our part, are keeping a close eye on how the industry and the academic world are further perfecting this technology.
Meanwhile, if you have questions about what neural machine translation can do for your technical content, let us know. We will explain it to you in natural, human-like language.
Posted by Yamagata Europe on 31 August