Deep Learning - European project QT21 tops international Machine Translation competition for the second year running

Language technologies, Machine Translation, Research
QT21 Neural Translation

QT21, a Machine Translation research project funded by the European Commission and coordinated by the German Research Center for Artificial Intelligence (DFKI), has again reached a significant milestone this summer: for the second time in a row, QT21 ranked first on more than 80% of the translation tasks organized as benchmarks by the Conference on Machine Translation (WMT17), winning all tests on languages considered complex, such as Latvian and Chinese.

WMT started in 2006 as a Workshop on Statistical Machine Translation (MT) with the goal of providing a well-defined and controlled framework for evaluating, and consequently improving, MT technologies. MT techniques are evaluated along different dimensions, called “shared tasks.” Each shared task involves the creation and distribution of training data, the creation of test data, the definition of an evaluation protocol, the infrastructure to collect participant submissions, and the automatic and manual scoring of those submissions. It ends with the publication of results for each task and a ranking of all submissions.

The central challenge is the News Translation task. Each participant receives a test set of source sentences for a given language pair (e.g. English–German), which their system must translate into the target language. To make the competition as realistic as possible, the texts are extracted from general newspapers in the different languages of the task. To improve the comparability of technologies and reduce data bias, all WMT participants receive the same training data on which to train their systems. In addition, WMT includes online systems such as Google Translate and Bing Translator (which of course do not follow the data constraint), systematically adding their translations to the test set and thus enabling a comparison with the largest MT systems.

The evaluation of the MT systems is done in two ways: automatic scoring based on pre-defined metrics such as TER and BLEU, and manual scoring, which is done by collecting large amounts of subjective judgments on translation quality from human annotators.
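To give an idea of what automatic scoring measures, BLEU essentially combines clipped n-gram precision against a reference translation with a brevity penalty. The following is a deliberately simplified, single-reference sentence-level sketch (the function and the toy sentences are our own; official WMT scoring uses standardized corpus-level tooling):

```python
# Minimal, illustrative sentence-level BLEU sketch (not the official WMT scorer).
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(cand_ngrams.values()), 1)
        # Floor avoids log(0) for missing n-gram orders (a crude stand-in
        # for the smoothing real implementations apply).
        log_prec += math.log(max(overlap, 1e-9) / total) / max_n
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(log_prec)
```

A perfect match scores 1.0, and a translation sharing no words with the reference scores near 0; real evaluations average such statistics over whole test sets.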

The WMT shared-task competitions are open to every research lab and commercial system developer in the world. By making submissions, participants can assess MT technologies and compare approaches. Furthermore, the knowledge is shared in system description papers and published in an overall findings paper.

The WMT 2016 conference brought a paradigm shift with the introduction of deep neural networks. QT21 took this emerging technology and enhanced it to create a new state of the art, winning all translation tasks and bringing large improvements for morphologically rich languages like German and Czech. Notably, QT21 systems outperformed online systems like Google Translate and Bing Translator.

This achievement was made possible by training deep neural networks in conjunction with a pre-processing step that creates artificial sub-word units, or segments, based on Gage's byte pair encoding compression algorithm (1994), in which characters or character sequences are merged instead of frequent pairs of bytes. Two months after this historic 2016 breakthrough, online translation systems like Google Translate also shifted to neural networks and made widely publicized announcements about their improvements, while acknowledging the work done in Europe: their list of the six papers most relevant to their improvement included a major QT21 paper.
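The merging step behind these sub-word units can be sketched in a few lines: repeatedly find the most frequent adjacent symbol pair across a frequency-weighted vocabulary and fuse it into a new symbol. This toy implementation and its example words are illustrative only, not the QT21 systems' actual pre-processing code:

```python
# Toy byte-pair-encoding-style sub-word merging (illustrative sketch only).
from collections import Counter

def most_frequent_pair(vocab):
    """Count adjacent symbol pairs across all words, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in vocab.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(vocab, pair):
    """Replace every occurrence of `pair` with a single fused symbol."""
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Hypothetical corpus: words as character tuples with their frequencies.
vocab = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2,
         ("n", "e", "w", "e", "s", "t"): 6}
for _ in range(3):  # learn three merge operations
    vocab = merge_pair(vocab, most_frequent_pair(vocab))
```

After a few merges, frequent character sequences become single vocabulary units, letting a neural system represent rare and morphologically complex words as compositions of learned segments.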

At this year’s WMT17 shared-task competition (7–8 September 2017), QT21 successfully kept its pole position on almost all language pairs. As seen in figure 1, the online systems are catching up in 2017, but the huge amounts of data they can train their neural networks on are not enough to overcome QT21’s technology advantage on the WMT test data.

QT21 stands for Quality Translation 21, 21 being the 22 European languages minus English. The goal of this Research and Innovation Action (RIA) project is to improve the quality of translation technologies for difficult languages: those that are morphologically rich (e.g. German, Czech), those with free word order (e.g. Czech), and those with fewer resources (e.g. Latvian, Romanian). The QT21 partners include top research teams in Europe as well as Language Technology and Translation Services companies. The project is coordinated by DFKI’s research department Multilingual Technologies, headed by Prof. Dr. Josef van Genabith.

QT21 partners:

- Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI), Germany

- Rheinisch-Westfälische Technische Hochschule Aachen (RWTH), Germany

- Universiteit van Amsterdam (UvA), Netherlands

- Dublin City University (DCU), Ireland

- University of Edinburgh (UEDIN), United Kingdom

- Karlsruher Institut für Technologie (KIT), Germany

- Centre National de la Recherche Scientifique (CNRS), France

- Univerzita Karlova v Praze (CUNI), Czech Republic

- Fondazione Bruno Kessler (FBK), Italy

- University of Sheffield (USFD), United Kingdom

- TAUS b.v. (TAUS), Netherlands

- text & form GmbH (TAF), Germany

- Hong Kong University of Science and Technology (HKUST), Hong Kong

More information: