Treating MTPE as just a brief manual revision of the automatically generated translation before delivery to the end user would never let us achieve the results mentioned by eBay’s Senior Director of Machine Translation and Geo Expansion, Hassan Sawaf: “As we’ve rolled out our MT capabilities, and even before a lot of the education and outreach we plan to do, we’ve quickly increased the number of Russian users we see using these features by 50%”.
For Russian, an inflectional language with complex grammar and rich morphology, every single MT segment has to undergo a multi-stage processing sequence, with several linguists working on it in turn.
– Together with the client, we developed a comprehensive guide to the language-specific conventions of MTPE and made sure that every member of the team adhered to it. The guide was continuously updated.
– Each MT segment was processed with our usual four-step TEP+QA workflow, modified for the project: MT editor / second editor / proofreader / QA specialist performing automated checks.
– Resources for each of these steps were vetted through a special procedure adapted to the project’s specifics. (The content we worked on was intended not for human readers but for training the MT engine.)
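The automated checks in the final QA step can be sketched as a handful of string-level validations. This is a hypothetical minimal example for illustration only, not our production tooling; dedicated QA tools such as Xbench or Verifika cover far more categories:

```python
import re

def qa_check(source: str, target: str) -> list[str]:
    """Run a few simple automated checks on one translated segment.
    Returns a list of human-readable issue descriptions (empty if clean)."""
    issues = []
    # Possibly untranslated segment: target identical to source
    if source.strip() == target.strip():
        issues.append("target identical to source (possibly untranslated)")
    # Number consistency: digits in the source should reappear in the target
    src_nums = re.findall(r"\d+(?:[.,]\d+)?", source)
    tgt_nums = re.findall(r"\d+(?:[.,]\d+)?", target)
    if sorted(src_nums) != sorted(tgt_nums):
        issues.append(f"number mismatch: {src_nums} vs {tgt_nums}")
    # Formatting: double spaces often survive raw MT output
    if "  " in target:
        issues.append("double space in target")
    # Final punctuation parity between source and target
    if source.rstrip()[-1:] in ".!?" and target.rstrip()[-1:] not in ".!?":
        issues.append("missing final punctuation")
    return issues

print(qa_check("Buy 2 items, get 1 free.",
               "Купите 2 товара и получите 1 бесплатно."))  # → []
```

In a real pipeline, checks like these run over every segment in a batch and flag suspicious ones for a linguist to review, rather than auto-fixing anything.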
So this ride was even more complex than a standard localization cycle. Why, then, bother complicating time-tested processes and getting paid less, instead of simply translating from scratch as usual? The answer is that MT is not as black as it is sometimes painted. Judging from our experience with major accounts such as eBay, Cisco and Dell, we do believe that MT is good. But it is certainly not yet capable of replacing human translation: if our goal is the client’s satisfaction, there is always work left for human experts.
We especially doubt that any MT engine can be trained well enough to produce near-final-quality translation for Russian and other complex languages. As a recent article by Memsource states, “Russian, Polish and Korean have lower MT leverage rates, below 40% or even 20% fuzzy matches and 5% complete matches.”
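The fuzzy-match leverage Memsource refers to can be illustrated with a toy similarity check. The function below uses `difflib.SequenceMatcher` as a rough stand-in for the (more sophisticated) match algorithms real CAT tools use; the translation-memory entries are invented for the example:

```python
from difflib import SequenceMatcher

def fuzzy_match(segment: str, tm_entries: list[str]) -> tuple[str, float]:
    """Return the translation-memory entry most similar to the segment,
    with a 0-100 match score. Entries scoring below a threshold
    (commonly around 75%) give little or no leverage to the translator."""
    best, ratio = max(
        ((entry, SequenceMatcher(None, segment.lower(), entry.lower()).ratio())
         for entry in tm_entries),
        key=lambda pair: pair[1],
    )
    return best, round(ratio * 100, 1)

tm = ["Add item to cart", "Remove item from cart", "Proceed to checkout"]
print(fuzzy_match("Add an item to the cart", tm))
```

A 100% score corresponds to the “complete matches” in the quote; the low fuzzy-match rates for Russian, Polish and Korean mean most segments fall below any useful threshold and effectively have to be handled from scratch.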
Back to the eBay case: we think the 50% increase in the number of Russian users was achieved mostly because the content was translated at all. And although MT is not a universal remedy, implementing it played the key role in the success of this particular case.
In many other cases, it’s better to have no translation than a poor one (which raw MT output usually constitutes).
For the lower standard often referred to as “fit for purpose”, light post-editing may suffice: it aims to make the MT output “simply understandable”. However, in our 5+ years of MTPE practice we have never faced an actual project with light MTPE requirements. On the contrary, those of our clients who utilise MT tend to have some of the highest quality expectations. This is probably because they put so much effort into MT deployment: engine training, output evaluation, analytics and statistics, not to mention the actual post-editing work for each language involved. Consequently, they expect the highest quality in return.