In a rapidly changing AI translation landscape, Linguistic Quality Evaluation (LQE) can be a powerful tool for the objective assessment of machine translation output, and for tackling client perceptions of raw MT.

This ATC blog introduces some of the existing LQE methods and metrics, and the ways in which language technology companies are implementing them for use in MT quality evaluation.

ISO 5060

The ISO 5060 standard, Translation services – Evaluation of translation output, provides general guidance on evaluating output from human translation and from post-edited or unedited machine translation. ISO 5060 focuses on an analytic evaluation approach: a segment-based comparison of target-language content against the source, using an error typology framework with error types and penalty points to produce an objective error score and quality rating for the translation output.
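
To make the analytic approach concrete, here is a minimal sketch of segment-based, penalty-point scoring in the style ISO 5060 describes. The error types, penalty values, and pass threshold below are illustrative assumptions, not values taken from the standard.

```python
# Illustrative analytic evaluation: reviewers log typed errors per
# segment, penalties are summed and normalised per 1,000 words.
# All weights and the threshold are hypothetical examples.

PENALTIES = {"minor": 1, "major": 3, "critical": 9}

def score_translation(errors, word_count, threshold=10.0):
    """Sum penalty points, normalise per 1,000 words, apply a pass threshold."""
    total = sum(PENALTIES[severity] for _error_type, severity in errors)
    per_1000 = total / word_count * 1000
    return {
        "penalty_points": total,
        "per_1000_words": round(per_1000, 1),
        "pass": per_1000 < threshold,
    }

# Errors logged against the source: (error type, severity)
errors = [("accuracy", "major"), ("terminology", "minor")]
print(score_translation(errors, word_count=850))
# → {'penalty_points': 4, 'per_1000_words': 4.7, 'pass': True}
```

Normalising per 1,000 words keeps scores comparable across jobs of different lengths, which is what makes the resulting quality rating objective rather than impressionistic.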

Multidimensional Quality Metrics

Multidimensional Quality Metrics (MQM) is a framework for translation quality evaluation that can be applied to human translation, machine translation, and AI-generated translation alike. The MQM website provides comprehensive information, error typology data, and scorecards for implementing analytic translation quality evaluation.
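
An MQM scorecard turns severity-weighted annotations into a single quality score. The sketch below uses the severity multipliers commonly seen in MQM scorecards (neutral 0, minor 1, major 5, critical 25); the error dimensions and sample counts are illustrative assumptions.

```python
# Minimal MQM-style scorecard: severity-weighted penalties are
# normalised by word count to yield a 0-100 quality score.
# Weights follow common MQM practice; the data is made up.

SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 25}

def mqm_quality_score(annotations, word_count):
    """Convert (dimension, severity) annotations into a 0-100 score."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _dimension, severity in annotations)
    return max(0.0, 100.0 * (1 - penalty / word_count))

annotations = [
    ("accuracy/mistranslation", "major"),
    ("fluency/spelling", "minor"),
    ("style", "neutral"),
]
print(round(mqm_quality_score(annotations, word_count=300), 1))
# → 98.0
```

Because every annotation carries both a dimension and a severity, the same data can also be sliced per dimension to show a client where an engine is weakest.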

TAUS Quality Estimation Guide

TAUS’ Quality Estimation Guide provides a comprehensive overview of quality estimation, a technology that uses advanced algorithms to predict the quality of machine-generated content. The guide focuses on the critical areas of risk management, reducing post-editing effort, and benchmarking machine translation engines.

memoQ AIQE

With memoQ AIQE, language service providers can effectively address the risks associated with unreliable machine translation quality. AIQE is available via two separate integrations: TAUS’ generic model, which works as an out-of-the-box solution, and ModelFront’s custom models.

RWS Trados MTQE via Language Weaver

RWS Language Weaver’s built-in MT Quality Estimation is an AI model designed to assess MT output quality. It was trained by RWS’ own in-house language specialists, who use human judgement to rate translation results based on real-world post-editing scenarios.

Bureau Works’ ChatGPT quality evaluation

In a new take on LQE, Bureau Works’ blog discusses how ChatGPT can help evaluate translation projects at scale: quickly and accurately assessing translation quality, identifying areas that need improvement, and making the corrections needed to enhance the overall quality of the translation.
