Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost

Paper | GitHub | Hugging Face Collection

Abstract

Recent advancements in large reasoning models (LRMs) have introduced an intermediate "thinking" process prior to generating final answers, improving their reasoning capabilities on complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to "overthink" simpler instances, and have issues with scoring mechanisms that lead to overestimation. To address these challenges, we propose calibrating LRM thinking by training the models on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmark demonstrate that this approach reduces thinking budgets by ~35x while concurrently improving evaluation performance across LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation-point improvement). These findings highlight the potential of efficiently calibrated LRMs to advance fine-grained automatic MT evaluation.

Metrics

Results on the WMT24 Metrics benchmark:

| Metric/Model | Avg. | En-De SPA (%) | En-De $Acc^*_{eq}$ | En-Es SPA (%) | En-Es $Acc^*_{eq}$ | Ja-Zh SPA (%) | Ja-Zh $Acc^*_{eq}$ |
|---|---|---|---|---|---|---|---|
| QwQ-32B | 68.3 | 79.8 | 46.8 | 76.1 | 68.0 | 91.9 | 46.9 |
| + ThinMQM | 72.2 (+3.9) | 83.2 (+3.4) | 52.5 (+5.7) | 80.7 (+4.6) | 69.2 (+1.2) | 91.3 (−0.6) | 56.1 (+9.2) |
| R1-Distill-Llama-8B | 64.9 | 71.8 | 42.9 | 78.5 | 68.0 | 84.7 | 43.5 |
| + ThinMQM | 70.8 (+5.9) | 85.5 (+13.7) | 48.6 (+5.7) | 81.3 (+2.8) | 68.2 (+0.2) | 90.5 (+5.8) | 51.0 (+7.5) |
| R1-Distill-Qwen-7B | 61.1 | 67.3 | 42.9 | 61.0 | 68.0 | 83.8 | 43.5 |
| + ThinMQM | 69.8 (+8.7) | 84.5 (+17.2) | 48.5 (+5.6) | 77.8 (+16.8) | 68.0 (+0.0) | 89.0 (+5.2) | 51.3 (+7.8) |

📖 Introduction

Evaluating machine translation (MT) quality is a complex task that extends beyond simple string matching. Large Reasoning Models (LRMs) are capable of modeling intricate reasoning processes, yet their role in MT evaluation remains insufficiently understood. In this work, we present a systematic investigation into the use of LRMs as evaluators for MT quality, specifically exploring their ability to replicate the Multidimensional Quality Metrics (MQM) assessment process. Our analysis across various LRMs reveals that evaluation materials must be carefully tailored, as these models tend to overanalyze simple cases and exhibit overestimation biases. To address these challenges, we introduce a simple yet effective method for calibrating LRM reasoning by training them on synthetic, human-like MQM evaluation trajectories. Our experiments show that this approach not only reduces the thinking budget required by LRMs but also enhances evaluation performance across different model scales. These findings underscore the potential of efficiently calibrated LRMs to advance fine-grained, automatic MT evaluation.
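
Since ThinMQM trains LRMs to reproduce the MQM assessment process, it helps to recall how MQM turns annotated error spans into a segment-level score. The sketch below uses the common WMT MQM severity weights (minor = 1, major = 5, non-translation = 25) purely for illustration; treat the exact weights as an assumption rather than a description of our synthetic trajectories.

# Illustrative MQM-style aggregation: error spans are mapped to severity
# penalties and summed into a (negative) segment-level score.
# The weights follow the common WMT MQM convention and are an assumption here,
# not necessarily the exact values used in the ThinMQM training data.
SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0, "non-translation": 25.0}

def mqm_segment_score(errors):
    """errors: list of (category, severity) pairs, e.g. [("accuracy/mistranslation", "major")]."""
    return -sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)

# One major accuracy error plus one minor fluency error -> -6.0
print(mqm_segment_score([("accuracy/mistranslation", "major"),
                         ("fluency/punctuation", "minor")]))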


🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/NLP2CT/ThinMQM.git
cd ThinMQM

# Install dependencies
pip install -r requirements.txt

# Install mt-metrics-eval evaluation package & Prepare benchmark data
git clone https://github.com/google-research/mt-metrics-eval.git
cd mt-metrics-eval
pip install .
mkdir $HOME/.mt-metrics-eval
cd $HOME/.mt-metrics-eval
wget https://storage.googleapis.com/mt-metrics-eval/mt-metrics-eval-v2.tgz
tar xfz mt-metrics-eval-v2.tgz
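
Optionally, run a quick sanity check to confirm the benchmark data loads. This is a minimal sketch assuming the Python API described in the mt-metrics-eval README (data.EvalSet); adjust the EvalSet name and language pair to match the data you downloaded.

# Sanity check for the downloaded WMT24 data (assumes the mt_metrics_eval API
# from google-research/mt-metrics-eval; names may differ across releases).
from mt_metrics_eval import data

evs = data.EvalSet("wmt24", "en-de")
print("systems:", len(evs.sys_names))
print("segments:", len(evs.src))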

Basic Usage

1. Complete Workflow for running WMT24 experiments.

# Step 1: Generate responses (using existing scripts)
# For ThinMQM model
bash scripts/run_thinmqm.sh 
# For general-purpose LRMs using GEMBA prompt
bash scripts/run_gemba.sh

# Step 2: Extract answers and run meta-evaluation
bash scripts/run_metaeval.sh

Please refer to the comments in the scripts to adapt them to your environment. For hyperparameter options, see 🔧 Configuration.

2. Custom Input Files.

You can evaluate your own translation data with custom input files:

Example Data:

# Run the example script to see how it works
python example_custom_evaluation.py

Example CLI Usage:

MODEL_NAME_OR_PATH="/path/to/rzzhan/ThinMQM-32B"  # Replace with your actual model path

# Set your data paths
SOURCE_FILE="cli_example_data/source.txt"
REFERENCE_FILE="cli_example_data/reference.txt"
SYSTEM_OUTPUTS_DIR="cli_example_data/system_outputs"
OUTPUT_DIR="cli_example_data/results"

SOURCE_LANG="English"
TARGET_LANG="Chinese"
TEMPLATE="thinking"  # For ThinMQM: "thinking" (32B) or "thinking_ref" (7/8B)

# Run ThinMQM evaluation
python main.py custom_thinmqm \
    --model_name="$MODEL_NAME_OR_PATH" \
    --source_file="$SOURCE_FILE" \
    --reference_file="$REFERENCE_FILE" \
    --system_outputs="$SYSTEM_OUTPUTS_DIR" \
    --output_dir="$OUTPUT_DIR" \
    --source_lang="$SOURCE_LANG" \
    --target_lang="$TARGET_LANG" \
    --template="$TEMPLATE" \
    --max_new_tokens=4096 \
    --temperature=0.6

🔧 Configuration

📁 Project Structure

├── config/                 # Configuration management
│   └── experiment_config.py
├── evaluators/             # Specific evaluator implementations
│   ├── base_evaluator.py   # Core base classes
│   ├── thinmqm_evaluator.py
│   ├── gemba_evaluator.py
│   └── meta_evaluator.py
├── utils/                  # Utility functions
│   ├── answer_extractor.py
│   ├── template_utils.py
│   ├── mqm_parser.py
│   └── process_results.py
├── scripts/                # Shell scripts for easy execution
│   ├── run_thinmqm.sh
│   ├── run_gemba.sh
│   └── run_pipeline.sh
├── main.py                 # Main entry point
└── meta_eval_pipeline.md   # Meta-evaluation entry point

Model & Data Card

| Released Model | HF Model | Template | Trained Dataset |
|---|---|---|---|
| rzzhan/ThinMQM-32B | https://huggingface.co/rzzhan/ThinMQM-32B | thinking | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ (thinmqm12k_src) |
| rzzhan/ThinMQM-8B | https://huggingface.co/rzzhan/ThinMQM-8B | thinking_ref | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ (thinmqm12k_ref) |
| rzzhan/ThinMQM-7B | https://huggingface.co/rzzhan/ThinMQM-7B | thinking_ref | https://huggingface.co/datasets/rzzhan/ThinMQM-12k/ (thinmqm12k_ref) |

Recommended decoding with temperature=0.6, top_p=0.95.
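
Below is a minimal inference sketch using these settings. It assumes vLLM as the backend and uses a placeholder prompt; the official "thinking"/"thinking_ref" templates are built in utils/template_utils.py and driven through main.py.

# Minimal inference sketch with the recommended decoding settings.
# Assumes vLLM; the prompt is a placeholder, not the official ThinMQM template.
from vllm import LLM, SamplingParams

llm = LLM(model="rzzhan/ThinMQM-8B")
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)

prompt = (
    "Evaluate the following English-to-Chinese translation using MQM.\n"
    "Source: The weather is nice today.\n"
    "Reference: 今天天气很好。\n"
    "Translation: 今天的天气不错。\n"
)
outputs = llm.chat([{"role": "user", "content": prompt}], params)
print(outputs[0].outputs[0].text)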

ThinMQM Model Templates

  • thinking: Source + translation evaluation
  • thinking_ref: Source + reference + translation evaluation

GEMBA Templates

  • src: Source + translation evaluation
  • ref: Reference + translation evaluation
  • joint: Source + reference + translation evaluation (see the sketch below for which fields each template consumes)
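
To make the template options concrete, here is a purely illustrative sketch of which inputs each mode consumes; the actual prompt wording is defined in utils/template_utils.py, and these format strings are placeholders.

# Illustrative only: shows which inputs each template mode consumes.
# The actual prompts are defined in utils/template_utils.py.
def build_input(template, src=None, ref=None, hyp=None):
    if template == "thinking":      # ThinMQM-32B: source + translation
        return f"Source: {src}\nTranslation: {hyp}"
    if template == "thinking_ref":  # ThinMQM-7B/8B: source + reference + translation
        return f"Source: {src}\nReference: {ref}\nTranslation: {hyp}"
    if template == "src":           # GEMBA: source + translation
        return f"Source: {src}\nTranslation: {hyp}"
    if template == "ref":           # GEMBA: reference + translation
        return f"Reference: {ref}\nTranslation: {hyp}"
    if template == "joint":         # GEMBA: source + reference + translation
        return f"Source: {src}\nReference: {ref}\nTranslation: {hyp}"
    raise ValueError(f"unknown template: {template}")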

📊 Meta-Evaluation

ThinMQM reduces thinking budgets while improving the evaluation performance of LRMs at different model scales.

(Figure: meta-evaluation results comparing thinking budgets and evaluation performance across model scales.)

✨ Acknowledgments

We thank the open-source community for the excellent tools and libraries that made this work possible.


📬 Contact

For questions, feedback, or collaboration opportunities, feel free to reach out.


📄 License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.


📝 Citation

If you find our model, data, or evaluation code useful, please kindly cite our paper:

@article{zhan2025thinmqm,
  title={Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost},
  author={Zhan, Runzhe and Huang, Zhihong and Yang, Xinyi and Chao, Lidia S. and Yang, Min and Wong, Derek F.},
  journal={ArXiv preprint},
  volume={2510.20780},
  year={2025},
  url={https://arxiv.org/abs/2510.20780},
}