Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

arXiv:2604.05083v1 Announce Type: new Abstract: While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategy, which severely limits reproducibility. To address these challenges, we propose OmniScore, a family of complementary, deterministic learned metrics built with small (<1B-parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models with large-scale synthetic supervision (~564k instances in 107 languages) and evaluated them on 8,617 manually annotated instances. The OmniScore family supports reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluations. We evaluate these models on question answering (QA), translation, and summarization in 6 languages. Our results demonstrate that lightweight, deterministic learned metrics provide a highly practical and scalable alternative to frontier LLMs. Our models and datasets can be found at https://huggingface.co/collections/QCRI/omniscore

Executive Summary

The article introduces OmniScore, a deterministic, multi-dimensional evaluation framework that leverages lightweight learned metrics (<1B parameters) to approximate the performance of Large Language Model (LLM) judges while addressing key reproducibility and scalability challenges. Trained on 564k synthetic instances across 107 languages and evaluated on 8,617 manually annotated instances, OmniScore demonstrates robust performance in QA, translation, and summarization tasks across six languages. Unlike LLM-based evaluators, OmniScore ensures low latency, consistency, and cost-efficiency, offering a practical alternative for automating generative text evaluation without sacrificing reliability. The authors provide open-access models and datasets on Hugging Face, facilitating broader adoption and further research in multilingual text evaluation.

Key Points

  • OmniScore is a deterministic, learned metric framework designed to replicate LLM-judge behavior while overcoming reproducibility and cost constraints.
  • Trained on a massive multilingual dataset (564k instances, 107 languages) with validation on 8,617 human-annotated instances, ensuring broad applicability.
  • Supports multi-dimensional evaluations (reference-based, source-grounded, hybrid) across QA, translation, and summarization in six languages, outperforming traditional model-based scoring in scalability and efficiency.
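The three evaluation settings listed above differ only in which inputs are available to the metric. A minimal sketch of how such an interface might be organized (the class and function names here are illustrative, not the authors' actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvalInstance:
    """One item to be scored; optional fields mirror the three settings."""
    hypothesis: str                  # the generated text being evaluated
    reference: Optional[str] = None  # gold output (reference-based)
    source: Optional[str] = None     # input question/document (source-grounded)

def evaluation_mode(inst: EvalInstance) -> str:
    """Pick the scoring setting based on which inputs are present."""
    if inst.reference and inst.source:
        return "hybrid"
    if inst.reference:
        return "reference-based"
    if inst.source:
        return "source-grounded"
    raise ValueError("need at least a reference or a source")

mode = evaluation_mode(EvalInstance(
    hypothesis="Paris is the capital of France.",
    source="What is the capital of France?",
))
print(mode)  # source-grounded
```

The hybrid setting simply exposes both the reference and the source to the scorer, which is what lets one metric family cover QA (source-grounded), translation (reference-based or hybrid), and summarization.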

Merits

Scalability and Efficiency

OmniScore’s lightweight architecture (<1B parameters) ensures low computational costs and high inference speed, making it feasible for large-scale deployment compared to frontier LLMs.
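A back-of-envelope comparison illustrates the gap. Using the standard approximation that a transformer forward pass costs roughly 2 FLOPs per parameter per token, a sub-1B scorer is two orders of magnitude cheaper per token than a 70B-class judge (the 70B figure is an assumed size for a frontier-scale model, not taken from the paper):

```python
def forward_flops_per_token(n_params: float) -> float:
    # Standard approximation: ~2 FLOPs per parameter per token (forward pass).
    return 2.0 * n_params

small_scorer = forward_flops_per_token(0.5e9)  # <1B OmniScore-style metric
frontier_llm = forward_flops_per_token(70e9)   # assumed 70B-parameter LLM judge

print(f"{frontier_llm / small_scorer:.0f}x fewer FLOPs per token")  # 140x
```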

Reproducibility and Consistency

Unlike LLM judges, which are sensitive to prompt design and aggregation strategies, OmniScore delivers deterministic outputs, enhancing reliability and repeatability in evaluations.
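The contrast can be sketched with a toy example: a fixed learned function returns the identical score on every call, whereas a sampled judge varies from run to run. The two scoring functions below are toy stand-ins for illustration, not the actual models:

```python
import hashlib
import random

def deterministic_score(text: str) -> float:
    """Toy stand-in for a learned metric: a fixed function of the input."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") / 2**32  # pseudo-score in [0, 1)

def sampled_judge_score(text: str) -> float:
    """Toy stand-in for a temperature > 0 LLM judge: varies per call."""
    return random.random()

text = "The cat sat on the mat."
det_scores = {deterministic_score(text) for _ in range(5)}
judge_scores = {sampled_judge_score(text) for _ in range(5)}
print(len(det_scores), len(judge_scores))  # 1 5 (judge scores almost surely all differ)
```

The deterministic scorer needs no repeated sampling or aggregation across runs, which is exactly what makes its evaluations repeatable.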

Multilingual Robustness

The model’s training on 107 languages and evaluation in six languages demonstrates its capability to handle diverse linguistic contexts, a critical advantage for global applications.

Cost-Effectiveness

By avoiding the high inference costs associated with LLMs, OmniScore provides a financially sustainable solution for automated text evaluation in research and industry.

Demerits

Limited Generalization to Low-Resource Languages

While trained on 107 languages, the evaluation in only six languages may not fully capture performance in low-resource or underrepresented linguistic contexts, potentially introducing bias.

Dependency on Synthetic Data

The reliance on synthetic supervision (564k instances) may limit the model’s ability to capture nuanced, real-world linguistic variations not represented in training data.

Fixed Evaluation Dimensions

The deterministic nature of OmniScore may restrict flexibility in adapting to novel or evolving evaluation criteria, unlike dynamic LLM-based judges that can incorporate new dimensions post-training.

Potential Overfitting to Specific Tasks

The evaluation focuses on QA, translation, and summarization, leaving open questions about performance in other generative tasks (e.g., creative writing, code generation) or domain-specific applications.

Expert Commentary

The introduction of OmniScore represents a significant advancement in the field of automated text evaluation, particularly in addressing the reproducibility and scalability challenges posed by LLM-based judges. The authors’ emphasis on deterministic metrics is commendable, as it aligns with the growing demand for transparent and reliable AI systems. However, the reliance on synthetic data and the limited evaluation scope in six languages warrant caution. Future work should explore the model’s performance in low-resource languages and its adaptability to novel evaluation dimensions. Additionally, while OmniScore offers a practical solution for current tasks, the rapid evolution of generative AI may necessitate periodic updates to evaluation frameworks to keep pace with emerging capabilities. The open-access release of the models and datasets is a laudable step toward fostering community-driven improvements and broader adoption.

Recommendations

  • Conduct further evaluations of OmniScore in low-resource languages and underrepresented linguistic contexts to ensure robustness and fairness across diverse languages.
  • Expand the evaluation framework to include additional generative tasks (e.g., code generation, creative writing) and domain-specific applications to assess generalizability beyond QA, translation, and summarization.

Sources

Original: arXiv - cs.CL