
From Performance to Purpose: A Sociotechnical Taxonomy for Evaluating Large Language Model Utility


Gavin Levinson, Keith Feldman

arXiv:2602.20513v1

Abstract: As large language models (LLMs) continue to improve at completing discrete tasks, they are being integrated into increasingly complex and diverse real-world systems. However, task-level success alone does not establish a model's fit for use in practice. In applied, high-stakes settings, LLM effectiveness is driven by a wider array of sociotechnical determinants that extend beyond conventional performance measures. Although a growing set of metrics capture many of these considerations, they are rarely organized in a way that supports consistent evaluation, leaving no unified taxonomy for assessing and comparing LLM utility across use cases. To address this gap, we introduce the Language Model Utility Taxonomy (LUX), a comprehensive framework that structures utility evaluation across four domains: performance, interaction, operations, and governance. Within each domain, LUX is organized hierarchically into thematically aligned dimensions and components, each grounded in metrics that enable quantitative comparison and alignment of model selection with intended use. In addition, an external dynamic web tool is provided to support exploration of the framework by connecting each component to a repository of relevant metrics (factors) for applied evaluation.

Executive Summary

This article proposes the Language Model Utility Taxonomy (LUX), a comprehensive framework for evaluating the utility of large language models (LLMs) in applied, high-stakes settings. LUX structures utility evaluation across four domains (performance, interaction, operations, and governance), each organized hierarchically into thematically aligned dimensions and components. The framework enables quantitative comparison and alignment of model selection with intended use, filling the absence of a unified taxonomy for assessing and comparing LLM utility across use cases. The authors also provide an external dynamic web tool that supports exploration of the framework by connecting each component to relevant metrics for applied evaluation.

Key Points

  • The LUX framework structures utility evaluation across four domains: performance, interaction, operations, and governance.
  • Each domain is organized hierarchically into thematically aligned dimensions and components.
  • The framework enables quantitative comparison and alignment of model selection with intended use.
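To make the hierarchical structure concrete, the sketch below models a LUX-style evaluation in Python: components hold normalized metric scores, dimensions aggregate components, and an overall utility is a weighted average across the four domains. The four domain names come from the abstract; every dimension, component, and metric name, and the mean-based aggregation, are illustrative assumptions, not the paper's actual taxonomy or scoring rule.

```python
from dataclasses import dataclass

# Hypothetical sketch of a LUX-style hierarchy: domain -> dimension -> component,
# with each component scored by metrics normalized to [0, 1].
# Only the four domain names are from the abstract; everything else is illustrative.

@dataclass
class Component:
    name: str
    metric_scores: dict  # metric name -> normalized score in [0, 1]

    def score(self) -> float:
        # Assumed aggregation: simple mean over the component's metrics.
        return sum(self.metric_scores.values()) / len(self.metric_scores)

@dataclass
class Dimension:
    name: str
    components: list

    def score(self) -> float:
        return sum(c.score() for c in self.components) / len(self.components)

def utility(domains: dict, weights: dict) -> float:
    """Weighted average utility across the four LUX domains."""
    total_w = sum(weights.values())
    return sum(
        weights[d] * sum(dim.score() for dim in dims) / len(dims)
        for d, dims in domains.items()
    ) / total_w

# Example use case that weights governance twice as heavily as other domains.
model_a = {
    "performance": [Dimension("accuracy", [Component("task_success", {"exact_match": 0.82})])],
    "interaction": [Dimension("usability", [Component("responsiveness", {"latency_norm": 0.70})])],
    "operations": [Dimension("cost", [Component("inference_cost", {"cost_norm": 0.60})])],
    "governance": [Dimension("safety", [Component("policy_adherence", {"refusal_quality": 0.90})])],
}
weights = {"performance": 1.0, "interaction": 1.0, "operations": 1.0, "governance": 2.0}
print(round(utility(model_a, weights), 3))  # prints 0.784
```

Re-running the same computation with different weight profiles is one plausible way to align model selection with intended use, as the abstract describes.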

Merits

Strength in Addressing a Critical Gap

The LUX framework addresses the absence of a unified taxonomy for assessing and comparing LLM utility, enabling more informed model selection and deployment in applied, high-stakes settings.

Demerits

Limited Scope on Model Training and Development

The LUX framework focuses primarily on evaluating LLM utility in applied settings, giving limited consideration to model training and development processes, which are also crucial for ensuring model fairness, transparency, and accountability.

Expert Commentary

The LUX framework is a significant contribution to the field of AI and machine learning, offering the unified structure for LLM utility evaluation that has so far been missing. While the framework is comprehensive and well-structured, evaluators should remain mindful of limitations and potential biases introduced during LLM training and development, which sit outside its scope. The article's emphasis on sociotechnical determinants underscores the need for ongoing research into metrics that capture them. As LLMs become increasingly integrated into various industries, LUX provides a practical tool for informing model selection and deployment decisions and for holding deployed models to standards of fairness, transparency, and accountability.

Recommendations

  • Future research should focus on developing more nuanced and contextualized metrics for evaluating LLM utility, taking into account the specific needs and requirements of different industries and use cases.
  • Developers and deployers of LLMs should adopt the LUX framework as a standard for evaluating and comparing LLM utility, ensuring that these models are deployed in a responsible and accountable manner.
