
Evaluating Austrian A-Level German Essays with Large Language Models for Automated Essay Scoring

Jonas Kubesch, Lena Huber, Clemens Havas

Abstract (arXiv:2603.06066v1): Automated Essay Scoring (AES) has been explored for decades with the goal of supporting teachers by reducing grading workload and mitigating subjective biases. While early systems relied on handcrafted features and statistical models, recent advances in Large Language Models (LLMs) have made it possible to evaluate student writing with unprecedented flexibility. This paper investigates the application of state-of-the-art open-weight LLMs for the grading of Austrian A-level German texts, with a particular focus on rubric-based evaluation. A dataset of 101 anonymised student exams across three text types was processed and evaluated. Four LLMs (DeepSeek-R1 32B, Qwen3 30B, Mixtral 8x7B, and Llama 3.3 70B) were evaluated with different contexts and prompting strategies. The LLMs reached a maximum of 40.6% agreement with the human rater on the rubric-provided sub-dimensions, and only 32.8% of final grades matched those given by a human expert. The results indicate that even though smaller models are able to use standardised rubrics for German essay grading, they are not accurate enough to be used in a real-world grading environment.
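In practice, rubric-based evaluation of this kind comes down to prompting an open-weight model with the marking rubric and the student text, then parsing structured scores from its reply. The paper's actual prompts, rubric wording, and sub-dimension names are not given in the abstract, so the following is only a minimal sketch of such a pipeline, assuming a local Ollama server with the model already pulled and a hypothetical four-dimension rubric:

```python
import json
import ollama  # assumes a local Ollama server with the model already pulled

# Hypothetical rubric; the paper's real sub-dimensions are not published in the abstract.
RUBRIC = (
    "Sub-dimensions: content, structure, style, linguistic correctness. "
    "Score each from 1 (poor) to 5 (excellent)."
)

def grade_essay(essay: str, model: str = "llama3.3:70b") -> dict:
    """Ask the model for rubric scores as JSON and parse its reply."""
    prompt = (
        "You are grading an Austrian A-level (Matura) German essay.\n"
        f"Rubric:\n{RUBRIC}\n\n"
        f"Essay:\n{essay}\n\n"
        "Reply with JSON only, e.g. "
        '{"content": 3, "structure": 4, "style": 2, "linguistic_correctness": 3}'
    )
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    # A robust pipeline would validate the reply; models do not always emit clean JSON.
    return json.loads(reply["message"]["content"])

print(grade_essay("Der vorliegende Kommentar behandelt das Thema ..."))
```

The abstract does not specify what the different contexts and prompting strategies were; plausibly they vary what the prompt includes (rubric only, rubric plus assignment brief or example gradings) and how the model is instructed to reason before scoring.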

Executive Summary

This article investigates the application of Large Language Models (LLMs) for Automated Essay Scoring (AES) of Austrian A-level German texts. The authors processed a dataset of 101 anonymized student exams across three text types and evaluated four state-of-the-art open-weight LLMs under different contexts and prompting strategies. The models reached at most 40.6% agreement with the human rater on the rubric-provided sub-dimensions, and only 32.8% of final grades matched those of a human expert. The study concludes that although smaller open-weight models can work with standardized rubrics for German essay grading, they are not accurate enough for real-world grading environments, and further research is needed to improve their accuracy and reliability.
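The 40.6% and 32.8% figures read as exact-match agreement rates, though the abstract does not spell out the computation. Under that assumption, the metric is simply the fraction of items on which the model's score equals the human rater's, as in this sketch with made-up score lists:

```python
from typing import Sequence

def exact_agreement(model_scores: Sequence[int], human_scores: Sequence[int]) -> float:
    """Fraction of items where the model's score equals the human rater's."""
    if len(model_scores) != len(human_scores):
        raise ValueError("score lists must be the same length")
    return sum(m == h for m, h in zip(model_scores, human_scores)) / len(human_scores)

# Illustrative values only, not the paper's data:
model = [3, 4, 2, 5, 3, 3, 4, 2]   # model scores for one rubric sub-dimension
human = [3, 3, 2, 4, 3, 2, 4, 2]   # human rater's scores for the same essays
print(f"agreement: {exact_agreement(model, human):.1%}")  # prints "agreement: 62.5%"
```

Exact match is a strict criterion on an ordinal scale; chance-corrected statistics such as quadratically weighted kappa, which are common in AES benchmarks, would credit near-misses and might paint a less bleak picture.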

Key Points

  • The application of LLMs for AES in evaluating Austrian A-level German texts is explored.
  • Four state-of-the-art LLMs were evaluated with different contexts and prompting strategies.
  • The results indicate that LLMs are not accurate enough for real-world grading environments: at most 40.6% agreement with the human rater on rubric sub-dimensions and 32.8% on final grades.

Merits

Contributions to the field of AES

The study contributes concrete agreement figures for open-weight LLMs grading German essays against a standardized rubric, adding empirical grounding to the ongoing debate on the use of AI in educational assessment.

Demerits

Limited generalizability

The study's findings may not be generalizable to other languages, educational contexts, or types of student writing, limiting the broader implications of the research.

Methodological limitations

The relatively small dataset (101 exams), a limited set of prompting strategies, and agreement measured against a single human rater may have affected the study's results and conclusions, highlighting the need for further methodological refinement.

Expert Commentary

The study's findings are significant, as they highlight the limitations of current LLMs in accurately grading student writing. While the results are not surprising, given the complexity and nuance of human language, they underscore the need for further research and development in AES. The use of LLMs in educational assessment raises important questions about bias, fairness, and the role of human evaluation. As AI-generated assessments become increasingly prevalent, it is essential to address these concerns and ensure that AI is used in a way that supports, rather than undermines, educational equity and excellence.

Recommendations

  • Further research is needed to develop more accurate and reliable LLMs for AES, with a focus on improving the models' ability to handle nuance and context.
  • There is a need for careful consideration and regulation of the use of AI-generated assessments in education, to ensure that AI is used in a way that supports educational equity and excellence.

Sources

  • arXiv:2603.06066v1, "Evaluating Austrian A-Level German Essays with Large Language Models for Automated Essay Scoring": https://arxiv.org/abs/2603.06066