arXiv:2406.16510

Large Language Models in Student Assessment: Comparing ChatGPT and Human Graders

Published on Jun 24, 2024
AI-generated summary

Large language models such as GPT-4 can grade master-level essays with mean scores close to those awarded by human graders, but they show a risk-averse grading pattern and low interrater reliability, indicating limitations in adapting to nuanced grading criteria.

Abstract

This study investigates the efficacy of large language models (LLMs) as tools for grading master-level student essays. Utilizing a sample of 60 essays in political science, the study compares the accuracy of grades suggested by the GPT-4 model with those awarded by university teachers. Results indicate that while GPT-4 aligns with human grading standards on mean scores, it exhibits a risk-averse grading pattern and its interrater reliability with human raters is low. Furthermore, modifications to the grading instructions (prompt engineering) do not significantly alter AI performance, suggesting that GPT-4 primarily assesses generic essay characteristics such as language quality rather than adapting to nuanced grading criteria. These findings contribute to the understanding of AI's potential and limitations in higher education, highlighting the need for further development to enhance its adaptability and sensitivity to specific educational assessment requirements.
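
The contrast the abstract draws between agreement on mean scores and low interrater reliability can be illustrated with a small sketch. The example below is not from the paper: the grade values and the 0-5 scale are invented, and quadratically weighted Cohen's kappa is used here only as one common interrater-reliability statistic for ordinal grades.

```python
# Illustrative sketch only: quantifying agreement between an LLM grader and a
# human grader. Grade values are hypothetical, NOT data from the paper.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical grades for the same essays on an assumed 0-5 scale.
human_grades = np.array([3, 4, 2, 5, 3, 4, 1, 3, 4, 2])
gpt4_grades  = np.array([3, 3, 3, 4, 3, 4, 2, 3, 3, 3])  # clustered toward the middle

# Mean scores can look similar even when individual gradings diverge.
print("Mean (human):", human_grades.mean())
print("Mean (GPT-4):", gpt4_grades.mean())

# Quadratically weighted kappa penalizes large disagreements more heavily,
# which suits ordinal grade scales; low values signal poor interrater reliability.
kappa = cohen_kappa_score(human_grades, gpt4_grades, weights="quadratic")
print("Weighted Cohen's kappa:", round(kappa, 3))
```

In this toy case the two mean scores are close while the weighted kappa is modest, mirroring the pattern the study reports: a risk-averse grader that compresses grades toward the middle can match the average without agreeing essay by essay.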
