Is GPT-4 Alone Sufficient For Automated Essay Scoring?: A Comparative Judgment Approach Based On Rater Cognition

Seungju Kim, Meounggun Jo · L@S '24: Eleventh ACM Conference on Learning @ Scale · 2024

Large Language Models (LLMs) have shown promise in Automated Essay Scoring (AES), but their zero-shot and few-shot performance often falls short of state-of-the-art models and human raters. However, fine-tuning LLMs for each specific task is impractical given the variety of essay prompts and rubrics used in real-world educational contexts. This study proposes a novel approach combining LLMs and Comparative Judgment (CJ) for AES, using zero-shot prompting to choose the better of two essays. We demonstrate that the CJ method surpasses traditional rubric-based scoring when scoring essays with LLMs.
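The comparative-judgment idea can be sketched in a few lines: collect pairwise win/loss decisions from a judge and rank essays by wins. This is a minimal illustration, not the paper's implementation; the `judge` function here is a hypothetical stand-in (comparing essay length) for the zero-shot GPT-4 prompt that would make the choice in the paper's setup.

```python
from itertools import combinations

# Hypothetical judge: return 0 if the first essay is stronger, else 1.
# In the paper's setup this decision would come from zero-shot prompting
# an LLM with both essays; here we compare length purely for illustration.
def judge(essay_a, essay_b):
    return 0 if len(essay_a) >= len(essay_b) else 1

def comparative_judgment(essays):
    """Round-robin pairwise comparisons; rank essays by win count."""
    wins = {i: 0 for i in range(len(essays))}
    for i, j in combinations(range(len(essays)), 2):
        winner = (i, j)[judge(essays[i], essays[j])]
        wins[winner] += 1
    # Sort essay indices from most to fewest wins.
    return sorted(wins, key=wins.get, reverse=True)

essays = [
    "Short draft.",
    "A somewhat longer essay with more detail.",
    "Mid-length text here.",
]
print(comparative_judgment(essays))  # indices from strongest to weakest
```

In practice, full round-robin comparison is quadratic in the number of essays, so CJ systems typically sample a subset of pairs and fit a latent-trait model (e.g. Bradley-Terry) instead of raw win counts.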
