#4687

Presentation: Classroom application of CALL

Identifying LLM-Generated Writing Through Authorship Familiarity

Time not set

Large language models (LLMs) have created new opportunities for writing support while simultaneously challenging the integrity of text-based educational assessment. Existing authorship verification methods, such as stylometric analyses and automated classifiers, provide probabilistic judgments that may allow plausible deniability for claimed authors. However, true authors are typically familiar with their text in ways that surrogate authors are not. Building on this insight, the present study introduces the Content Restoration Authorship Familiarity Test (CRAFT), which assesses authorship by asking claimed authors to recall and reconstruct elements of texts they identify as their own.

The CRAFT battery was piloted with 60 university students in Seoul. Participants handwrote a 16-sentence text in class, and an LLM then generated a second text based on that content. About 30 minutes later, participants completed two CRAFT tests, one for each text. In each text, four sentences had been inserted and five words replaced with synonyms, and participants attempted to identify the insertions and restore the original wording. Responses were scored on a 14-point rubric allowing partial credit for morphologically related forms.
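The scoring logic described above can be sketched in code. This is a minimal illustration, not the study's actual rubric: the point values, the exact-match rule, and the crude prefix-based stand-in for "morphologically related forms" are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of CRAFT-style item scoring with partial credit.
# Point values and the prefix-match heuristic are illustrative
# assumptions, not the study's actual 14-point rubric.

def score_item(response: str, original: str,
               full: float = 1.0, partial: float = 0.5) -> float:
    """Award full credit for an exact match, partial credit when the
    response shares an initial word stem with the original (a crude
    stand-in for 'morphologically related forms'), zero otherwise."""
    r, o = response.strip().lower(), original.strip().lower()
    if r == o:
        return full
    # Naive stemming: compare the first four characters.
    if r[:4] and r[:4] == o[:4]:
        return partial
    return 0.0

def score_test(responses, originals):
    """Total score across all restoration items in one CRAFT test."""
    return sum(score_item(r, o) for r, o in zip(responses, originals))
```

For example, a participant who writes "restore" where the original word was "restored" would receive partial rather than zero credit under this scheme.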

Descriptive analyses showed non-overlapping performance distributions between the human-authored and LLM-generated conditions, suggesting that authorship familiarity can provide a reliable behavioral signal for distinguishing genuine authorship from AI-assisted text generation.