Poster Presentation: Technology-mediated feedback
Preliminary Research on the Effects of Generative AI Feedback on L2 Speaking Complexity, Accuracy, and Fluency During Task Repetition
This preliminary study examines how generative AI feedback influences L2 speaking development during task repetition (TR), focusing on complexity, accuracy, and fluency (CAF). Thirty-one beginner–intermediate-level freshmen and sophomores at a Japanese junior college recorded monologues of up to one minute on their smartphones. The recordings were automatically transcribed with Whisper, and GPT-4o generated written feedback that highlighted grammatical issues and presented a model text. After reading the feedback, learners repeated the speaking task.
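The transcribe-then-feedback loop described above could be sketched as follows. This is a minimal illustration, not the study's actual implementation: the model identifiers ("whisper-1", "gpt-4o") and the prompt wording are assumptions, since the abstract does not report them.

```python
# Hypothetical sketch of the recording -> transcription -> feedback pipeline.
# Model names and prompt text are illustrative assumptions, not the study's
# actual configuration.

def build_feedback_prompt(transcript: str) -> str:
    """Ask the model to flag grammatical issues and supply a model text,
    mirroring the two feedback components described in the abstract."""
    return (
        "A beginner-to-intermediate EFL learner produced this one-minute "
        "monologue:\n\n"
        f"{transcript}\n\n"
        "1. Point out the grammatical issues in the transcript.\n"
        "2. Provide a corrected model version of the monologue."
    )

def feedback_for_recording(audio_path: str) -> str:
    """Transcribe a learner recording and return written AI feedback."""
    from openai import OpenAI  # requires the `openai` package and an API key

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": build_feedback_prompt(transcript)}],
    )
    return reply.choices[0].message.content
```

Learners would read the returned feedback before repeating the speaking task; how the feedback was formatted and delivered in the study is not specified here.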
Paired-samples t-tests indicated significant changes across several CAF measures. For complexity, mean length of AS-units increased from 8.1 to 9.9 words (≈22% increase, p < .01, d = 0.65). For accuracy, the percentage of error-free AS-units improved from 62.3% to 75.9% (≈22% increase, p < .05, d = 0.40), and errors per 100 words decreased from 7.0 to 4.0 (≈43% reduction, p < .05, d = 0.49). For fluency, filled pauses decreased from 0.9 to 0.5 (≈44% reduction, p < .05, d = 0.45). Speech rate (WPM), repetitions, and self-corrections did not show significant change.
Previous task repetition research typically reports the strongest gains in fluency. In contrast, these findings suggest that generative AI intervention may shift learners' attention toward linguistic form, yielding larger improvements in complexity and accuracy than in fluency.
Yuichi Ishikawa is a faculty member at a private junior college in Tokyo, Japan. His academic interests include CALL, AI-mediated feedback, second language speaking development, and spoken grammar. His work explores how technology and pedagogy can be combined to support more effective spoken English learning.