#4514

Presentation: Technology-mediated feedback

AI-Mediated Feedback in L2 Writing: Evidence from a Classroom-Based Complexity Study


Interest in generative AI has prompted debate about its role in second language (L2) writing instruction, particularly in relation to feedback practices. While prior research has focused on learner perceptions (Sun & Mizumoto, 2025) or global writing quality (Muñoz, Nassaji, & Bello Carrillo, 2025), relatively few classroom-based studies have examined how AI-mediated feedback affects specific dimensions of syntactic development.

This presentation reports findings from a 15-week quasi-experimental study conducted in a Japanese university EFL writing course, focusing on sentence-level syntactic complexity. Students completed timed writing tasks on identical topics throughout the semester and were assigned to one of four conditions: (1) AI-assisted revision using a custom-designed feedback tool, (2) instructor-provided feedback, (3) self-correction, or (4) a no-feedback control group. Syntactic complexity was measured using established indices, including mean length of sentence (MLS), mean length of T-unit (MLT), and mean length of clause (MLC).
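For readers unfamiliar with these indices, each is a simple ratio of total word count to the number of a given syntactic unit. A minimal sketch, assuming unit counts have already been obtained from a syntactic annotator (the function and variable names here are illustrative, not the study's actual instrument):

```python
# Illustrative only: length-based syntactic complexity indices.
# Inputs are per-sample counts, assumed to come from prior annotation.

def complexity_indices(n_words, n_sentences, n_t_units, n_clauses):
    """Return (MLS, MLT, MLC) for one writing sample."""
    return (
        n_words / n_sentences,  # mean length of sentence (MLS)
        n_words / n_t_units,    # mean length of T-unit (MLT)
        n_words / n_clauses,    # mean length of clause (MLC)
    )

# e.g. a 120-word sample with 8 sentences, 10 T-units, and 15 clauses:
mls, mlt, mlc = complexity_indices(120, 8, 10, 15)
```

Higher MLT with stable clause-based measures, as reported below, indicates elaboration within T-units rather than added subordination.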

Results showed a significant time-by-group interaction for MLT, with the AI-assisted group demonstrating greater gains in sentence-level elaboration than the instructor-feedback and control groups. No comparable advantages were observed for subordination-based measures. These findings suggest that AI-mediated feedback can function as a targeted post-instructional scaffold, supporting specific dimensions of syntactic development in L2 writing.