Sessions / Technology-mediated feedback

Preliminary Research on the Effects of Generative AI Feedback on L2 Speaking Complexity, Accuracy, and Fluency During Task Repetition #4611

Time Not Set

This preliminary study examines how generative AI feedback influences speaking development during task repetition (TR), focusing on complexity, accuracy, and fluency (CAF). Thirty-one beginner–intermediate-level freshmen and sophomores at a Japanese junior college produced monologues of up to one minute on smartphones. Recordings were automatically transcribed using Whisper, and GPT-4o generated written feedback highlighting grammatical issues and presenting a model text. After reading the feedback, learners repeated the speaking task.
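The transcribe-then-feedback loop described above can be sketched in a few lines, assuming the open-source `whisper` package and the OpenAI Python SDK; the prompt wording, function names, and wiring are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the workflow: transcribe a smartphone recording with Whisper,
# then ask GPT-4o for written feedback plus a model text.
# (Prompt wording and function names are assumptions, not the study's setup.)

def build_feedback_prompt(transcript: str) -> str:
    """Compose a request for grammar-focused feedback and a model version."""
    return (
        "A learner produced this one-minute English monologue:\n"
        f"---\n{transcript}\n---\n"
        "1. Point out the grammatical issues.\n"
        "2. Provide a corrected model version of the monologue."
    )

def feedback_for_recording(audio_path: str, whisper_model, client) -> str:
    """Transcribe with Whisper, then request GPT-4o feedback on the result."""
    transcript = whisper_model.transcribe(audio_path)["text"]  # Whisper ASR
    response = client.chat.completions.create(                 # OpenAI SDK call
        model="gpt-4o",
        messages=[{"role": "user", "content": build_feedback_prompt(transcript)}],
    )
    return response.choices[0].message.content
```

Learners would then read the returned feedback before repeating the task.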

Paired-samples t-tests indicated significant changes across several CAF measures. For complexity, mean length of AS-units increased from 8.1 to 9.9 words (≈22% increase, p < .01, d = 0.65). For accuracy, the percentage of error-free AS-units improved from 62.3% to 75.9% (≈22% increase, p < .05, d = 0.40), and errors per 100 words decreased from 7.0 to 4.0 (≈43% reduction, p < .05, d = 0.49). For fluency, filled pauses decreased from 0.9 to 0.5 (≈44% reduction, p < .05, d = 0.45). Speech rate (WPM), repetitions, and self-corrections did not show significant change.
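The reported analysis — a paired-samples t-test with a Cohen's d on the pre/post differences — can be sketched with the standard library alone; the scores below are invented for illustration and are not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired-samples t statistic and Cohen's d on the pre/post differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    d = mean(diffs) / stdev(diffs)   # effect size of the within-subject change
    t = d * sqrt(len(diffs))         # t = mean(diff) / (sd(diff) / sqrt(n))
    return t, d

# Invented mean-length-of-AS-unit scores for six learners, task 1 vs. task 2.
pre  = [7.5, 8.0, 8.3, 7.9, 8.6, 8.1]
post = [9.2, 9.8, 10.1, 9.5, 10.4, 9.9]
t, d = paired_t_and_d(pre, post)
```

The resulting t is compared against the critical value for n − 1 degrees of freedom to obtain the significance level.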

Previous task repetition research typically reports stronger gains in fluency. In contrast, these findings suggest generative AI intervention may shift learners’ attention toward linguistic form, producing larger improvements in complexity and accuracy than in fluency.

Designing AI-Mediated Feedback Activities for Academic Writing #4619

Time Not Set

In recent years, generative AI tools such as ChatGPT have become increasingly present in EFL and ESL writing classrooms, leading many instructors to consider how these tools can be used effectively and ethically to support student writing. This poster describes the design and implementation of two ChatGPT-based feedback activities used in an academic writing course for first-year Japanese university students in an intensive English program. The second activity was developed iteratively, in response to student feedback on the first. In both activities, students first wrote their essay drafts by hand in class without the use of AI or other assistance, and then entered a common prompt into ChatGPT to elicit corrective feedback on their grammar and vocabulary use. Students reviewed the AI-generated suggestions and applied selected feedback during guided revision. The poster outlines this process and draws on data from post-task student surveys to highlight student perceptions of effectiveness, areas of difficulty, and practical considerations for classroom use. Participants are invited to reflect on how this approach can be further refined to integrate generative AI into writing instruction while maintaining a human element and supporting the development of students’ writing skills and voice.

Giving oral feedback on recorded presentations using Loom #4621

Time Not Set

Loom is a video messaging tool with screen recording capabilities. It offers a free account for educators, with the capacity to record simultaneously from your camera and screen. In education, teachers can utilize it in various ways, including providing authentic recorded feedback on their students’ live or recorded oral presentations. In my teaching context, students in my Elective Communicative English class generally have low language proficiency. They often get very nervous about short speaking tests and blank out. I therefore trialed a new test format in which students record their speaking. I play each recording on my screen, pausing it to give authentic feedback, and then send the student a link to the feedback video. Students reported feeling less stressed. They also liked the individual feedback on pronunciation, their use of English, and the content. In this presentation, participants will be introduced to the basics of Loom, how to make a free educator account, and an example of how I used Loom. Other possible uses of Loom will also be discussed. It is hoped that participants will benefit from the session by discovering a new digital tool that could be helpful in their teaching contexts.

Between Curiosity and Resistance: AI‑Assisted Writing Feedback Before Curricular Adoption #4625

Time Not Set

Many language programs are debating whether and how to use AI-assisted feedback in writing instruction as these tools become common in higher education. This talk reports the preliminary phase of a larger project examining the pre-adoption landscape of AI feedback on student writing at an English-medium instruction university. We focus on students' informal AI practices before curricular integration, faculty beliefs, and institutional uncertainty. Guided by sociocultural theory, we use a qualitative-dominant mixed-methods design: semi-structured faculty interviews, institutional document analysis, and student surveys on out-of-class AI use for writing and feedback. Interview and document data are analyzed thematically and through discourse analysis; survey data are summarized with descriptive statistics. Preliminary results suggest distinct faculty stance profiles, widespread but uneven student use of AI-generated comments outside instruction, and significant discrepancies between instructors' goals and students' actions. The study shows how AI becomes a contested mediating tool before formal guidelines are established and provides CALL researchers and practitioners with empirically grounded insight into the circumstances, conflicts, and ethical issues that influence AI adoption, informing more feasible and pedagogically sound AI-mediated feedback interventions.

Using NotebookLM as a Reflective Tool for Oral Fluency Development #4629

Time Not Set

This presentation introduces a classroom-ready approach to using NotebookLM as a reflective tool to support the development of oral fluency in a university EFL context. Students record short audio journals several times per week, producing regular, low-stakes spoken output. They upload either a transcript or the audio file itself, which NotebookLM can transcribe; the resulting transcript serves as the basis for analysis. Best practices for generating accurate transcripts will be briefly outlined. While audio files may be uploaded directly, pre-generated transcripts are typically faster and more efficient for classroom use. Rather than requesting general corrections, students ask for feedback focused on recurring grammatical errors across their output. NotebookLM can analyze a single journal, a specific week, or all accumulated transcripts, allowing learners to control the scope of feedback. This flexibility helps students identify recurring problem areas, track changes over time, and select actionable points for reflection. By analyzing transcripts of their own spoken language, learners engage in metalinguistic reflection grounded in authentic L2 output. NotebookLM does not replace instruction or speaking practice but serves as a reflective guide, supporting a more intentional path toward oral fluency.

Ways to use AI for writing classes without just copy/paste, and student survey results #4632

Time Not Set

AI chatbots have been shown to have positive effects on students' learning outcomes, mainly due to the delivery of quick feedback, yet other studies have found that both students and educators have mixed perceptions of AI feedback, preferring it in supplementary form alongside educator-delivered feedback. How can teachers teach AI usage, give both kinds of feedback, and develop AI critical thinking? This presentation explains a task that attempts to tackle this problem. Participants (N=25) were second-year university students taking a required English writing class over one 15-week semester. In week 1 (W1), students were taught how to use ChatGPT efficiently. They then completed weekly writing tasks during W2-W13, in which they were required to make a mind map, write for 10 minutes without technology, then edit their writing with AI. The teacher also provided individual and general class-wide feedback. In W14-W15, a survey was conducted regarding AI instruction, usage, and opinions and preferences on both types of feedback. The survey results indicated positive perceptions of using AI and interest in further AI instruction and use for writing, and confirmed a preference for AI feedback alongside educator feedback.

Utilizing An AI Chatbot to Assist with the Development of Learner Self-Regulation #4636

Time Not Set

In this presentation, a project investigating the use of a generative AI chatbot to support the development of self-regulated learning (SRL) skills among university EFL learners will be discussed. Core SRL processes—including goal setting, planning, and strategy selection—often require co-regulation and scaffolding from instructors. However, such scaffolding can potentially undermine learner autonomy, while variables such as class size and schedules can make it difficult to provide assistance when needed. One alternative is shifting the locus of co-regulation to AI systems capable of generating context-specific learning goals and study plans, allowing students to generate bespoke goals tailored to individual needs and interests. Using the poe.com platform, a chatbot was configured and implemented to guide students in creating, revising, and refining learning goals. In parallel, students maintained study logs to document progress, identify obstacles, and adjust goals or strategies in response to difficulties encountered. Initial findings suggest that students perceived the chatbot as helpful for goal generation and initially engaged with its recommendations, although engagement decreased over time. Mixed-methods data drawing on sources including chatbot records, study logs, and a reflection questionnaire will be presented to illustrate how AI-supported interventions may be effectively tailored to the needs of language learners.

AI Feedback Pilot: Science Majors' English Proficiency with CBI Content #4644

Time Not Set

Science university students show moderate willingness to improve their English proficiency yet hesitate to speak due to performance anxiety. However, instructors often rely on subjective evaluations, limiting data-driven instruction amid tight budgets and time shortages. Surveys reveal student demand for field-aligned texts matching proficiency levels, while instructors prioritize their own preferences or school requirements. Beginning in April, this study applies objective AI evaluations so students know where to improve. English Language Speech Assistant (ELSA) for Schools measures pronunciation, writing, and comprehension gains using science materials. Two second-year science classes complete progress monitoring from Week 1 to Week 16 across English skills. Weekly ELSA feedback supports ongoing language practice through custom tasks. These tasks, tailored to science, technology, engineering, and mathematics (STEM) content through content-based instruction (CBI), engage discipline-specific interest with speaking practice for presentations and optional comprehension activities. Week 1 includes a proficiency test and a pre-survey on confidence, followed by weekly ELSA practice. Week 16 features a post-survey and an ELSA assessment comparing gains against the baseline. Data include Common European Framework of Reference (CEFR) baselines, pre- and post-surveys, and scores in pronunciation, fluency, and grammar. This study documents CBI–ELSA integration for STEM majors, quantifying improvements through ELSA’s visualization, targeted feedback, and dashboards showing results with greater accuracy.

An Integrated AI Analysis of Grammatical and Lexical Patterns with Feedback in Academic Writing #4645

Time Not Set

This presentation examines the potential of the free version of GPT-4o as an AI-assisted tool for cross-textual error analysis in EFL academic writing. The dataset consisted of ten academic essays of approximately 600 words each, written by second-year undergraduate students enrolled in an elective English course at a public university in Japan. The essays, focusing on disease-related risk factors, were analyzed collectively using structured prompts designed to elicit systematic error categorization, frequency reporting, and simple feedback. The analysis identified error types widely documented in SLA research on Japanese learners: article misuse accounted for approximately 33% of all identified errors, followed by preposition errors (18%) and subject–verb agreement errors (15%). Rather than discovering new categories, the study evaluates whether ChatGPT can rapidly aggregate patterns across multiple texts, quantify their relative distribution, and prioritize high-impact errors for instruction. From a CALL perspective, the primary contribution of this study lies in instructional mediation. Through the use of AI for common writing error detection and correction, teachers can spend more time focusing on deeper dimensions of writing, such as the clarity of arguments, coherence, and logical structure.
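The aggregation step described above — pooling error tags across essays and ranking categories by relative frequency — can be sketched as follows; the tags and counts are hypothetical, not the study's data.

```python
from collections import Counter

# Hypothetical per-essay error tags, as a structured prompt might elicit them.
per_essay_errors = [
    ["article", "article", "preposition"],
    ["subject-verb agreement", "article"],
    ["preposition", "article"],
]

# Pool tags across all essays, then express each category's share as a
# percentage, with the highest-impact categories first.
pooled = Counter(tag for essay in per_essay_errors for tag in essay)
total = sum(pooled.values())
distribution = {tag: round(100 * n / total, 1) for tag, n in pooled.most_common()}
```

Ranking by share is what lets an instructor prioritize the few categories that account for most errors.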

AI-Mediated Revision and Reflective Writing #4649

Time Not Set

Generative AI writing tools are increasingly used by university students, and their pedagogical potential in EMI classrooms is often discussed in relation to language accuracy and proficiency development. This presentation reports on the use of an AI-based writing and revision workspace in an undergraduate EMI class that emphasizes content explanation rather than English instruction. In this class, students first read course texts and produced short written explanations interpreting the author’s main argument. They then used the AI workspace to reorganize or reframe their texts. Rather than focusing on grammatical correction, students compared their original and AI-revised versions to examine shifts in meaning, argumentative emphasis, and stance. The AI workspace was used as part of the revision process, with reflection centered on textual changes rather than tool usage itself. A brief pre–post reading task explored whether students became more attentive to stance and argumentative emphasis after the activity. Classroom observations and student reflections suggest that AI-mediated revision made interpretive changes more visible, prompting students to reconsider how clearly and faithfully they represented the author’s position. The presentation discusses how such activities may inform assessment practices by foregrounding interpretive judgment and stance awareness as observable components of reading development.

Implementing Real-time, Rubric-based Peer Assessment in EFL Presentation Courses: Evidence on Alignment, Comment Quality, and Perceptions #4654

Time Not Set

Peer assessment can improve learners’ performance (Double et al., 2020), yet its impact may be reduced when feedback is delayed (Shute, 2008) and peer comments vary in quality (Topping, 1998). This paper reports a PhD pilot of a low-cost workflow that collects rubric-based peer feedback during academic presentations and provides an immediate consolidated report. In two university EFL presentation courses in Japan, students (N=89) used Google Forms/Sheets to rate peers on an analytic rubric and add comments; the instructor completed the same rubric as a benchmark.

Data include peer and teacher scores, peer comment texts, and a post-task questionnaire. Analyses examine peer–teacher alignment by criterion, comment specificity/usefulness, and student perceptions of usability and fairness. Results show high participation and rapid feedback delivery. Peer scores were generally lenient, with the largest gaps on language accuracy and delivery, and closer alignment with the teacher’s ratings was associated with more specific, actionable suggestions.
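One way to operationalize the by-criterion alignment check described above is a mean peer-minus-teacher gap for each rubric criterion, with positive values indicating peer leniency; the rubric criteria and scores below are invented for illustration.

```python
from statistics import mean

# Invented rubric scores for one presenter (1-5 scale).
teacher = {"content": 4, "language accuracy": 3, "delivery": 3}
peer_ratings = [
    {"content": 4, "language accuracy": 4, "delivery": 4},
    {"content": 5, "language accuracy": 4, "delivery": 4},
]

# Mean peer score minus the teacher's benchmark score, per criterion.
gaps = {
    crit: mean(r[crit] for r in peer_ratings) - score
    for crit, score in teacher.items()
}
```

In a spreadsheet workflow like the one described, the same computation is a per-column average in Google Sheets compared against the instructor's row.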

The presenter will also outline future research directions integrating GenAI support as a Trainer (to develop higher-quality, criterion-referenced feedback) and a Synthesizer (to consolidate peer inputs into actionable reports) (Zhan et al., 2025). The session will benefit language teachers and CALL researchers seeking scalable speaking assessment and peer feedback practices.

Arizona AI: A Longitudinal Study of AI-based AWCF Outcomes in Japanese EFL #4682

Time Not Set

This poster reports final results from a 2025 Panasonic Education Foundation–funded longitudinal study examining the sustained impact of Arizona AI, a researcher-developed AI-mediated automated written corrective feedback (AWCF) tool, on English paragraph writing among 120 second-year Japanese high school students. The study represents the conclusion of interim findings previously presented at the 2025 JALT International Conference in Tokyo, extending earlier analyses to a full instructional cycle.

Using repeated baseline–midline–endline measures, the study analyzed changes in human-rated writing performance across multiple rubric categories, including structure, grammar, transitions, and sentence complexity. Results show steady improvement over time, with the strongest gains occurring when AI feedback was embedded within a structured classroom workflow rather than used as a stand-alone intervention.

The findings speak directly to the conference theme “Prevail or Fail?” by identifying a central risk in CALL adoption: AI feedback systems that succeed technically but fail pedagogically when learner scaffolding and feedback literacy are insufficient. Implications are discussed for sustainable AI integration in EFL writing instruction, rubric-aligned prompt design, and teacher mediation strategies that enable AI feedback to prevail beyond novelty effects.

Learning with AI: How AI tools influence learner autonomy in EFL contexts #4689

Time Not Set

Artificial Intelligence (AI) has increasingly influenced various aspects of education, including the learning of English as a Foreign Language (EFL). With the increasing implementation of AI tools, such as ChatGPT, chatbots, and grammar checkers, in EFL learning, learners’ engagement with language practice and self-directed study has evolved. This research explores the impact of AI applications on learner autonomy in EFL contexts, focusing on both their facilitating and constraining roles. The study was conducted with undergraduate EFL students at a Vietnamese university who regularly used AI-assisted tools to support their English learning. Based on a mixed-methods approach using data from questionnaires, learning logs, and semi-structured interviews, the results indicate that AI enhances learner autonomy by enabling independent practice, supporting self-assessment, and providing immediate feedback. However, the findings also suggest potential challenges, including over-reliance on AI-generated content and reduced critical engagement with learning tasks. In addition, issues related to students’ digital literacy and the responsible use of AI tools emerged during the research process. These results highlight the importance of guiding EFL learners to use AI tools critically and effectively, so that technology can support the development of sustainable and lifelong learner autonomy.

LingoLesson: A Platform for Authentic Speaking Practice and Teacher-Led Assessment #4481

Time Not Set

This presentation introduces LingoLesson, a new platform designed to address three pressing challenges in language education. First, as agentic AI browsers increasingly enable students to complete traditional blended-learning tasks with minimal engagement, LingoLesson preserves learner authenticity by requiring genuine spoken and recorded interaction. Second, because oral production normally leaves no record, it can be difficult for teachers to track progress, assess performance, or identify when students slip into their L1. LingoLesson solves this through video and audio submissions that are automatically transcribed and organized for easy review. Third, LingoLesson uses AI to assist rather than replace teacher decision-making, keeping educators at the center of lesson design and assessment. The session will demonstrate how the LingoLesson Editor allows teachers to create rich multimedia lessons, maintain full agency over sequencing and content, and convert speaking tasks into visible, trackable learning. Participants will gain insight into how the platform supports communicative competence, enhances classroom practice, and fits into existing curricula. In short, LingoLesson offers a practical, classroom-ready solution for modern language programs. Think of it as a cross between Flip and Google Forms, purpose-built for language learning.

Eigo.AI: Assisting and Enhancing Human Language Learning #4482

Time Not Set

AI should assist and enhance human-centered language teaching and learning, not replace it. Established principles in second language acquisition remain central: learners need sustained exposure to comprehensible input, meaningful opportunities for output and interaction, structured fluency development, and timely feedback. Eigo.AI is designed around these principles. The platform offers a comprehensive library of AI-generated, human-proofread lessons across proficiency levels. Each lesson integrates listening, reading, speaking, and writing through seven structured activity types, ensuring balanced skill development. Students receive immediate feedback on pronunciation, writing, and discussion performance, while teachers maintain full oversight through detailed tracking tools.

Eigo.AI also addresses a practical constraint in most programs: time. Reaching advanced proficiency typically requires more than 2,500 hours of focused study, far beyond what classroom instruction alone can provide. The platform extends structured learning beyond class hours while keeping progress visible and measurable. Teachers can monitor engagement, review student output, and intervene when needed. In this way, AI supports informed instructional decisions without displacing teacher expertise. This session will demonstrate how Eigo.AI combines established pedagogy with practical implementation, offering institutions a scalable and classroom-ready solution that strengthens learner development and preserves teacher agency.

A One-stop AI-powered Solution for Personalized Cantonese Learning and 1-to-N Immersive Oral Training Integrated Systems #4485

Time Not Set

This presentation describes an innovative initiative aimed at making Cantonese learning more accessible and effective for diverse learners in Hong Kong, the Greater Bay Area, and worldwide. Integrating advanced AI technologies with user-friendly digital platforms, the project supports non-local and local students, professionals, and residents in developing Cantonese skills for academic, social, and professional success. The project comprises two main components. iCanLearn, an AI-driven self-learning platform tailored for native Putonghua speakers and other learners, offers interactive lessons on pronunciation, vocabulary, and grammar, with instant feedback and a smart chatbot for real-life conversational practice. CanNTalk, the second component, is an AI-powered oral training platform that simulates group discussions with multiple AI avatars, reflecting varied personalities and communication styles to mimic authentic social and workplace scenarios. This immersive approach builds learners’ confidence and fluency in spoken Cantonese. Together, these platforms are expected to form a “learn-and-train” ecosystem, enabling seamless transitions between learning and practice, while providing teachers with insights for personalized support. The flexible design allows use in universities, corporate training, and by the general public, amplifying its positive impact across the community.

AI-Mediated Feedback and Draft Revision: A Classroom-Based Study of IELTS Task 1 Writing #4495

Time Not Set

This classroom-based study examines the effectiveness of AI-mediated feedback in supporting draft revision in IELTS Academic Writing Task 1 (table description). The study aims to determine whether structured ChatGPT-assisted revision leads to measurable improvements in grammatical accuracy, lexical resource, and the use of linking words. Fifty third-year English-major students produced an initial draft independently and subsequently revised their texts using guided prompts to obtain AI-generated feedback. The study adopted a within-subjects design. Draft 1 and Draft 2 were compared across grammatical accuracy, lexical resource, and linking devices. A post-task questionnaire was used to explore students’ perceptions of AI-supported revision, including perceived usefulness, confidence, and concerns about over-reliance. Quantitative findings indicate reductions in grammatical errors and increased lexical variation and use of linking expressions in revised drafts. Survey responses suggest improved confidence in language use, although some participants reported concerns about potential overreliance on AI suggestions. The study highlights the pedagogical potential of AI-mediated feedback while emphasizing the need for guided use and feedback literacy in CALL-based writing instruction.

Rethinking Final Exams: Exploring Online Assessment to Reduce Cheating and Support Critical Thinking #4509

Time Not Set

Paper-based final exams remain common in many higher-education EFL contexts in Uzbekistan, yet they are often associated with high levels of cheating, extensive marking time, and frequent post-exam grade disputes. These challenges raise concerns about fairness, teacher workload, and the limited opportunities for students to demonstrate meaningful language use.

This poster presents an ongoing classroom-based project that explores the use of online final exams as an alternative to traditional paper-based assessment. The main aim is to examine whether online assessment can reduce opportunities for cheating, promote critical thinking, and make the final exam process more manageable for teachers. The project is situated in a university EFL context where final exams are being redesigned from paper-based tests to online, open-ended, task-based assessments.

Final exam tasks focus on real-life issues and require students to analyze problems, express opinions, and propose solutions using English. Exams are conducted in an open-book format to discourage memorization. An analytic rubric covering task achievement, critical thinking, language use, and organization is shared with students in advance. AI-supported tools assist with feedback and language analysis, while teachers retain control over final grading.

The poster discusses early observations, challenges, and practical implications for CALL-based final assessment in higher education.

AI-Mediated Feedback in L2 Writing: Evidence from a Classroom-Based Complexity Study #4514

Time Not Set

Interest in generative AI has prompted debate about its role in second language (L2) writing instruction, particularly in relation to feedback practices. While prior research has focused on learner perceptions (Sun & Mizumoto, 2025) or global writing quality (Muñoz, Nassaji, & Bello Carrillo, 2025), relatively few classroom-based studies have examined how AI-mediated feedback affects specific dimensions of syntactic development.

This study reports findings from a 15-week quasi-experimental study conducted in a Japanese university EFL writing course, focusing on sentence-level syntactic complexity. Students completed timed writing tasks on identical topics throughout the semester and were assigned to one of four conditions: (1) AI-assisted revision using a custom-designed feedback tool, (2) instructor-provided feedback, (3) self-correction, or (4) a no-feedback control group. Syntactic complexity was measured using established indices, including mean length of sentence (MLS), mean length of T-unit (MLT), and mean length of clause (MLC).
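Of the indices listed, MLS is the simplest to compute: total word tokens divided by sentence count. A naive sketch is below (MLT and MLC additionally require T-unit and clause segmentation, which is usually done with a syntactic parser rather than regular expressions).

```python
import re

def mean_length_of_sentence(text: str) -> float:
    """MLS: total word tokens divided by sentence count (naive splitting)."""
    # Split on terminal punctuation; drop empty trailing fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Count word tokens (letters and apostrophes only, for simplicity).
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) / len(sentences)
```

Established tools such as Lu's L2 Syntactic Complexity Analyzer compute the full set of indices from parsed text.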

Results showed a significant interaction between time and group for MLT, with the AI-assisted group demonstrating greater gains in sentence-level elaboration than the instructor-feedback and control groups. No comparable advantages were observed for subordination-based measures. These findings suggest that AI-mediated feedback can function as a targeted post-instructional scaffold supporting specific dimensions of syntactic development in L2 writing.

Automatic Corrective Feedback on L2 Speaking: A Systematic Review of CALL Research #4516

Time Not Set

This study reports on an ongoing systematic review of empirical research on automatic corrective feedback for second language (L2) speaking in CALL contexts. As AI-mediated and ASR-based feedback tools are increasingly integrated into speaking practice, research findings remain dispersed across feedback designs, outcome measures, and instructional settings. In addition, due to the rapidly evolving nature of language learning technologies, there is a continual need for up-to-date research syntheses. Following PRISMA-informed procedures, an initial search was conducted to identify studies examining automated feedback systems targeting L2 speaking. Preliminary screening has identified a subset of studies addressing speaking-related outcomes, with most research focusing on overall speaking proficiency, grammar, and pronunciation. In contrast, fluency-related outcomes appear to be comparatively underexplored. Studies are being coded for feedback modality, targeted speaking construct, learning context, and outcome domain, including both linguistic development and learner-related variables such as motivation, confidence, and willingness to communicate. By synthesizing methodological patterns and reported outcomes, the study aims to identify gaps and design challenges in current CALL research on automatic speaking feedback. The findings will inform CALL researchers and practitioners about how automatic feedback is currently operationalized and highlight directions for future research on technology-mediated support for L2 speaking.

Developing AI-Enhanced Language Learning Web Applications Through No-Code Platforms #4523

Time Not Set

This presentation introduces AI-supported web applications developed through no-code platforms to enhance English language learners’ productive and receptive skills. No-code environments enable educators and researchers to design and deploy interactive applications without programming expertise, opening new possibilities for computer-assisted language learning (CALL) and meaningful AI integration into pedagogical contexts. The project utilizes Vercel’s v0 platform and Google AI Studio for rapid prototyping of AI-powered learning applications. The current prototype allows learners to summarize or express opinions on short news articles through speech or text input. Additionally, the project includes developing a web application for conversation practice with an AI chatbot on specific news articles with feedback. AI automatically evaluates learners’ responses based on semantic relevance, lexical and grammatical accuracy, and—for audio input—pronunciation and fluency, providing individualized feedback to enhance their comprehension and production abilities. Key objectives include exploring how AI feedback can complement traditional classroom instruction through timely, adaptive responses. The development process demonstrates how no-code tools enable educators to focus on instructional design and assessment logic rather than programming syntax, democratizing access to advanced educational technology. The presentation addresses critical implementation challenges including data security, model accuracy, and feedback reliability.

Helping Learners Notice Their Speech Through AI Video-Synchronized Feedback #4526

Time Not Set

When practicing speaking with a practice partner, language learners rarely remember what they actually said, or how they said it, after a session ends. Without this recall, feedback loses context. To address this, the researchers developed Pecha, a web application that records learners speaking via webcam, transcribes their speech with speaker diarization, and generates AI feedback across customizable categories like grammar and cohesion. What sets Pecha apart is that feedback links directly to video timestamps, so learners see replays of the exact moments errors occur. Combined with both written and spoken advice, learners are able to reflect on and improve their speaking skills. This design draws on Schmidt's Noticing Hypothesis, which holds that learners must consciously attend to linguistic features for acquisition to occur. Timestamp-synchronized feedback makes errors salient in a way that delayed correction cannot. The approach also aligns with Video-Stimulated Recall methodology, where reviewing recorded performance promotes deeper reflection and self-correction. This session will demonstrate the application, discuss its theoretical basis, and share early observations from use at universities in Japan and the United States, with students practicing English and Japanese respectively. Attendees will leave with practical ideas for integrating AI-assisted feedback into autonomous speaking practice.

Learners’ engagement with the affordances of dialogic multimodal feedback for university students' L2 academic writing: An embedded single case study in the Vietnamese context #4565

Time Not Set

Given the potential of dialogic multimodal feedback (DMF) for L2 writing and the lack of research on students’ engagement with DMF, this study was conducted through the lens of Affordance Theory to explore how university students engage with the affordances (learning possibilities) of DMF (screencast feedback with dialogic features such as commenting or reacting) for their L2 academic writing, and the factors influencing their engagement. This case study took place in a class of 31 students taking an EMI course involving L2 writing activities at a Vietnamese university. The three major data collection instruments were a student questionnaire, in-depth student interviews, and interaction logs with DMF. Findings reveal that students actively engaged with various affordances of DMF behaviorally, cognitively, agentively, and emotionally in different ways, such as rewatching, adjusting speed, and proactively interacting with DMF by asking questions. Emotional engagement was both a prominent form of engagement and a mediating factor, interacting with other engagement dimensions. Interestingly, although most students reported active engagement with DMF, the interaction logs revealed certain dissimilarities between students’ perceived and actual engagement. Finally, cultural factors and learner differences are two potential factors influencing students’ engagement with DMF, especially in Vietnam's collectivist culture.

Moving towards pedagogy of care: A case study of Vietnamese university students' emotional engagement with interactive video feedback for L2 academic writing #4566

Time Not Set

Since teaching is a form of emotional labor, understanding the affective dimension of student engagement with digital feedback is essential. Following the pedagogy of care, this study investigated students’ emotional engagement with interactive video feedback (IV-feedback) for L2 academic writing (screencast feedback allowing students to interact directly with the videos via commenting or reacting features). This case study was conducted in a class of 31 students taking an EMI course involving L2 academic writing activities at a Vietnamese university. The three main data collection instruments were a student questionnaire, in-depth student interviews, and teaching journals. The findings show that IV-feedback allows learners and lecturers to embrace the pedagogy of care in a culturally responsive way. Most students expressed more positive emotions toward IV-feedback than text feedback, largely thanks to exposure to the teacher's tone, voice, and social presence. Notably, students felt safer and more willing to interact with IV-feedback than with face-to-face feedback, as IV-feedback creates a safer dialogic space for raising questions, especially in Vietnam's collectivist, face-saving culture. These positive emotions were perceived as motivation to revise their writing, potentially leading to improved L2 writing skills. However, some students also expressed certain negative emotions, suggesting the need for careful implementation of IV-feedback.

Digital Peer Evaluation to Support Reflection and Motivation in Performance Tasks #4573

Time Not Set

This study explores a CALL-based approach to peer evaluation in a Japanese public (junior and senior) high school EFL context. In performance tasks such as presentations, peer evaluation has traditionally relied on paper forms, largely to ensure that the student audience pays attention. In practice, these forms are often collected and set aside. Limited class time and logistical difficulties make it challenging to return peer comments to students, even when many appear to invest genuine effort in completing them.

Peer evaluation was redesigned using Google Forms distributed through Google Classroom. Moving to a digital platform allowed feedback to be easily aggregated and returned as individualized evaluation summaries. Although teacher involvement in managing the process remains necessary, the time required was substantially reduced.

Classroom observations suggest that access to compiled peer feedback may support formative assessment by providing concrete information students can use for reflection and improvement. Student perceptions of peer evaluation are examined through survey data. Results indicate that both conducting peer evaluations and receiving compiled feedback helped most students improve their subsequent performances and increased their motivation. A majority also expressed a preference for digital over paper forms. Implications for CALL-supported feedback practices in secondary EFL settings are discussed.

Beyond the Output: Scaffolding Critical Engagement with GenAI across two Academic English Assignments #4580

Time Not Set

This presentation reports on a four-stage pedagogical intervention in which 50 EAP students collaborated with generative AI across two assignments. Critical engagement was fostered through: (1) L1/L2 drafting without AI; (2) using AI for ideas/language while preserving personal voice through review; (3) soliciting AI feedback via a teacher-designed prompt aligned with rubric criteria; and (4) producing a final draft after critically reviewing AI feedback. Moving beyond linguistic outcomes, this study examines how students navigated human-AI collaboration, drawing on Ng et al.'s (2021) four-aspect framework of AI literacy (know and understand, use and apply, evaluate and create, and ethical issues). Data include students’ written assignments, stimulated recall interviews with seven student groups stratified by AI feedback solicitation patterns and grade, and students’ evidence-based reflections on future AI use intentions. This longitudinal design tracks how initial experiences shape evolving AI literacy. The presentation argues that "success" in CALL must be redefined, shifting from measuring polished products to valuing the development of AI literacy dimensions: prompt crafting, evaluative judgment, agential integration, and ethical awareness.

Reference: Ng, D.T.K., Leung, J.K.L., Chu, S.K.W., & Qiao, M.S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

AI-Mediated Writing Behaviors and the Development of Metacognitive Revision Skills in L2 Writing in Higher Education #4585

Time Not Set

The rapid development of Artificial Intelligence (AI) tools in recent years has substantially transformed learners’ writing behaviors in English as a Foreign Language (EFL) contexts, shifting the focus from traditional linear drafting to recursive, AI-mediated writing processes. This emerging workflow involves iterative text revision, idea composition, AI-assisted consultation, and critical acceptance or dismissal of algorithmic suggestions. While these evolving behaviors highlight AI’s role in enhancing lexical output and reducing cognitive load, their long-term impact on learners' metacognitive development remains a critical gap. In this study, I investigate how these AI-mediated writing behaviors influence learners’ metacognitive agency, focusing specifically on self-correction behaviors as an indicator of cognitive engagement and modification patterns.

To examine the effect of the AI-mediated writing process on students’ metacognitive revision of argumentative essay writing, I adopted a mixed-methods approach to explore learners' perceptions and their writing outcomes. The findings reveal the cognitive mechanisms underlying AI-assisted correction in this writing task, offering pedagogical insights into the evolution of human agency and automated feedback in the EFL classroom. The challenges and the implications of utilizing AI in EFL writing classrooms will be further discussed.

Image Generators as Feedback for Descriptive Writing: Successes and Challenges #4586

Time Not Set

Advances in AI-powered photorealistic image-generating tools provide the opportunity to make descriptive writing visible to students and teachers alike. EFL students often struggle to write detailed descriptions because of a lack of opportunity to write for an audience, delayed feedback from instructors, or an overabundance of caution. This practice-based presentation describes one instructor’s attempt, based on an idea from Warner (2024), to use Adobe Firefly to improve student writing in an academic writing course at a private Japanese university. Using stock photographs from Unsplash, 74 students were tasked with describing a target image across four writing and feedback cycles. The short feedback cycle enabled students to notice, with more precision, how their own writing had been interpreted. The task succeeded in motivating the participants, and most students wrote increasingly detailed descriptions. However, the AI image generators failed to help students with certain aspects of grammar, spelling, and adjective word order. Additionally, for future success, custom-made programmes may be needed to prevent rewarding generic descriptions over specific and accurate ones. This presentation will interest those who wish to use AI to improve student writing, and feedback on how to develop the idea will be welcomed.

How Students Prompt AI in EFL Writing: Evidence from a Japanese University Course #4600

Time Not Set

Generative AI tools such as ChatGPT are increasingly used in university EFL writing classes, but teachers need clearer guidance on how to help students use AI in ways that support learning without replacing student writing. This six-week classroom-based study examined how students phrase requests to AI for writing help in a first-year Japanese university EFL academic writing course (N = 40). Students completed a process-writing assignment, timed writing tasks without AI in Weeks 1 and 6, and brief surveys. During AI-permitted stages, students submitted short prompt logs documenting their ChatGPT interactions. Analysis showed that students most often used AI for language-level support, especially grammar correction, vocabulary help, and translation. Higher-order prompts related to organization, explanation, and evaluation were less frequent but associated with deeper revisions and clearer understanding of appropriate AI use. Requests for text generation were rare but aligned with greater uncertainty about acceptable use. These findings suggest that the educational value of AI depends less on tool access and more on how students are guided to prompt it. This session will interest EFL writing instructors seeking practical, classroom-based guidance on shaping student AI interactions to support learning.

GenAI vs. Human Feedback on L2 PhD Students’ English Academic Writing #4605

Time Not Set

Generative artificial intelligence (GenAI) is transforming the L2 writing landscape, particularly in feedback provision. While some studies report positive effects of GenAI feedback on formal aspects of undergraduate writing, few have examined its impact on the academic writing of postgraduates. Addressing this gap, the present study investigates how GenAI- and human-generated feedback affect L2 doctoral students’ English writing. By analyzing scores for two versions of academic paper abstracts (N = 71) and comparing the quantity and quality of feedback, the study found the following: First, both forms of feedback led to overall improvements in writing scores. Second, AI feedback tended to focus on sentence-level issues, whereas human feedback addressed more discourse-level concerns. This distinction was especially evident in the treatment of the second and third of the five major rhetorical moves in abstract writing: identifying the research gap and stating the research aim. Third, AI feedback was typically affirmative and instilled confidence in revisions, while human feedback often consisted of questions, hints, and suggestions that served as pedagogical guides, fostering critical thinking and learning throughout the writing process. The findings reaffirm the essential roles of both human involvement and modern technologies in enhancing the quality of English language education, underscoring their pedagogical significance.

Limits and Challenges of AI-Assisted Academic Writing Revision Among Medical Students #4607

Time Not Set

AI tools have been promoted as effective supports for second language academic writing, yet classroom-based evidence suggests that their educational impact depends on instructional design. This study reports preliminary findings on how first-year students at a private medical school in Japan used ChatGPT while revising a scientific research paper in an English academic writing course. Students revised the Introduction of an Introduction-Methods-Results-Discussion-formatted paper using ChatGPT and submitted their conversation histories. Example prompts, written in English, were provided to encourage structured analysis and editing rather than direct rewriting. Ninety conversation histories were analyzed to identify patterns of prompt use, and informal interviews supplemented interpretation. Although most students revised their writing, nearly two-thirds struggled to use the example prompts as intended. Common issues included partial prompt use, combining multiple writing tasks in a single prompt, or abandoning the provided prompts in favor of short, general instructions. Interviews suggested that English-language prompts, perceived prompt length, and limited understanding of how prompts function contributed to these outcomes. Overall, the findings suggest that weakly guided AI integration may reinforce surface-level revision behaviors, and that prompt design must account for learner proficiency, language preference, and AI literacy if AI tools are to support academic writing development.