Sessions / Presentation
Decreasing Anxiety for Increased Willingness to Communicate during COIL Interactions #4608
The presenters have been Collaborative Online International Learning (COIL) partners for over eight years, providing their Japanese and Taiwanese university students with opportunities to engage in meaningful intercultural dialog with each other. They have created asynchronous interactions (e.g., social media posts), out-of-class synchronous communication (e.g., interviews), out-of-class business-oriented projects (e.g., developing a new product and making a commercial to sell it), and synchronous class-to-class communication. After each interaction, the students reflected on their exchanges. The presenters have consistently found that discomfort in speaking with new interlocutors (i.e., “I’m shy.”) hinders Willingness to Communicate (WTC), and the presenters’ experiences have further reinforced the importance of creating a sense of closeness to decrease these barriers.
The presenters will briefly review the format of the interactions and potential communication barriers when students interact in COIL settings. They will reference these concepts as they discuss their written data (i.e., Japanese students’ weekly reflections, final reports, end-of-semester questionnaires) and verbal data (i.e., Taiwanese and Japanese students’ verbal feedback and discussions after the interactions). This interactive presentation will conclude by focusing on activities and procedures that helped decrease students’ anxiety and increase their WTC.
Unlocking Office365: A Teacher-Friendly Graph API Pipeline for Exporting Student Work at Scale #4609
Teachers rely on Microsoft Teams and OneNote to collect student writing, reflections, and project work, yet extracting that work for grading, feedback, or research is often burdensome. This presentation demonstrates a privacy-conscious “Diary Pipeline” that uses the Microsoft Graph API to export OneNote pages into a CSV/XLSX schema with consistent headers and rich metadata, including page and section identifiers, student ID, timestamps, photo URLs and counts, diary text content and statistics, feedback fields, and vocabulary entries.
Attendees will see how a single class notebook can be exported quickly, then reused for multiple purposes: faster grading and rubric scoring, portfolio content, longitudinal tracking, and formative feedback or LLM-assisted review. Student diaries are used as a model for varied forms of data export. The focus is on non-invasive data gathering that lightens the burden on both instructors and students.
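As a rough illustration of the flattening step such a pipeline performs, the Python sketch below maps OneNote page records onto a flat CSV schema with consistent headers. The field names and schema columns are illustrative assumptions, not the presenters' actual code, and the Graph API network calls themselves (e.g., GET /me/onenote/pages) are omitted:

```python
import csv
import io

# Consistent column headers for the export schema (illustrative only;
# not the presenters' actual column set).
HEADERS = [
    "section", "page_title", "student_id", "created", "modified",
    "photo_count", "diary_text", "word_count", "feedback", "vocab_entries",
]

def flatten_page(page: dict) -> dict:
    """Map one OneNote page record (shaped like a Graph API response)
    onto the flat CSV schema. Hypothetical keys such as "student_id",
    "photo_urls", and "vocab" stand in for app-specific metadata."""
    text = page.get("diary_text", "")
    return {
        "section": page.get("parentSection", {}).get("displayName", ""),
        "page_title": page.get("title", ""),
        "student_id": page.get("student_id", ""),
        "created": page.get("createdDateTime", ""),
        "modified": page.get("lastModifiedDateTime", ""),
        "photo_count": len(page.get("photo_urls", [])),
        "diary_text": text,
        "word_count": len(text.split()),  # simple whitespace statistic
        "feedback": page.get("feedback", ""),
        "vocab_entries": "; ".join(page.get("vocab", [])),
    }

def pages_to_csv(pages: list) -> str:
    """Serialize a list of page records to CSV text with one header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=HEADERS)
    writer.writeheader()
    for page in pages:
        writer.writerow(flatten_page(page))
    return buf.getvalue()
```

Once pages are in this flat form, the same export feeds each downstream use the presenters describe: rubric scoring, portfolios, and longitudinal tracking.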
Re-examining AI-Generated Educational Content: Accuracy, Validity, and Practical Use #4612
Many educators utilize AI tools to generate classroom materials quickly and efficiently. AI-created readings, worksheets, and listening tasks can provide level-appropriate input rich in target grammar and vocabulary, particularly in ESL contexts where teachers often adapt informational texts for learners. However, the increasing use of such materials raises an important question: how trustworthy is AI-generated educational content, and has the quality changed with recent updates? Research shows that exposure to inaccurate information can lead learners to internalize false knowledge through processes such as source confusion and the illusory truth effect. If AI-generated materials contain subtle errors, students may unintentionally learn misinformation, which is particularly problematic for medical and science students. This study revisits and extends earlier work on the value of AI-generated classroom materials. Subject-matter experts, including doctors and professors, first identified topics within their disciplines, after which reading texts were generated by multiple AI models and evaluated. The consultants assessed the materials for factual accuracy, pedagogical suitability, and potential risk for use as simplified informational texts for language learners. This follow-up study offers insights into when AI-generated materials support learning and when they undermine it. The findings aim to help educators use AI tools more critically when developing instructional materials.
Can ChatGPT Grade Like a Human? Examining the Reliability and Validity of AI-Assisted Assessment in Academic Writing #4613
Recent research has explored the use of generative AI for writing assessment, yet evidence regarding its reliability and validity remains mixed. This study examines whether ChatGPT (GPT-4o) can function as an analytic assessment tool for source-based academic essays written by postgraduate research students. A dataset of 122 essays, originally scored by two experienced human raters, was reevaluated by ChatGPT using a standardized analytic rubric (e.g., Idea Presentation, Academic Style, Citation, and Mechanics) and a zero-shot prompting approach. Non-parametric analyses and descriptive statistics were used to examine score alignment, ranking patterns, and domain-specific differences.
Results show that while human and AI scores occupied a similar overall range, ChatGPT consistently awarded higher scores and did not rank essays in ways that aligned with human judgment. Significant differences emerged across most rubric domains: human raters scored higher on idea presentation, whereas ChatGPT assigned higher scores for academic style and citation practices; no significant difference was found for mechanics. Repeated AI scoring demonstrated high internal consistency, with variability concentrated in meaning-dependent domains such as argument clarity and source integration. Overall, the findings indicate that generative AI shows promise for reliable form-focused assessment but remains limited in evaluating rhetorical and conceptual quality.
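To make the scoring setup concrete, here is a minimal sketch of how a zero-shot analytic prompt might be composed and a model reply validated. The four rubric domains come from the abstract; the 1-5 band, prompt wording, and JSON reply format are illustrative assumptions, not the study's actual instrument:

```python
import json

# Rubric domains named in the study; scale and wording are assumptions.
DOMAINS = ["Idea Presentation", "Academic Style", "Citation", "Mechanics"]

def build_zero_shot_prompt(essay: str, domains=DOMAINS, scale=(1, 5)) -> str:
    """Compose a zero-shot analytic scoring prompt: only the rubric and
    the target essay, with no example essays or anchor scores."""
    lo, hi = scale
    lines = [
        f"Score the essay below on each rubric domain from {lo} to {hi}.",
        "Return only a JSON object mapping each domain to an integer score.",
        "Domains: " + ", ".join(domains),
        "",
        "Essay:",
        essay,
    ]
    return "\n".join(lines)

def parse_scores(reply: str, domains=DOMAINS) -> dict:
    """Validate a model reply against the rubric: every domain present
    and every score an integer. Raises ValueError otherwise."""
    scores = json.loads(reply)
    missing = [d for d in domains if d not in scores]
    if missing or not all(isinstance(v, int) for v in scores.values()):
        raise ValueError(f"malformed reply, missing={missing}")
    return scores
```

Repeating the same prompt over multiple runs and comparing the parsed score vectors is one simple way to estimate the internal consistency the study reports.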
Challenges and Opportunities of Using ChatGPT in English for Specific Purposes: A Vietnamese University Case Study #4615
This study examines ChatGPT as a supportive tool for an ESP lecturer teaching first-year business students (B2-C1 proficiency, majors in Banking and Finance, Accounting, Business Administration, Logistics and Supply Chain Management) at Foreign Trade University, Vietnam. The tech-supported ESP course was delivered primarily via Microsoft Teams. The mixed-methods case study explores opportunities and challenges of integrating ChatGPT in teaching methods and assessment design (e.g., diverse question generation, formative feedback), alongside its potential influence on student outcomes in CALL contexts. Data were collected from 81 students, including Microsoft Teams interaction logs and materials, lecturer observations and reflective notes, student reflections, an open online quiz (n=50), and a final exam (n=50). Descriptive statistics showed quiz mean scores of 18.86/30 improving to 37.46/50 on the exam (average 37.2% gain). Qualitative findings highlighted ChatGPT's role in bridging content gaps, fostering critical thinking, and enabling personalized support, but also concerns regarding overreliance and ethical issues such as balanced use and academic integrity. While preliminary results suggest thoughtful integration of ChatGPT can enhance ESP instruction, the absence of control groups limits causal claims about learner progress. Future iterations will incorporate comparative cohorts. This case offers practical insights for practitioners navigating AI's opportunities and limitations in classrooms.
A Framework for Semester-Long AI Integration in Entry-Level Spanish Instruction #4617
This presentation reflects on a semester-long implementation framework for embedding three custom-built AI tools into SPA001, the entry-level Spanish module for absolute beginners at Xi’an Jiaotong-Liverpool University. TAR-íA (a practice generator), AI-migo (a virtual tutor responding to queries and providing writing feedback), and XiSPA (an AI-driven conversational partner) were introduced from Week 3 and updated throughout the semester to align with curricular progression. Building on an earlier iteration where the tools were introduced only at semester’s end, this implementation embedded them systematically throughout the course, positioning AI as an integral component of classroom instruction rather than as isolated classroom activities or supplements for autonomous practice. Following a brief tool overview, the presentation details the rationale and sequencing for introducing each tool, the main activity types conducted, and how tool functionality was aligned with curriculum and beginner-level constraints. It offers critical practitioner reflection on challenges encountered—including time constraints and managing occasional inaccuracies—alongside observations from classroom practice and areas for refinement. This presentation documents and reflects on practice-based implementation decisions to inform future iterations and offer educators transferable insights into embedding custom AI tools as integrated components of beginner language curricula.
Designing a Short-Cycle Virtual Exchange for Deep Learning: A Three-Phase Pedagogical Model #4618
This presentation introduces a three-phase model for designing a short-cycle Virtual Exchange (VE) that supports meaningful learning within limited time frames. The model is based on a VE project conducted between Japanese and Ukrainian university students in May-June 2025. Thirteen Japanese students enrolled in an Intercultural Communication course and thirteen Ukrainian students in a Second Language course participated in two synchronous sessions and collaborated online in small groups to complete a joint project. The collaborative task involved analyzing language used in media reporting on a current international conflict. Students examined how tone, lexical choices, and framing shape audience perspectives of violent events recorded through social media. Through this process, participants developed awareness of how linguistic choices influence their interpretation of conflict-related narratives. Drawing on analysis of student surveys, project work, and both immediate and six-month post-VE feedback, a three-phase pedagogical model was developed consisting of: (1) foundational knowledge and guided analysis instruction, (2) autonomous collaborative project work, and (3) reflective synthesis and sharing. The presentation will demonstrate how this structural approach supported both skill development and reflective engagement during a four-week exchange, and how the experience continued to influence students' perspectives even six months after the Virtual Exchange.
Beyond the Summary Trap: Reframing Literacy and Competency in AI-Mediated Language Learning #4620
In the era of large language models (LLMs), Computer-Assisted Language Learning (CALL) has shifted from tool-use to complex human-AI partnership. However, anecdotal evidence highlights that the prevalence of fragmented AI outputs—the "summary trap"—poses a significant threat to critical literacy and deep cognitive and emotional engagement. This talk outlines a pilot study applying a dual-framework approach to AI integration in higher education, focusing specifically on university contexts in the Philippines and Japan. By distinguishing between AI Literacy (metaliteracy and critical discernment of algorithmic bias) and AI Competency (the strategic orchestration of LLMs within a Human-in-the-Loop workflow), this research explores how learners can maintain language learner agency. Integrating perspectives from Human-Computer Interaction (HCI) and Cultural Anthropology, this talk explores the local concepts of Ginhawa (Philippines) and Ikigai (Japan) as ethical anchors for digital mindfulness, highlighting the necessity of "whole-text" pedagogy to combat cognitive narrowing, ensuring that language learners develop the analytical thinking and resilience required in a generative AI labor market.
Giving oral feedback on recorded presentations using Loom #4621
Loom is a video messaging tool with screen recording capabilities. It offers a free account for educators, with the capacity to record simultaneously from your camera and screen. In education, teachers can utilize it in various ways, including providing authentic recorded feedback on their students’ oral presentations or recorded presentations. In my teaching context, students are generally of low language ability, despite taking my Elective Communicative English class. They often get very nervous about short speaking tests and often blank out. I tried a new test format, having them record their speaking. I then play their recording on my screen, pausing it to give authentic feedback, and then send them the link to the video. Students reported feeling less stressed. They also liked the individual feedback on pronunciation, their use of English, and the content. In this presentation, participants will be introduced to the basics of Loom, how to make a free educator account, and an example of how I used Loom. Other possible uses of Loom will also be discussed. It is hoped that participants will benefit from the session by discovering a new digital tool that could be helpful in their teaching contexts.
Investigating the Effects of Digital and Traditional Storytelling on Thai EFL Learners’ Speaking Ability, Attitudes, and Speaking Anxiety #4622
Storytelling in EFL is usually defined as the practice of telling a story for language learning purposes, like grammar and vocabulary acquisition or for the development of listening and speaking skills (Bland, 2015). A subset, digital storytelling, is usually defined as the combination of traditional narrative with electronic multimedia tools like videos (Bull & Kajder, 2005; Porter, 2005; Rule, 2010). The purpose of this study was to examine Thai EFL students’ English speaking ability, attitudes, and speaking anxiety after the implementation of digital storytelling and storytelling activities in class. Sixty-four first-year non-English major students at Rajamangala University of Technology Krungthep, Thailand, participated in the study and were divided into two groups: digital storytelling and storytelling. A pre- and post-speaking test, attitude questionnaires, and speaking anxiety questionnaires were used as research instruments. The findings revealed that students’ English speaking scores significantly improved after the implementation of both storytelling activities. In addition, students demonstrated positive attitudes toward both instructional approaches. The results also showed that storytelling (x̄ = 2.09, SD = 0.34) and digital storytelling (x̄ = 3.35, SD = 0.30) helped reduce students’ speaking anxiety.
AI-Supported Contextualised Scenario Practice in Tourism English: Learning Outcomes and Learner Feedback in an ESP Course #4623
Technology-enhanced education enables more interactive, personalised language learning. This project examined how GenAI (e.g., ChatGPT-4o) supported university students without a tourism background in an ESP Tourism English course. Although academically strong, they were novices in tourism discourse and workplace routines, creating a mismatch between course expectations (service-oriented interactional competence) and prior textbook-driven learning with limited oral rehearsal. This gap lowered perceived relevance, increased speaking anxiety, and led to uneven role-play participation. Therefore, the intervention targeted motivation, learner autonomy, and listening–speaking performance for tourism-service encounters. Over 18 weeks, students completed AI-supported brainstorming, itinerary planning, collaborative tasks, and scenario-based conversational practice in workplace simulations, producing recordings and task outputs. The design provided scalable, repeatable interaction practice with immediate language support beyond what teacher feedback alone can offer in limited class time.
Qualitative triangulation of student artefacts, recordings, AI dialogue logs, and reflections indicated improvements in four areas. Students demonstrated stronger oral expression and situational responsiveness, greater appropriacy in service interactions, higher motivation due to reduced speaking anxiety, and improved AI literacy for revising language and organising information. Student feedback also noted vocabulary gaps and the need for clearer AI-use guidance and a better balance between AI practice and human interaction.
The Revised Bloom’s Taxonomy Framework in EFL: Pedagogical Applications for Students’ Critical Thinking Development #4624
The rise of EFL students’ GenAI-like writings has raised concerns about their critical thinking abilities. The revised Taxonomy of Educational Objectives (Krathwohl, 2002) offers a recognized framework for cultivating those cognitive abilities. While existing papers focus on the use of GenAI tools and their connection with Bloom’s taxonomy, a critical gap remains in understanding how to enhance EFL students’ critical thinking, such as evaluation and creation, in the age of GenAI. Grounded in a socio-cognitive framework, this study examines how EFL students’ higher-order thinking develops through writing tasks when spontaneous student-instructor conversations, Interactive Oral Assessments (IOAs) guided by the revised taxonomy, are used. A focus group of ten second-year English majors in an EFL Writing course participated in this qualitative project. Data from entry and exit meetings, audio recordings, semi-structured interviews, writing samples, and AI chat history were analyzed. One finding suggests that the IOA intervention positively influences the students’ ability to analyze and evaluate AI-generated content as critical steps in scaffolding their writing assignments. Additionally, emphasizing higher-order thinking processes increased original argumentation in student work. The study indicates that the IOA is a potential assessment tool for raising EFL students’ awareness of how to use AI critically.
Between Curiosity and Resistance: AI‑Assisted Writing Feedback Before Curricular Adoption #4625
Many language programs are debating whether and how to use AI-assisted feedback in writing instruction as these tools become common in higher education. This talk reports the preliminary phase of a larger project examining the pre-adoption landscape of AI feedback on student writing at an English-medium instruction university. We focus on students' informal AI practices before curricular integration, faculty beliefs, and institutional uncertainty. Guided by sociocultural theory, we use a qualitative-dominant mixed-methods design: semi-structured faculty interviews, institutional document analysis, and student surveys on out-of-class AI use for writing and feedback. Interview and document data are analyzed thematically and through discourse analysis; survey data are summarized with descriptive statistics. Preliminary results suggest distinct faculty stance profiles, widespread but uneven student use of AI-generated comments outside instruction, and significant discrepancies between the instructor's goals and students' actions. The study shows how AI becomes a contested mediating tool before formal guidelines are established and provides CALL researchers and practitioners with empirically grounded insight into the circumstances, conflicts, and moral issues that influence AI adoption, informing more feasible and pedagogically sound AI-mediated feedback interventions.
Caribbean Medical Students’ Engagement and Autonomy in Blended Language Learning #4626
This study reports on the preliminary findings of a qualitative case study exploring medical students’ engagement and autonomy in a blended Spanish language course in the English-speaking Caribbean. Data come from surveys, e-learning platform student activity reports, and in-depth interviews with volunteering students. The project spans two academic years (2023–2025) and includes responses from over 500 students at a university in Trinidad and Tobago, where a new institutional policy requires all students to complete a credit-bearing foreign language course. Engagement with the online course component is key to investigating student autonomy, as students often lack genuine interest in Spanish. The course is generally perceived negatively among medical students due to heavy core course workloads and the perceived irrelevance of Spanish. Preliminary findings, emerging from an analysis of online activity completion and thematic analysis of open-ended survey questions and interviews, suggest a high level of superficial engagement with online activities. Students who demonstrated deeper engagement had positive prior experiences with Spanish. The main pedagogical implication is the need to better promote learning beyond the classroom to encourage meaningful engagement with online materials.
A Corpus-driven Study on the Use of Multiword Units (MWUs) in Parental and Child Speech #4627
Multiword units (MWUs), recurrent word combinations frequently used in language, support fluent language production (Pawley & Syder, 1983) and play a crucial role in children’s language development (e.g., Arnon et al., 2017; Skarabela et al., 2021). Adopting a corpus-driven approach, this study investigates the distribution of high-frequency single words and MWUs in parental and child speech, drawing on the Warren Corpus (Warren-Leubecker, 1982, 1984) from CHILDES. The dataset comprises home interactions involving children (mean age of 64.7 months) and their parents. High-frequency single words and two-, three-, and four-word MWUs were extracted and compared. The results revealed that (1) in terms of the high-frequency single words, the strongest correspondence is found in pronoun use, and (2) a higher degree of overlap is observed in two-word MWUs, whereas little or no correspondence is found in three- and four-word MWUs. Overall, the correspondence decreases as the length of MWUs increases. From a usage-based perspective, children’s emerging MWU production is associated with shorter, high-frequency patterns in the input, while longer formulaic sequences develop more gradually. These findings underscore the importance of examining MWUs of different lengths and suggest the pedagogical value of shorter, high-frequency MWUs for early-stage learners in both first and additional language contexts.
CALL for Communication on Location #4628
How can computing devices develop communication? College students of English need to understand “communication” as spoken exchange on equal terms, and integrating devices can promote language learning. Learning environments need to be diversified for the particular set of students, course goals, and personal expectations (Stockwell, 2019). Course components include computer access, smartphones, editing programs, and meaningful locations. Mobile-phone video shooting begins the process; a set of basic technological skills is a prerequisite. Devising scenarios to generate communication requires cooperation and spontaneous interaction. Whatever locations are available to an instructor are potential learning environments. In this case, not only places on campus but also locations within easy reach became dynamic motivators. To activate students' passive abilities, they need to be motivated to communicate about each other’s immediate knowledge and experience in English. After in-class discussion and the preparation needed to achieve a productive atmosphere, various samples of the videos produced at locations in and around campus will be shown to discuss the steps and content necessary for success. Students become more proficient with proper editing skills, as well as captioning with AI or self-captioning, to preserve valuable English interactions and enrich their learning experience. The choice is customizable to individual circumstances, but the template is universal.
Using NotebookLM as a Reflective Tool for Oral Fluency Development #4629
This presentation introduces a classroom-ready approach to using NotebookLM as a reflective tool to support the development of oral fluency in a university EFL context. Students record short audio journals several times per week, producing regular, low-stakes spoken output. They upload either a transcript or the audio file itself, which NotebookLM can transcribe; the resulting transcript serves as the basis for analysis. Best practices for generating accurate transcripts will be briefly outlined. While audio files may be uploaded directly, pre-generated transcripts are typically faster and more efficient for classroom use. Rather than requesting general corrections, students ask for feedback focused on recurring grammatical errors across their output. NotebookLM can analyze a single journal, a specific week, or all accumulated transcripts, allowing learners to control the scope of feedback. This flexibility helps students identify recurring problem areas, track changes over time, and select actionable points for reflection. By analyzing transcripts of their own spoken language, learners engage in metalinguistic reflection grounded in authentic L2 output. NotebookLM does not replace instruction or speaking practice but serves as a reflective guide, supporting a more intentional path toward oral fluency.
Beyond Reading: Challenges and Concerns for Embracing Human-AI Collaboration in Literacy #4630
Literacy has rapidly expanded and radically transformed in today's digital society. Language literacy should extend beyond reading to embrace human-AI collaboration as a design process of meaning-making across diverse forms. This presentation shares research-based practice implementing human-AI collaboration in English literacy. Sixty-nine undergraduate students from one Teacher Education program participated in the mixed-methods study to develop a multimodal project in their teaching disciplines. The project adopted the Thai Basic Core Curriculum Strands and Standards to design locally relevant teaching materials for lesson design, teaching practice, and a virtual poster exhibition. Drawing on collaboration among AI-generated responses, peers, and teacher-guided feedback, students created storyboards, verified AI suggestions, and edited changes. Results from the reading assessment and a digital literacy inventory informed a vision of AI-mediated literacy for enhancing language learning and preparing for real teaching. Students significantly improved in both reading and digital literacy. Their reflections on adapting AI technologies, drawn from the inventory, student narratives, user logs, and feedback, covered four dimensions: agency and interactive dialogue, critical evaluation and accountability, multimodal and creative synthesis, and self-awareness and framing. Implications address challenges and concerns for leveraging human-AI classroom practices through literacy experiences with discipline connectivity in language teacher education.
Ways to use AI for writing classes without just copy/paste, and student survey results #4632
AI chatbots have been shown to have positive effects on students' learning outcomes, mainly due to the delivery of quick feedback, yet other studies have found that both students and educators have mixed perceptions of AI feedback, preferring it in supplementary form alongside educator-delivered feedback. How can teachers teach AI usage, give both kinds of feedback, and develop AI critical thinking? In this presentation, a task designed to tackle this problem will be explained. Participants (N=25) were second-year university students taking a required English writing class for one 15-week semester. In week 1 (W1), students were taught how to use ChatGPT efficiently. They then completed weekly writing tasks during W2-W13, where they were required to make a mind map, write for 10 minutes without technology, and then edit their writing with AI. The teacher also provided individual and general class-wide feedback. In W14-W15, a survey was conducted regarding AI instruction, usage, and opinions and preferences on both types of feedback. The survey results indicated positive perceptions of using AI and interest in more instruction and usage of AI for writing, as well as confirming a preference for AI feedback alongside educator feedback.
Student Experiences and Reflective Practice in Virtual Exchange in a Japanese High School #4633
This practice- and research-based presentation focuses on Virtual Exchange (VE) in a Japanese high school classroom. Students were at A1–A2 English proficiency levels, with many exhibiting low confidence and limited motivation to study English. VE sessions were conducted in regular English lessons in collaboration with various overseas secondary schools to enhance motivation through meaningful intercultural interaction. Although students actively engaged in conversations with international peers, the sessions gradually became enjoyable one-off events rather than learning opportunities for developing deeper intercultural understanding. To explore students’ perspectives, qualitative research was conducted on how Japanese high school students experience VE. Retrospective interviews were carried out with students who participated in VE sessions, supplemented by an interview with their current teacher and reflective practitioner notes. Findings indicate that VE had a lasting positive impact on students’ motivation, with many recalling their interactions with international peers vividly. However, there was little evidence of students reflecting on their peers’ culture or demonstrating intercultural insights. Based on these findings, a reflective activity was developed to make VE a more meaningful educational experience. This presentation will provide insights into students’ experiences of VE and introduce a reflective activity that can be implemented in VE classrooms.
From Error Correction to Writer Development: EFL Student Writing Before and After AI #4634
This presentation compares student writing produced before the availability of generative AI tools with writing produced in the current AI-integrated classroom, drawing on samples from the same intermediate-level English writing course at a Japanese university. The study examines two comparable cohorts: approximately 30 students in 2022 (pre-AI) and in 2025 (post-AI). Both groups used the same textbook, writing tasks, and instructional approach. In 2025, “AI support” referred primarily to guided use of ChatGPT for language refinement with clear classroom policies on responsible use. A comparison of selected writing samples shows a reduction in grammatical error frequency per 100 words and improved readability scores in the AI-supported cohort. Before AI integration, instructors devoted substantial time to correcting surface-level errors, limiting attention to organization, argumentation, and writer voice. With AI-assisted drafts, grammatical accuracy improved sufficiently to allow more sustained focus on higher-order concerns, although challenges in coherence and rhetorical effectiveness remain. By presenting anonymized samples from both periods, this study demonstrates how AI reshapes instructional priorities. Rather than replacing writing instruction, AI mediates it, enabling a shift from error correction toward content development and rhetorical awareness. Attendees will gain practical strategies for designing AI-integrated writing classes that prioritize strengthening ideas and structure over sentence-level correction.
Beyond Bans and Blind Trust: Navigating Ethical Boundaries and AI Misuse in Japanese EFL Classrooms #4635
This presentation reports findings from a study examining how Japanese university EFL students use AI tools for academic tasks, how they interpret ethical boundaries, and how emotional and cultural factors shape decision-making. Data from survey responses and open reflections reveal that students want guidance rather than prohibition and view AI as a supportive tool rather than a replacement for learning. The findings highlight both successes and failures in classroom integration, ranging from increased confidence and clarity to academic dishonesty cases and AI misunderstanding of student needs. The presentation connects these classroom realities to broader questions of responsible AI use in CALL and considers what it means to “prevail or fail” when integrating emerging technology into language education.
Utilizing An AI Chatbot to Assist with the Development of Learner Self-Regulation #4636
In this presentation, a project investigating the use of a generative AI chatbot to support the development of self-regulated learning (SRL) skills among university EFL learners will be discussed. Core SRL processes—including goal setting, planning, and strategy selection—often require co-regulation and scaffolding from instructors. However, this can potentially undermine learner autonomy, while variables such as class size and schedules can make it difficult to provide assistance when needed. One alternative is shifting the locus of co-regulation to AI systems capable of generating context-specific learning goals and study plans, allowing students to generate bespoke goals tailored to individual needs and interests. Using the poe.com platform, a chatbot was configured and implemented to guide students in creating, revising, and refining learning goals. In parallel, students maintained study logs to document progress, identify obstacles, and adjust goals or strategies in response to difficulties encountered. Initial findings suggest that students perceived the chatbot as helpful for goal generation and initially engaged with its recommendations, although engagement decreased over time. Mixed-methods data drawing on sources including chatbot records, study logs, and a reflection questionnaire will be presented to illustrate how AI-supported interventions may be effectively tailored to the needs of language learners.
Designing Accessible Virtual Lessons: Teaching Motion Verbs Through Sound #4637
Virtual worlds are often used in second language (L2) teaching to visualize motion events and support learning of verbs such as come and go in Japanese. However, most designs rely heavily on visual input, limiting accessibility for visually impaired learners. In light of Title II of the Americans with Disabilities Act (ADA), which mandates that public institutions ensure equal access to digital instructional materials and services, accessible lesson design has become both a legal and pedagogical priority. This presentation introduces a prototype lesson design that enhances visual representations of motion with systematic auditory cues. In this model, changes in sound—such as increasing volume, directionality, and foreground/background contrast—work alongside visual input to represent movement toward or away from a speaker. The session demonstrates how these cues are carefully mapped onto motion meanings and embedded into a screen-reader-compatible virtual lesson segment. Rather than reporting a completed experimental study, the focus is on instructional design principles and practical implementation. Designed for language teachers, CALL practitioners, and TESOL graduate students, this session offers concrete strategies for creating more accessible technology-enhanced lessons and provides an adaptable framework for incorporating multimodal input into virtual language learning environments.
From Data to Dialogue: Using Corpus and AI to Enhance Self-Directed English Speaking #4639
Technology is advancing rapidly, and its use in language teaching and learning is increasingly encouraged. This study aims to design a Corpus- and AI-aided self-directed English speaking training programme, examine its effectiveness, and explore learners’ attitudes toward this approach. Thirty-one undergraduate EFL learners participated. They completed a pre-test, a five-session Corpus- and AI-supported speaking training, a post-test, a learning portfolio, a survey, and a follow-up interview. Findings showed that participants’ overall speaking performance improved after the training, along with gains in four subskills: lexical resource (LR), grammatical range and accuracy (GA), fluency and coherence (FC), and pronunciation (PN). Among these, the greatest progress occurred in LR, while PN showed the least improvement. Learners agreed that the inclusion of a spoken corpus enabled them to identify linguistic features influencing oral proficiency, and that AI tools were effective in supporting speaking practice. The AI-enhanced environment also promoted interactive, self-directed learning. Most participants expressed positive attitudes toward the combined Corpus and AI-aided approach and demonstrated willingness to continue interacting with AI tools despite some limitations. However, a few learners reported insufficient AI literacy and concerns about the accuracy of AI feedback. Overall, the study provides theoretical and pedagogical insights for enhancing English speaking instruction.
Gen AI and Second Language Writing: A Corpus-Based Multidimensional Study of Engineering Undergraduates #4640
The emergence of Generative AI (GenAI) has significantly transformed the learning and writing practices of students in higher education. With the support of advanced large language models (LLMs) like GPT-4, students can now complete course assignments with greater quality and efficiency. Previous studies have extensively examined the situational and linguistic differences between human-written and AI-generated texts across various registers, highlighting the fundamental linguistic distinctions between the two (Goulart et al., 2024; Barbara et al., 2024). This study analyzes a corpus of final-year research reports written by undergraduate engineering students from an L2 context, aiming to identify dimensional variations in texts with differing levels of AI intervention (measured by AI scores) following the release of ChatGPT. Employing both qualitative and quantitative methods, we applied Biber’s (1998) multidimensional framework to examine lexico-grammatical features and distinguish AI-mediated texts at varying degrees from original student-authored reports. The findings reveal that reports with high and low levels of AI intervention differ in 19 linguistic features across three textual dimensions. Compared to Biber’s established genre profiles, the low-AI intervention group aligned more closely with academic writing, whereas the high-AI intervention group exhibited features resembling non-academic genres.
A Session-Level Framework for Analyzing Learner Engagement in Student–AI Interaction #4641
Research on AI-based conversational tools has consistently reported positive learner perceptions. However, this alone provides limited insight into whether AI use meaningfully supports language learning. Although some studies have compared AI-assisted and non-AI conditions, this research often offers little explanation of how learners engage with AI during interaction.
This study moves beyond perception- and outcome-focused evaluation by examining what learners do during AI-mediated interaction itself. The study introduces L-CARES (Learner-Centric Analysis of Response and Engagement Sequences), a session-level analytical framework designed to capture observable learner engagement across complete student–AI dialogue sessions. Using transcript data from 18 first-year Japanese English majors who completed weekly chatbot role-play tasks across two academic terms (22 sessions total), L-CARES examines patterns of contingency, agency, attentional repair, elaboration, discourse management, and self-monitoring.
Preliminary application of the framework demonstrates how session-level engagement patterns can be identified and compared across AI-mediated interactions, helping explain variability in learning trajectories despite consistently positive learner perceptions. Rather than evaluating effectiveness in terms of access or usage frequency, the framework foregrounds how learners interact with AI within task constraints. The presentation concludes by discussing implications for CALL research and classroom practice.
Running a Game-Based Teaching/Research Engine in the Red: Failing, Prevailing, and Downshifting in DGBLLT #4643
This presentation examines what happens when a game-integrated Pedagogy of Multiliteracies project prevails in terms of learning and research outcomes, but fails as sustainable practice. Drawing on design-based action research across two Japanese university cohorts, I reflect on a DGBLLT design that used board and digital games, student-as-researcher tasks, and mixed-methods data to support language, literacy, participation, and well-being. The project produced strong outcomes, including gains in off-list vocabulary, grammar, speech acts, and students’ self-rated curiosity and happiness. However, these gains came at a cost. More than 1000 hours of design, implementation, feedback, and analysis created recurring pressure points, including grading overload, data avalanches, uneven group dynamics, and physical and emotional strain. One major lesson was that rich instrumentation can support valuable evidence of learning while also making a project difficult to sustain. In response, I propose a “permaculture” approach to DGBLLT: reusing open materials, simplifying research and assessment routines, embracing constraints, and protecting space for play. The session argues that the key question is not only whether game-based teaching succeeds, but whether it can be maintained, adapted, and shared without exhausting teachers. Participants will leave with practical heuristics and tools for redesigning CALL projects for both impact and sustainability.
An Integrated AI Analysis of Grammatical and Lexical Patterns with Feedback in Academic Writing #4645
This presentation examines the potential of the free version of GPT-4o as an AI-assisted tool for cross-textual error analysis in EFL academic writing. The dataset consisted of ten academic essays of approximately 600 words each written by second-year undergraduate students enrolled in an elective English course at a public university in Japan. The essays, focusing on disease-related risk factors, were analyzed collectively using structured prompts designed to elicit systematic error categorization, frequency reporting, and simple feedback. The analysis identified error types widely documented in SLA research on Japanese learners, with article misuse accounting for approximately 33% of all identified errors, followed by preposition errors (18%) and subject–verb agreement errors (15%). Rather than discovering new categories, the study evaluates whether ChatGPT can rapidly aggregate patterns across multiple texts, quantify their relative distribution, and prioritize high-impact errors for instruction. From a CALL perspective, the primary contribution of this study lies in instructional mediation. Through the use of AI for common writing error detection and correction, teachers can spend more time focusing on deeper dimensions of writing, such as the clarity of arguments, coherence, and logical structure.
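The cross-textual aggregation step described above (tallying categorized errors across essays, quantifying their relative distribution, and ranking high-impact types) can be sketched as follows; the error categories and counts here are purely illustrative, not the study’s data.

```python
from collections import Counter

# Hypothetical per-essay error tallies, as might be extracted from a
# structured categorization prompt (names and counts are illustrative).
essay_errors = [
    {"article": 7, "preposition": 3, "subject-verb agreement": 2},
    {"article": 5, "preposition": 4, "subject-verb agreement": 3},
    {"article": 8, "preposition": 4, "word order": 2},
]

# Aggregate across all texts and rank error types by frequency.
totals = Counter()
for tally in essay_errors:
    totals.update(tally)

grand_total = sum(totals.values())
report = [
    (error, count, count / grand_total)
    for error, count in totals.most_common()
]
for error, count, share in report:
    print(f"{error:25s} {count:3d}  {share:5.1%}")
```

Ranking by share rather than raw count makes it easy to prioritize the highest-impact categories for instruction, regardless of corpus size.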
When Things Fall Apart: Productive Failure and Recovery in Three Virtual Exchanges #4646
Virtual exchanges (VEs) provide opportunities for authentic communication and intercultural learning, yet real-world implementation often involves unexpected challenges. This presentation examines three cases of productive failure across different contexts. The first case involved two classes of 25 Japanese university students (A2–B1) paired with classes of approximately 50 students in Spain and Türkiye (B2–C1). A substantial proficiency mismatch required strategic grouping, task reframing, and expectation management to maintain engagement. The second case connected 220 Japanese students with 180 Korean students (A2–B1) in a large-scale VE. Despite detailed planning, technical problems and inconsistent partner procedures disrupted activities, underscoring the importance of deliberately simple platforms and task designs. The third case concerned a Kaken grant-funded VE with a Korean university that collapsed shortly before launch. Rapid redesign and the use of alternative VE resources enabled students to complete meaningful activities despite the cancellation. Across these cases, each setback offered clear reasons to terminate the exchange, yet quick decision-making and reflective redesign led to more resilient systems. The presentation concludes with practical strategies that other instructors can adapt to strengthen their own VEs and, ideally, inspiration to develop creative solutions when facing their own unexpected difficulties.
AI-Mediated Revision and Reflective Writing #4649
Generative AI writing tools are increasingly used by university students, and their pedagogical potential in EMI classrooms is often discussed in relation to language accuracy and proficiency development. This presentation reports on the use of an AI-based writing and revision workspace in an undergraduate EMI class that emphasizes content explanation rather than English instruction. In this class, students first read course texts and produced short written explanations interpreting the author’s main argument. They then used the AI workspace to reorganize or reframe their texts. Rather than focusing on grammatical correction, students compared their original and AI-revised versions to examine shifts in meaning, argumentative emphasis, and stance. The AI workspace was used as part of the revision process, with reflection centered on textual changes rather than tool usage itself. A brief pre–post reading task explored whether students became more attentive to stance and argumentative emphasis after the activity. Classroom observations and student reflections suggest that AI-mediated revision made interpretive changes more visible, prompting students to reconsider how clearly and faithfully they represented the author’s position. The presentation discusses how such activities may inform assessment practices by foregrounding interpretive judgment and stance awareness as observable components of reading development.
Affective Entanglements: Rethinking Criticality in Human-AI Learning Practices #4650
Research on AI in education often remains anthropocentric, separating humans, ‘us’ (the knowing and agentive actants), from AI/technology, ‘it’ (the object, the liability, or the output). This framing simplifies the complexity of human-AI intra-actions and shapes how we understand what it means to be ‘critical’ with AI. Adopting a posthumanist perspective (Barad, 2007; Jones, 2025), this paper rethinks criticality as part of our AI-mediated language learning.
This study draws on a novel digital ethnographic method using multimodal digital portraits with 50 Chinese, Thai, and Dutch university students. Using elicitation prompts and follow-up interviews, these digital portraits trace how learners feel, understand, and make sense of their intra-actions with AI. Data were analysed through iterative, reflexive coding informed by posthumanist theory, attending to affective, embodied, and discursive dimensions of human–AI intra-actions.
Findings position criticality in AI intra-actions as the ways in which language learners reconfigure and reshape relations: what happens when learners and AI together reshape the boundaries of what counts as knowing, learning, and understanding. For educators, this work reframes critical engagement with AI as an emergent, embodied practice and offers the digital portrait as a pedagogical space for exploring how learners make sense of AI in language learning contexts.
Connecting VR Immersive Experiences with L2 Narrative Writing Tasks: Emotional Perspectives and Pedagogical Issues #4651
The L2 writing process has been analyzed using cognitive process models of planning, translating, and reviewing (Flower & Hayes, 1981), or through sociocultural lenses as a mediated activity influenced by discursive genre and writer identity (Canagarajah, 2018; Lea & Jones, 2011; Matsuda, 2015). Although research has emphasized the importance of emotions and motivation for L2 learners, the influence of emotional resources on L2 writing remains underexplored. This study examines the effects of emotionally differentiated narrative tasks on linguistic complexity and emotional engagement in L2 writing. Forty-one intermediate French-as-a-foreign-language (FFL) learners at an Emirati university completed three narrative tasks over three weeks, using memory-based and immersive 2D/3D VR stimuli. Presence (ITC-SOPI) and affect (PANAS) questionnaires were administered. A total of 123 texts (averaging 200 words) were analyzed with MAXQDA to measure linguistic complexity, sensory density, and emotionality. Results showed the role of emotions in the writing process and performance (Feng & Ng, 2023; Guan et al., 2024), as well as the value of VR-supported L2 writing pedagogy (Makransky & Petersen, 2021). Whereas VR stimuli and stronger emotional involvement encouraged a more personal writing style, high emotional load was associated with reduced coherence and weaker self-regulation. These findings redefine success in VR L2 writing: emotion boosts engagement and voice, yet can impair coherence and self-regulation, requiring calibration and scaffolding.
What do large-scale data on AI tell us? Exploring students’ perceptions of teachers’ use of AI tools in English language teaching and assessment #4652
Although AI is increasingly used by teachers in university language classes, how students perceive teachers’ use of AI in instruction and assessment remains unclear. This study aims to examine university students’ perceptions of teachers’ use of AI in course instruction and assessment. A large-scale survey was conducted in mid-2024. Participants in this mixed-gender study were 990 Japanese EFL undergraduates aged 18 to 23. Among the 55 broad-ranging survey questions, this presentation focuses on the results obtained regarding students’ perceptions of teachers’ use of AI tools in classroom activities and assessments. More than 52% of the respondents reported that GenAI was used in some way in their university English classes, particularly for writing (48%) and reading (39%), whereas 46% of respondents wanted GenAI to be used more in speaking instruction. Sixty-five percent of students preferred writing assessments by teachers, or by teachers collaborating with AI, over AI-only assessments. Additionally, approximately 63% reported that they still need to learn English, even if AI replaces many English-language tasks. These findings indicate that both teachers and students should be responsible for the use of AI, and they provide evidence to support its use in classrooms.
Implementing Real-time, Rubric-based Peer Assessment in EFL Presentation Courses: Evidence on Alignment, Comment Quality, and Perceptions #4654
Peer assessment can improve learners’ performance (Double et al., 2020), yet its impact may be reduced when feedback is delayed (Shute, 2008) and peer comments vary in quality (Topping, 1998). This paper reports a PhD pilot of a low-cost workflow that collects rubric-based peer feedback during academic presentations and provides an immediate consolidated report. In two university EFL presentation courses in Japan, students (N=89) used Google Forms/Sheets to rate peers on an analytic rubric and add comments; the instructor completed the same rubric as a benchmark.
Data include peer and teacher scores, peer comment texts, and a post-task questionnaire. Analyses examine peer–teacher alignment by criterion, comment specificity/usefulness, and student perceptions of usability and fairness. Results show high participation and rapid feedback delivery. Peer scores were generally lenient, with the largest gaps on language accuracy and delivery, and closer alignment with the teacher’s ratings was associated with more specific, actionable suggestions.
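The criterion-level alignment analysis described above could be sketched along these lines; the rubric criteria and scores below are illustrative placeholders, not the study’s data.

```python
# Hypothetical rubric scores (1-5) for one presentation: the mean of
# peer ratings vs. the instructor's benchmark rating, per criterion.
peer_means = {
    "content": 4.6,
    "organization": 4.4,
    "language accuracy": 4.5,
    "delivery": 4.3,
}
teacher = {
    "content": 4.2,
    "organization": 4.1,
    "language accuracy": 3.6,
    "delivery": 3.5,
}

# Signed gap per criterion: positive values indicate peer leniency.
gaps = {c: round(peer_means[c] - teacher[c], 2) for c in teacher}

# The criterion with the widest peer-teacher gap flags where peers
# most need rater training or clearer rubric descriptors.
widest = max(gaps, key=gaps.get)
```

In this illustrative example the largest gaps fall on language accuracy and delivery, the pattern the abstract reports; the same computation extends naturally to per-presenter or per-class summaries in a spreadsheet export.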
The presenter will also outline future research directions integrating GenAI support as a Trainer (to develop higher-quality, criterion-referenced feedback) and a Synthesizer (to consolidate peer inputs into actionable reports) (Zhan et al., 2025). The session will benefit language teachers and CALL researchers seeking scalable speaking assessment and peer feedback practices.
Exploring GenAI-enhanced Business English Presentation Skills in a Mixed Class of College Students and Workplace Employees #4655
This study examined the effectiveness of a business English presentation course enrolling 24 participants, including 18 undergraduate students and 6 full-time employees from a retailing company. A 7-week workshop alternated weekly between (1) conventionally prepared presentations created through manual drafting and visual design, and (2) GenAI-enhanced presentations developed with tools such as Google NotebookLM and Napkin AI. The study explored whether learners’ performance varied across these formats and how their perceptions of presentation learning shifted over time. Data sources included performance assessments conducted by an industry HR manager and the instructor using a shared rubric, pre- and post-course questionnaires on confidence and perceived learning, and post-course interviews on the strengths and challenges of each preparation mode. The findings indicate limited differences in overall performance, with English proficiency serving as the strongest predictor. GenAI-enhanced presentations offered notable advantages in efficiency and visual quality, while conventionally prepared work better supported idea development, negotiation, and collaboration. Rather than competing approaches, the two formats functioned as complementary pedagogical pathways that cultivate presentation literacy, workplace problem-solving, and authentic communication skills.
From Prompts to Performance: Investigating AI Use Motives and Writing Behaviors in Nursing Education #4656
This study explored nursing students’ motivation and behavioral patterns in a GenAI-mediated English copywriting task focused on medical innovation. Participants were 43 second-year nursing majors in a required ESP course at a private Taiwanese polytechnic university. The core task required students to identify a clinical nursing challenge and "invent" a conceptual medical product, followed by creating a promotional campaign. Using AI as a collaborative partner, students applied the AIDA framework (Attention, Interest, Desire, Action) to structure their persuasive copywriting.
Quantitative results indicated high intrinsic and utility value toward AI-supported learning. Notably, higher frequencies of AI prompt iterations were positively associated with stronger self-regulation and organization scores on the MSLQ. Qualitative data revealed that high-performing students utilized iterative prompting to balance technical clinical specifications with empathetic marketing language. While AI improved fluency and reduced writing anxiety, participants noted challenges in maintaining a unique "human" voice. This process—navigating between complex clinical logic and persuasive communication—served as a primary indicator of developing digital literacy. Overall, the findings suggest that AI-assisted innovation tasks can significantly enhance motivation, provided that explicit instruction in prompt design and the AIDA framework is integrated into the curriculum.
Becoming an Independent Business Presenter: Enhancing ESP Presentation Skills Through GenAI-Based Practice #4658
This study examined how three instructional methods influenced the development of independent learning skills and business presentation performance among 18 college EFL learners with mixed English proficiency levels. Over a nine-week Business Presentation in English workshop, students engaged in one of three practice conditions: (1) an AI-mediated practice model using generative AI tools, including HeyGen, to create avatar-based presentation simulations; (2) a self-recorded video practice model in which learners uploaded and reviewed their own rehearsal videos; and (3) no additional training beyond standard instruction. Three research questions guided the inquiry: (1) Which instructional method produced the strongest presentation performance? (2) Which practice mode proved most suitable for learners at different proficiency levels? (3) How did students perceive the benefits and challenges of each method? Instructor and HR-specialist evaluations indicated that the AI-mediated practice model yielded the strongest overall presentation performance, particularly in organization and delivery. The findings further showed that generative AI tools, which created avatar-based presentation simulations, functioned as highly valued models that students followed to refine their own performance. However, individual differences—including English fluency, level of commitment, and degree of prior preparation—also substantially shaped learning outcomes. Survey and interview data revealed generally positive perceptions of AI-supported rehearsal, especially among lower- and intermediate-proficiency learners.
Understanding the Role of Digital Multimodal Composition in CLIL Content Learning: Insights from a Legal English Course #4659
This study investigates how digital multimodal composing (DMC)—the creation of meaning through the integrated use of modes such as written text, images, audio, video, and animation—supports disciplinary content learning in a content and language integrated learning (CLIL) context. The study is situated in a legal English course for university-level English majors, where students engaged in DMC tasks such as scripting, filming, and editing short explanatory videos on legal topics. Although prior DMC research has largely focused on language instruction, its potential for facilitating subject knowledge development in content-based courses remains underexplored. Drawing on data from eleven student-produced videos created by 32 students, 32 accompanying reflective essays, and records of the composing process, the study examines when and how DMC contributes to content learning. Using thematic analysis and multimodal analysis, the findings show that content engagement varies across stages of the DMC process. Substantial disciplinary learning occurs during topic selection and planning, while scripting promotes the integration and reformulation of specialized knowledge for non-expert audiences. Later production stages emphasize multimodal design and involve less explicit content learning. Overall, DMC supports content learning through conceptual integration, expansion, and recontextualization, highlighting its value for higher education CLIL settings.
Ed‑Venturers in Action: A Four‑Theme Hybrid CALL–PBL Cycle for Grammar, Writing, and Presentation in a Grade‑4 EFL Class #4660
Ed-Venturers in Action is a four-theme instructional model for a Grade 4 EFL class in Thailand (N≈40) integrating CLIL and PBL, with CALL as supportive practice. It was developed to address a common pedagogical need in primary EFL: grammar lessons can feel repetitive, class time is limited, and learners often fail to transfer target forms into meaningful writing and speaking. Over eight 50-minute periods, each theme follows a two-period cycle. Period 1 uses a short CLIL text (city, country, continent, global issues) to introduce one grammar target, followed by brief digital game micro-practice (Wordwall, Quiz.zep.us, Baamboozle) to sustain attention; similar games may be assigned for home review. Period 2 guides students with sentence frames to write a 70–90 word theme summary, draft a short presentation script, and rehearse speaking. The sequence culminates in a three-panel Environmental Trifold Poster (Lampang–Thailand–World) and a group presentation. Using a one-group pre–post design, data include a 30-item grammar test administered via Google Forms on iPads, writing and speaking rubric scores, a brief SEL/GCED survey, engagement checklists, and game logs (accuracy, attempts, time). The session shares classroom evidence and ready-to-use materials for teachers seeking a practical, low-prep model.
Speaking Without Fear: Action Research on Using AI Speak-Mode to Lower Foreign Language Speaking Anxiety #4661
Foreign language anxiety (FLA), particularly related to speaking, remains a persistent challenge among Asian learners of foreign languages. With the rapid advancement of generative artificial intelligence (AI), language education is undergoing substantial transformation. Among emerging tools, ChatGPT, particularly its Speak Mode, provides real-time, low-stakes, and judgment-free interaction, offering learners opportunities for spontaneous oral practice that may relieve speaking-related anxiety. This study examines the effects of ChatGPT Speak Mode on FLA among beginner-level Japanese as a Foreign Language (JFL) learners. Two research questions guide the inquiry: (1) Does sustained AI-mediated speaking practice reduce learners’ FLA? and (2) How do learners perceive the affordances and limitations of AI-based speaking practice? Employing a quasi-experimental design, the study involves approximately 55 first-year students enrolled in beginner Japanese conversation courses at a private university in northern Taiwan. Over two semesters (32 weeks), participants engage in regular dialogue practice using ChatGPT Speak Mode. Quantitative data are collected through pre- and post-intervention FLA questionnaires and analyzed using paired-samples t-tests, while qualitative data are examined through thematic analysis. The findings aim to contribute empirical evidence to the underexplored area of long-term AI integration in JFL instruction and offer pedagogical implications for reducing speaking anxiety in foreign language classrooms.
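The paired-samples t-test mentioned above compares each learner's pre- and post-intervention FLA score against itself. A minimal sketch, using only the standard library and entirely made-up questionnaire values, looks like this:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical pre/post FLA questionnaire means (higher = more anxious)
# for six learners; values are illustrative only, not the study's data.
pre = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]
post = [3.6, 3.5, 4.1, 3.8, 3.7, 3.6]

# The paired test works on each learner's own change score.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# t statistic: mean difference divided by its standard error.
t = mean(diffs) / (stdev(diffs) / sqrt(n))
```

A negative t here would indicate an anxiety decrease; in practice the statistic and its p-value would come from a library routine such as a paired t-test function, with effect sizes reported alongside.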
Speaking English as a Lingua Franca: A COIL-based PBL Course for College EFL Learners #4662
This study reports on an innovative course that integrates Project-Based Learning (PjBL) and Problem-Based Learning (PmBL) into a freshman English listening-and-speaking class at a Taiwanese university of technology. In collaboration with a sister university in Thailand, the course incorporates a 9-week Collaborative Online International Learning (COIL) module in which cross-national student teams complete project-based tasks. During the subsequent 9 weeks, instruction shifts back to the classroom and adopts a PmBL model to strengthen problem-solving abilities and practical language use. This hybrid design merges project-based and problem-based pedagogies to enhance communicative fluency and intercultural understanding. Employing a mixed-methods approach, the study utilized quantitative scales for intercultural sensitivity (pre/post-test), teamwork, and course effectiveness. Qualitative data were gathered via semi-structured interviews and institutional teaching evaluations to explore self-perceived linguistic gains and challenges. Results indicated significant gains in intercultural sensitivity, particularly among Thai students, and high satisfaction with team interaction (Thai: 4.12; Taiwanese: 3.87 on a 5-point Likert scale). Participants reported improved listening and speaking skills, valuing the digital synergy for fostering creativity and authentic interaction. The findings highlight the potential of CALL-mediated telecollaboration to support cross-border experiential learning and offer a scalable pedagogical model for technology-enhanced EFL instruction.
Using AI-assisted narrative simulations for teaching economics in the EFL classroom #4663
Generative AI offers new ways to adapt course materials to students’ language levels and disciplinary needs. In this presentation, I share how I used generative AI (e.g., Gemini for content creation and code generation) in an undergraduate English for Specific Purposes (ESP) economics course for non-native English-speaking students to support their understanding of key economic concepts while remaining sensitive to linguistic challenges. After briefly outlining how AI helped me tailor materials in this context, the talk focuses on how AI allowed me to move beyond traditional, text-based resources. I will demonstrate the development of interactive, scenario-based simulations inspired by narrative adventure games. Developed using AI for both content creation and coding, these simulations allow students to explore economic situations, make policy choices, and immediately see how their decisions affect outcomes, encouraging active engagement and reflection on economic trade-offs. The session will conclude with a live demonstration of the custom-built tool and a discussion of its classroom use. Overall, the presentation aims to show how AI can help make economics more interactive, accessible, and even enjoyable, perhaps challenging its long-standing reputation as the “dismal science,” while offering practical ideas that participants can adapt for their own teaching contexts.
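At their core, scenario-based simulations of the kind described above map a situation and a set of policy choices onto narrative outcomes. A minimal sketch, with an entirely invented scenario and outcome text, might look like this:

```python
# Minimal branching-scenario sketch; the prompt, choices, and outcome
# strings are illustrative, not taken from the presenter's tool.
scenario = {
    "prompt": "Inflation is rising. As the central bank, what do you do?",
    "choices": {
        "raise rates": "Borrowing slows; inflation eases, but growth dips.",
        "hold rates": "Spending stays strong; prices keep climbing.",
    },
}

def play(choice: str) -> str:
    """Return the narrative outcome for a given policy choice."""
    return scenario["choices"].get(choice, "That policy is not on the menu.")
```

In a fuller version, each outcome would point to a follow-up scenario node, so that chained choices let students trace economic trade-offs across several rounds; the AI assists by drafting the scenario text and outcome branches at the target language level.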
Prevail or Fail? Supporting Learner Confidence in Aviation English Through Pedagogically Mediated AI Use #4664
Under the theme “Prevail or Fail?”, this presentation reports on a classroom-based intervention examining how scaffolded AI-supported rehearsal tasks may influence learner confidence among ground staff students at a Japanese aviation college. The project involves one intact aviation communication class of approximately 35–40 students meeting once per week for two consecutive sessions over six weeks. Traditional communication instruction continues in the first session, while AI-supported rehearsal activities are implemented in the second. In customer-facing aviation contexts, learners often experience communicative anxiety despite possessing adequate procedural knowledge. This project explores three guiding questions: (1) under what instructional conditions AI-supported rehearsal appears to influence confidence in handling unpredictable passenger interactions; (2) how students experience AI when framed as a rehearsal partner rather than an evaluative authority; and (3) what unintended patterns emerge, including over-reliance or heightened perfectionism. Drawing on self-regulated learning (Zimmerman, 2002) and sociocultural scaffolding (Vygotsky, 1978), AI is positioned as a structured rehearsal scaffold. Tasks include simulated passenger service scenarios (e.g., delay explanations and complaint handling) followed by guided reflection. Data consist of anonymised learner reflections, classroom observation, and instructor field notes, enabling analysis of emerging themes related to confidence development and risk.
The Feedback Loop: How ESL Learners Navigate Autonomy and Trust in ChatGPT Interactions #4665
Generative artificial intelligence (GenAI) is rapidly transforming English language learning by offering personalized, interactive support for vocabulary development (Moorhouse et al., 2025). Yet, while linguistic outcomes are increasingly studied, less is known about how learners experience these tools (Zhang, 2025). This study investigates the ways university‑level ESL students interact with generative AI and how this interaction influences their motivation, confidence, and sense of autonomy.
Eighty-seven undergraduate ESL learners, mainly L1 Japanese speakers with some Chinese and Korean participants, used ChatGPT-4o in a medical vocabulary course. Two groups completed identical tasks: one received English-only feedback, while the other utilized their first language (L1) for feedback and prompt adaptation (Moorhouse et al., 2025; Yang & Lin, 2025). Data from surveys and interviews captured perceptions, usability challenges, and learning outcomes.
Findings indicate that GenAI boosts motivation through immediate, contextualized vocabulary explanations (Xiao & Zhi, 2023). However, learners faced limitations like repetitive feedback (Yang et al., 2025) and unreliable voice recognition, which affected trust (Mompean, 2024).
The study highlights integrating GenAI within pedagogically scaffolded instruction (Park & Kim, 2025). When guided effectively, AI tools promote reflective learning, critical engagement, and greater learner autonomy (Park & Kim, 2025). These insights help TESOL practitioners understand how GenAI can be meaningfully incorporated into communicative vocabulary teaching.
Patterns of Reflective Meaning in Padlet-Based Self-Reflection: A Qualitative Study in CALL-Supported EFL Courses #4666
Digital technologies are increasingly used in language classrooms to support learner reflection; however, less attention has been paid to how reflective meaning is constructed in technology-mediated environments compared to traditional pen-and-paper formats. This qualitative study examines 100 self-reflection posts written by 100 individual students across three CALL-supported EFL classes, each submitted on Padlet at the end of the course. Unlike conventional handwritten reflections, which are typically private and linear, Padlet-based reflections enabled peer interaction through comments, multimodal expression through emojis, and easy access across devices. These affordances introduced a more socially mediated and affectively expressive reflective space. All reflections were analysed using reflexive thematic analysis, with reflective segments serving as the unit of analysis. Drawing on reflection as meaning-making, the study identifies recurring patterns in how students articulated learning challenges, evaluated strategies, and developed awareness of their progress and learning needs. Rather than merely reporting task completion, students engaged in critical self-examination by acknowledging difficulties, reassessing assumptions, and identifying areas for improvement. The findings contribute to CALL research by demonstrating how interactive digital platforms can extend reflective practice beyond individual written reporting toward socially situated reflective meaning-making.
Using AI Illustration Prompts to Enhance Descriptive Writing #4667
This practice-oriented presentation examines the use of AI-generated illustration prompts in a Japanese university EFL writing course to support the development of descriptive vocabulary and adjective use. While AI tools are often discussed in terms of concerns about student misuse or overreliance, this project demonstrates how guided AI use can encourage active language production and lexical precision. Over the course of a semester, students wrote original fairy tales as part of a creative writing project. After drafting their stories, students generated AI illustrations by composing detailed prompts describing characters, settings, and actions. Initially, students failed to get the images they wanted, so to achieve their intended visual outcomes, they were required to revise and refine their language, paying close attention to adjective choice. In this way, AI functioned not as a shortcut for writing, but as a constraint that prompted deeper engagement with vocabulary and descriptive language. The presentation will describe scaffolding techniques used to support effective AI integration, and examples of student writing and illustration prompts will be shared to illustrate how learners’ use of descriptive language developed throughout the semester. The session will offer practical guidance for educators interested in incorporating AI-supported creative writing tasks into EFL classrooms.
Ludic Translingual Subtitling: Transcreation as Multimodal Pop Edutainment #4668
The rise of social media has transformed translation and language learning. Bilingual YouTubers, such as Hailey Mo and Adam, act as intercultural mediators by mobilizing English and Chinese alongside multimodal resources—including bodily cues, digital features, and pop-cultural references—to teach authentic expressions. Furthermore, they act as informal teachers by creating online English courses, effectively bridging the gap between social media entertainment and pedagogical instruction. This study explores integrating these translingual practices into the classroom through three key concepts:
• Transcreation: As AI and computer-aided translation handle technical translation, learners must focus on creatively reconstructing texts across cultures (Katan, 2016; Liang, 2025).
• Multimodal Language Education: Students use internet databases and automatic translation tools to build linguacultural skills, shifting away from traditional teacher-led authority (Raído et al., 2020).
• Ludic Practice: Creators use edutainment (Lee, 2025) and creative subtitling to make learning engaging and fun.
By analyzing conversational clips and screenshots from popular videos (e.g., comparing 7-11 cultures in Taiwan vs. the U.S.), this research demonstrates how real-world digital data helps learners discover how language functions in modern spaces. The study suggests that repurposing data-driven social media activities helps students develop the essential linguacultural skills and digital literacies required in a globalized world.
Beyond the Hype: A Mixed-Methods Analysis of Successes and Setbacks in AI-Assisted EFL Instruction in Vietnam #4669
With AI chatbots moving from novelty to mainstream instructional tool, it is imperative to determine what constitutes pedagogically "prevailing" and "failing". This study investigates how Vietnamese EFL instructors use AI tools and the elements that contribute to success versus pedagogic "friction". Using the UNESCO AI Competency Framework (2024) and TPACK, this sequential mixed-methods study (N = 40) used an online survey instrument followed by semi-structured interviews with eight of the most frequent users of AI tools. Data were analyzed to identify "critical incidents", the specific moments when AI-generated materials or responses did or did not enhance learning. The preliminary results indicate that "prevails" are typically efficient adaptations of educational materials and diagnostic feedback, whereas "fails" are most commonly identified with cognitive overload and task designs that are overly generic or insufficiently contextualized. Furthermore, the study identifies how teacher identity and institutional constraints affect the outcomes of these processes. This research provides a realistic guide to AI adoption in the Global South. The study also provides practical design principles for teacher training programs, arguing for a movement beyond basic "tool literacy" to develop "AI pedagogical resilience", so that educators have the ability to adjust when technology does not meet educational objectives.
Beyond “Time-Saving”: Designing an Offline LLM Feedback Assistant with Teacher-in-the-Loop Oversight #4670
Providing feedback on student writing is time-intensive, but using online generative-AI tools like ChatGPT raises concerns about student privacy and teacher accountability. This session reports on an open-source, offline feedback assistant that runs on teachers’ desktops/laptops using small, local large language models (LLMs). The goal is to reduce time spent on writing feedback without putting student work online while maintaining pedagogically appropriate feedback for different proficiency levels. Methodologically, the development of the assistant required a purpose-built benchmark to compare candidate LLMs on one-shot judgement tasks, such as judging the effectiveness of topic sentences and aligning claims and evidence accurately. Results from the one-shot tests on controlled, synthetic learner texts show that tiny models like the one-billion parameter TinyLlama model frequently hallucinate and make logical errors. Even larger local models, such as the 20-billion parameter GPT-OSS-20B model, prefer overly academic registers in their feedback, reducing suitability for lower-proficiency learners. For CALL practice, the findings highlight a key trade-off: privacy-focused offline feedback demands nuanced GenAI engineering but cannot substitute for teacher oversight. Teachers who wish to preserve student privacy will need new skills to review, correct and contextualize model output and to have an explicit understanding of model limitations.
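A benchmark of the kind described can be reduced to a small harness that scores each candidate model's one-shot judgements against gold labels on synthetic learner texts. The sketch below is a hypothetical illustration only: the model names, items, and the stubbed query function are placeholders standing in for real local-LLM calls, not the presenter's actual benchmark.

```python
# Each benchmark item pairs a synthetic learner text with a gold
# yes/no judgement (e.g., "is the topic sentence effective?").
BENCHMARK = [
    {"text": "Dogs make good pets. They are loyal and friendly.",
     "task": "effective_topic_sentence", "gold": True},
    {"text": "There are many reasons. Pollution is bad for cities.",
     "task": "effective_topic_sentence", "gold": False},
]

def query_model(model_name, item):
    """Placeholder for a call to a local LLM (e.g., via an Ollama or
    llama.cpp server). Canned answers let the harness itself run."""
    canned = {"tinyllama-1b": [True, True],   # over-accepts everything
              "gpt-oss-20b": [True, False]}   # agrees with gold labels
    return canned[model_name][BENCHMARK.index(item)]

def accuracy(model_name):
    """Proportion of items where the model's judgement matches gold."""
    hits = sum(query_model(model_name, item) == item["gold"]
               for item in BENCHMARK)
    return hits / len(BENCHMARK)

for name in ("tinyllama-1b", "gpt-oss-20b"):
    print(f"{name}: {accuracy(name):.0%}")
```

In a real run, `query_model` would send the item text and task instruction to each locally hosted model and parse the reply; the scoring logic stays the same.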
EiTake: An English Learning Web App for Teachers and Young English Learners #4673
EiTake is an online English learning and teaching application designed for use in Elementary School and Junior High School classrooms. It is a library of browser games and tools that appeal to students through colorful layouts, pop culture references, and reward feedback loops while aligning with standard classroom learning materials. The software is free and available for anyone to use, and is meant to act as a supplementary resource for use alongside standard English curriculum textbooks.
This project theorizes that students engage and react more positively when early English learning materials are interactive and provide some immediate form of feedback. The introduction of individual tablets in public schools has created the opportunity to expand on the use of interactive digital teaching materials. EiTake is an example of how regular teachers can create community resources for English learners and other teachers, and hopes to further measure the effects of using digital tools and games on long-term English learning. EiTake also aims to provide more freedom to teachers in designing and running their lessons by removing the need to create complex interactive materials themselves.
Using Generative AI to Create Extensive Listening Podcasts #4674
Graded language materials can be highly beneficial to EFL students; however, creating them can be time-consuming and challenging. This may explain why there is a relative lack of extensive (graded language) listening materials in many EFL teaching and learning contexts. This classroom-practice presentation reports on AI-generated podcasts as one part of an ongoing project to create language-graded materials using generative AI. The presenters will demonstrate how Large Language Models (LLMs) can be used to create graded-language scripts for podcasts, and how these scripts can be used to generate podcast-style listening materials using generative text-to-speech (TTS) platforms. The materials are currently being used with Japanese university EFL students whose English proficiency is at the CEFR B1-B2 level. The presenters will share student feedback about the podcasts, as well as the process and prompts used to create the scripts and the podcasts. This will allow presentation participants to adapt the workflow to their own teaching and learning contexts.
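As a hedged illustration of the script-generation step in such a workflow, a reusable prompt template might look like the following. The wording, parameters, and constraints are assumptions for the sketch, not the presenters' actual prompts.

```python
def podcast_prompt(topic, cefr_level="B1", minutes=5, speakers=2):
    """Build an LLM prompt for a graded-language podcast script."""
    return (
        f"Write a {minutes}-minute podcast script with {speakers} hosts "
        f"about {topic}. Keep all vocabulary and grammar at CEFR {cefr_level}. "
        "Use short sentences, define any unavoidable technical terms, and "
        "label each turn with the speaker's name so the script can be fed "
        "to a text-to-speech platform voice by voice."
    )

print(podcast_prompt("studying abroad", cefr_level="B1"))
```

The same template can then be resent with a different `cefr_level` to produce parallel versions of one episode for mixed-proficiency classes.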
Integrating AI into a First-Year Academic English Course: Scaffolding Research, Writing, and Presentation Skills #4675
This presentation reports on the design and implementation of a first-year Academic English course that systematically integrates four AI tools—Research Rabbit, Zotero, ChatGPT, and Talkpal—to support students’ research, academic writing, and presentation skills.
The course is organized around three scaffolded assessments on a self-chosen topic: (1) a research paper proposal, (2) a problem–solution essay, and (3) an oral presentation. Each assessment is paired with targeted AI-supported activities. For the proposal, Research Rabbit helps students explore a research field, identify key works, and narrow their focus, while Zotero supports source management, note-taking, and accurate citation. For the problem–solution essay, ChatGPT is used for brainstorming, outlining, language feedback, and revision, with explicit guidance on prompt design, verification of AI output, and academic integrity. For the oral presentation, Talkpal offers individualized practice in pronunciation, fluency, and delivery, complementing in-class training.
The presentation will showcase concrete lesson designs, sample activities, and reflective tasks that encourage critical and strategic use of AI rather than passive dependence. It will also address challenges such as uneven digital literacy, over-reliance on AI, and the need to strengthen students’ critical evaluation of AI-generated content, concluding with practical recommendations for integrating multiple AI tools into EAP courses.
Generative AI in Multilingual Language Classrooms: A Systematic Review of Learning Outcomes and Pedagogical Conditions #4676
This systematic review synthesised findings from 40 studies on the pedagogical impacts of generative AI in multilingual and bilingual language classrooms. Evidence shows consistently positive effects on writing, with medium-to-large improvements in vocabulary use, organisational quality, and grammatical accuracy (Gao, 2023). Speaking outcomes demonstrated the strongest gains, with year-long interventions yielding learning improvements more than twice those of conventional instruction, especially when systems provided real-time, multimodal feedback (Zhao & Hua, 2025). Affective benefits were similarly robust, including reductions in anxiety and increases in self-efficacy and self-regulated learning (Zhao & Hua, 2025). Findings for reading comprehension and critical thinking were more mixed. While several studies reported clear improvements, others found minimal effects or highlighted risks of reduced independent analysis (He, 2025; Eun & Bae, 2024). Outcome variability was explained by six mechanisms: task complexity, scaffolding design, feedback timing and modality, cultural and linguistic contextualisation, learner proficiency, and intervention duration. AI was most effective when integrated into pedagogically structured tasks that preserved learner agency, provided adaptive and culturally relevant scaffolding, and offered timely multimodal feedback. Limitations emerged when AI was used passively, lacked cultural adaptation, relied on delayed feedback, or failed to sustain engagement over time. Overall, the review emphasises that AI’s impact is contingent on thoughtful, context-sensitive instructional design.
Enhancing Chinese EFL Learners’ Connected Speech Through AI-integrated Training #4677
Recent years have seen increasing integration of AI tools in pronunciation training with proven effectiveness. However, their application to connected speech processes (CSPs) remains underexplored. English CSPs, which involve various types of sound adjustments, are widely used by native speakers in daily communication but pose substantial challenges for Chinese EFL learners in both perception and production. This study aims to develop and evaluate an AI-enhanced CSP training package comprising three components: explicit instruction, perception practice, and production practice. The package integrated materials generated by Murf (a text-to-speech tool) and feedback from Doubao (a generative AI chatbot). Its effectiveness was evaluated by comparing 18 intermediate-level Chinese university students with Northern Mandarin as L1 on sentence dictation and sentence reading-aloud tasks before and after eight online training sessions. Results indicated that participants’ perception and production of CSPs improved significantly (p < .001). However, the training effects exhibited an asymmetrical pattern across the six target CSP types (i.e., consonant-vowel linking, elision, vowel-vowel linking, assimilation, vowel reduction, and multiple) and between perception and production tasks. These findings support incorporating CSP instruction into regular English classrooms and highlight the potential benefits of strategic AI tool implementation in CSP-focused pronunciation training.
From Public Fail to Participation Prevail: Relational creativity and the global collaboration threshold in Japanese university EFL #4680
This pilot study begins with a familiar fail: digitally competent students engaged in group tasks still hesitate to participate visibly on platforms such as Padlet. It asks which creativity orientations are most closely associated with digital learning dimensions that support online participation and global collaboration. Ninety first-year students enrolled in compulsory English communication courses at a national STEAM university in Japan completed two self-report questionnaires via Google Forms in autumn 2025. The first was a creativity style questionnaire measuring internal thinking, expressive risk-taking, social attunement, and collaborative facilitation. The second was an ISTE-aligned digital learning questionnaire assessing digital citizenship, collaboration, communication, and responsible participation. Both used five-point Likert scales; data were analysed using descriptive statistics, reliability analysis, and correlation analysis. Findings reveal a digital citizenship–collaboration gap: students report confidence in responsible digital participation yet weaker readiness for outward-facing collaboration. Relationships between creativity and digital learning become visible only when creativity is disaggregated by quadrant. Relational orientations, social attunement and collaborative facilitation, show the strongest links to digital learning profiles, while expressive risk-taking shows weaker connections. These findings suggest participation in platform-based EFL tasks is a threshold shaped by the relational costs of visibility, with implications for scaffolding technology-enhanced collaboration.
Vooks.com as an Opportunity to Enhance English Skills #4681
This presentation illustrates our four-year experience using Vooks.com, an animated storybook platform, to enhance English learning among secondary school students. Vooks.com integrates text, narration, and animation, creating an immersive environment that makes reading both enjoyable and meaningful. Through consistent classroom use, we have observed significant growth in students’ reading fluency, listening comprehension, vocabulary acquisition, and overall engagement with English texts. The platform’s storytelling approach has also encouraged creative thinking, collaboration, and confidence in language use. During the session, participants will explore practical strategies for integrating Vooks.com into lessons, designing interactive activities, and assessing skill development. By sharing classroom examples and outcomes, this workshop aims to demonstrate how digital storytelling can transform traditional reading instruction into a dynamic, multi-skill learning experience that sustains students’ motivation and love for English. In 2022, forty eighth-grade students read books on Vooks.com; after four years, the number of readers had reached 250. Reading digital books has positively affected students' English learning. This presentation frames Vooks as a free-to-use educational tool and is not commercial.
Professional Development as Sensemaking in CALL Teaching Practicum With Generative AI #4684
Teaching practicum is a critical site for professional development, where pre-service English teachers must negotiate professional judgment in response to pedagogical and ethical uncertainty increasingly shaped by generative AI in CALL-related instructional contexts. Adopting a sensemaking perspective—understood as the process through which individuals interpret and respond to uncertain situations—this study examines how final-year pre-service English teachers position generative AI as a pedagogical resource within their emerging professional practice during practicum. Drawing on scenario-based tasks situated in AI-mediated teaching situations and guided reflections collected during practicum, the study explores how participants reason through CALL-related AI dilemmas. The analysis suggests that pre-service teachers engage in ongoing professional sensemaking, positioning generative AI as a contested CALL resource shaped by tensions between supporting student learning, ensuring fairness, and navigating ambiguous institutional expectations. The study conceptualizes professional development as situated judgment grounded in critical professional reasoning and compassion, with implications for CALL-informed teacher education and practicum design.
Convince Luigi to put Pineapple on his Pizza: New Interactions for novel language use with AI #4685
The rapid proliferation of large language models (LLMs) has opened new horizons for personalized, AI‑augmented learning. The ability to design new kinds of educational interactions can provide language practice opportunities that previously did not exist. With the language of persuasion, for example, AI chatbots offer the unique ability to craft interactions with a high degree of authenticity related to the social purpose of the interaction. Enter Luigi: a chatbot pizza lover who is vehemently opposed to the addition of pineapple on pizza. Your mission, should you choose to accept it, is to convince Luigi to at least try tasting pineapple on pizza (Hawaiian pizza). Students interacting with Luigi are provided with an interlocutory adversary who will not immediately acquiesce to the student's request, but who gives every student in a class the opportunity to use persuasive language. In this presentation, I will demonstrate how to achieve this by hosting LLMs locally using Ollama (no cloud services and free of cost), as well as how to download (pull) models and customize them to create unique learning interactions for educational use. I will also describe the difficulties encountered in getting models to produce language that is appropriate for the task, level, and situation.
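A persona of this kind can be defined with a short Ollama Modelfile; the sketch below is a minimal illustration assuming a locally pulled base model, and the model name, parameter value, and system prompt are assumptions rather than the presenter's actual configuration.

```
FROM llama3
PARAMETER temperature 0.8
SYSTEM """You are Luigi, a passionate Italian pizza lover who is firmly
opposed to pineapple on pizza. Stay in character, reply in simple English
suitable for language learners, and only agree to try Hawaiian pizza if
the student makes a genuinely persuasive case."""
```

The file is registered once with `ollama create luigi -f Modelfile`, after which students can converse with the persona via `ollama run luigi`, entirely offline.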
Investigating the Validity of Accessible Automated Pronunciation Assessment Using Classroom and Corpus Data #4686
Assessing pronunciation accuracy, fluency, and prosody is challenging due to substantial variability in human perceptions of speech production. Automated pronunciation assessment tools have therefore been proposed as scalable supports for both assessment and speaking development. Among these tools, Azure Pronunciation Assessment provides automated scoring across 33 languages at relatively low cost. This study examines the convergent, predictive, and construct validity of Azure’s measures of pronunciation accuracy, fluency, and prosody, with prosody operationalized according to Azure’s definition of naturalness in speech, including stress, intonation, speaking rate, and rhythm.
Analyses of approximately 3,510 speech samples from the ICNALE dataset show that all three measures are strongly associated with CEFR proficiency levels and rank among the strongest CEFR predictors when compared with established indices of lexical diversity and syntactic complexity. In addition, analyses of classroom speech data from 66 learners in Korea, Japan, and China reveal moderate to strong correlations between human ratings and Azure scores across all three constructs.
These findings suggest that Azure Pronunciation Assessment can provide valid, fine-grained feedback to support pronunciation-focused instruction and learning. However, the analyses rely on a fixed reference transcript (“Please Call Stella”), which may limit generalizability across task types, accents, and speaking contexts.
Identifying LLM-Generated Writing Through Authorship Familiarity #4687
Large language models (LLMs) have created new opportunities for writing support while simultaneously challenging the integrity of text-based educational assessment. Existing authorship verification methods, such as stylometric analyses and automated classifiers, provide probabilistic judgments that may allow plausible deniability for claimed authors. However, true authors are typically familiar with their text in ways that surrogate authors are not. Building on this insight, the present study introduces the Content Restoration Authorship Familiarity Test (CRAFT), which assesses authorship by asking claimed authors to recall and reconstruct elements of texts they identify as their own.
The CRAFT battery was piloted with 60 university students in Seoul. Participants wrote a 16-sentence handwritten text in class. An LLM then generated a second text based on that content. About 30 minutes later, participants completed two CRAFT tests, one for each text. In both texts, four sentences were inserted and five words replaced with synonyms, and participants attempted to identify or restore the original wording. Responses were scored on a 14-point rubric allowing partial credit for morphologically related forms.
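The partial-credit idea behind the rubric can be illustrated with a simple scorer: exact restoration of a replaced word earns full credit, while a morphologically related form (here approximated by a shared stem) earns half credit. The crude suffix-stripping rule and point values below are assumptions for the sketch, not the study's actual 14-point rubric.

```python
def _stem(word):
    """Crude suffix-stripping stand-in for a real stemmer."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def score_restorations(originals, responses):
    """Score attempts to restore replaced words, with partial credit."""
    total = 0.0
    for orig, resp in zip(originals, responses):
        orig, resp = orig.lower(), resp.lower()
        if resp == orig:
            total += 1.0           # exact restoration
        elif _stem(resp) == _stem(orig):
            total += 0.5           # morphologically related form
    return total

print(score_restorations(["decided", "market"], ["deciding", "market"]))
```

A production scorer would use a proper stemmer or lemmatizer, but the scoring logic (full versus partial credit per item, summed across the test) is the same shape.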
Descriptive analyses showed non-overlapping performance distributions between human-authored and LLM-generated conditions, suggesting authorship familiarity can provide a reliable behavioral signal for distinguishing genuine authorship from AI-assisted text generation.
Creating Level Appropriate Materials by Training Open-Source Large Language Models #4688
Recently, many educators have been using generative AI to create materials for their courses. However, they often report frustration with the inability of Large Language Models (LLMs) to reliably produce language appropriate to their students’ proficiency levels. This raises the question: Can LLMs be adapted to generate level-appropriate learning materials consistently? This research project aims to develop an LLM capable of producing output at each CEFR level by taking into account vocabulary, grammatical features, and lexical complexity. Three open-source LLMs were selected and fine-tuned using datasets of level-differentiated CEFR texts. This presentation explains the fine-tuning process and compares the effects of different datasets on the three models. It will also introduce tools and workflows that allow participants to fine-tune models to suit their own needs. Model outputs at each level will be evaluated against CEFR benchmarks, and the output of each model compared and shared with the participants for discussion. The presentation will conclude by considering how level-controlled AI-generated texts such as these can be integrated into courses, and the ongoing implications for material development.
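As a hedged sketch of the dataset-preparation step, level-labelled texts can be converted into instruction-tuning examples in the JSONL format commonly accepted by open-source fine-tuning toolkits. The field names, prompt wording, and sample texts below are illustrative assumptions, not the project's actual dataset.

```python
import json

def to_training_example(text, cefr_level):
    """Pair a level-labelled text with a level-conditioned instruction."""
    return {
        "instruction": f"Write a short text at CEFR level {cefr_level}.",
        "output": text,
    }

# Toy corpus: (text, CEFR label) pairs standing in for a real dataset.
corpus = [
    ("I like my town. It is small and quiet.", "A1"),
    ("Urbanisation has profoundly reshaped patterns of employment.", "C1"),
]

# One JSON object per line, the usual JSONL layout for fine-tuning data.
lines = [json.dumps(to_training_example(t, lvl)) for t, lvl in corpus]
print("\n".join(lines))
```

Conditioning the instruction on the CEFR label in this way is what lets the fine-tuned model later be steered to a target level at generation time.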
Learning with AI: How AI tools influence learner autonomy in EFL contexts #4689
Artificial Intelligence (AI) has increasingly influenced various aspects of education, including the learning of English as a Foreign Language (EFL). With the increasing implementation of AI tools, such as ChatGPT, Chatbots, and Grammar Checkers, in EFL learning, learners’ engagement with language practice and self-directed study has evolved. This research explores the impact of AI applications on learner autonomy in EFL contexts, focusing on both their facilitating and constraining roles. The study was conducted with undergraduate EFL students at a Vietnamese university who regularly used AI-assisted tools to support their English learning. Based on a mixed-methods approach using data from questionnaires, learning logs, and semi-structured interviews, the results indicate that AI enhances learner autonomy by enabling independent practice, supporting self-assessment, and providing immediate feedback. However, the findings also suggest potential challenges, including over-reliance on AI-generated content and reduced critical engagement with learning tasks. In addition, issues related to students’ digital literacy and the responsible use of AI tools emerged during the research process. These results highlight the importance of guiding EFL learners to use AI tools critically and effectively, so that technology can support the development of sustainable and lifelong learner autonomy.
Is More Always Better? A Deep Dive into How App Mode Choice Impacts EFL Listening Comprehension and Recognition #4690
This study investigated the differential effects of mobile-assisted listening modalities—Unimodal (n=35) practicing listening comprehension MCQ exercises only, Bimodal (n=35) practicing caption/transcript tracking activities and/or full/partial transcript generation (dictation or fill-in-the-blank), and Multimodal (n=37)—on listening comprehension and recognition of Egyptian EFL sophomores. Employing a mixed-methods design, the research analyzed pre/posttest scores, using Two-way Mixed ANOVA and Linear Regression, complemented by inductive thematic analysis of perception surveys. Two parallel forms of modified Cambridge B1 PET listening pre/posttests were used (adding 6 fill-in-the-blank items on the audio in Part 4 to assess listening recognition, along with those already included in Part 3). Results indicated that all modalities were comparably effective in enhancing listening comprehension. However, significant performance variations emerged in listening recognition, with both the Unimodal and Multimodal groups surpassing the Bimodal group. This suggests that applications emphasizing comprehension exercises may be more effective in developing lower-level cognitive skills, such as listening recognition, than those focused solely on word recognition. Furthermore, listening recognition was a weak predictor of overall comprehension. Participant feedback reinforced these findings, advocating for the integration of personalized, gamified, and interactive features (e.g., shadowing, speed control) and a broader range of exercise formats beyond traditional multiple-choice and gap-filling.
LingoLesson: A Platform for Authentic Speaking Practice and Teacher-Led Assessment #4481
This presentation introduces LingoLesson, a new platform designed to address three pressing challenges in language education. First, as agentic AI browsers increasingly enable students to complete traditional blended-learning tasks with minimal engagement, LingoLesson preserves learner authenticity by requiring genuine spoken and recorded interaction. Second, because oral production normally leaves no record, it can be difficult for teachers to track progress, assess performance, or identify when students slip into their L1. LingoLesson solves this through video and audio submissions that are automatically transcribed and organized for easy review. Third, LingoLesson uses AI to assist rather than replace teacher decision-making, keeping educators at the center of lesson design and assessment. The session will demonstrate how the LingoLesson Editor allows teachers to create rich multimedia lessons, maintain full agency over sequencing and content, and convert speaking tasks into visible, trackable learning. Participants will gain insight into how the platform supports communicative competence, enhances classroom practice, and fits into existing curricula. In short, LingoLesson offers a practical, classroom-ready solution for modern language programs. Think of it as a cross between Flip and Google Forms, purpose-built for language learning.
Eigo.AI: Assisting and Enhancing Human Language Learning #4482
AI should assist and enhance human-centered language teaching and learning, not replace it. Established principles in second language acquisition remain central: learners need sustained exposure to comprehensible input, meaningful opportunities for output and interaction, structured fluency development, and timely feedback. Eigo.AI is designed around these principles. The platform offers a comprehensive library of AI-generated, human-proofread lessons across proficiency levels. Each lesson integrates listening, reading, speaking, and writing through seven structured activity types, ensuring balanced skill development. Students receive immediate feedback on pronunciation, writing, and discussion performance, while teachers maintain full oversight through detailed tracking tools.
Eigo.AI also addresses a practical constraint in most programs: time. Reaching advanced proficiency typically requires more than 2,500 hours of focused study, far beyond what classroom instruction alone can provide. The platform extends structured learning beyond class hours while keeping progress visible and measurable. Teachers can monitor engagement, review student output, and intervene when needed. In this way, AI supports informed instructional decisions without displacing teacher expertise. This session will demonstrate how Eigo.AI combines established pedagogy with practical implementation, offering institutions a scalable and classroom-ready solution that strengthens learner development and preserves teacher agency.
Redesigning Reading: Next-Generation Graded Readers for the Digital Classroom - Julian Warden #4483
English Central
With the prevalence of social media and shrinking attention spans, young people are increasingly consuming information in bite-sized pieces rather than engaging with longer texts, which leads to shallower reading. BOOKR was specifically designed and built by language experts to counter these issues: featuring over 1,700 animated stories aligned to the CEFR, it integrates voice recognition, recommended reading lists, and AI-generated assessment tools to promote student engagement, assess comprehension, and provide a personalized learning experience at scale.
An AI Conversation Partner Integrated with Classroom Learning - Alan Schwartz #4484
This session introduces MiMi Chat, a Gen AI-powered tutor that serves as a conversation partner and provider of formative feedback and assessment, tightly integrated with classroom-based English language curricula. Now used at over 50 universities worldwide, MiMi gives students structured opportunities to practice speaking and receive real-time feedback aligned with CEFR “CAN-DO” goals—outside scheduled class time but directly connected to in-class instruction. Drawing on data from over 15,000 students, we examine measurable gains in speaking output and learner confidence. Case studies include use in a TED-based discussion course, a presentation course, a nursing communication module, and a cross-cultural communication program. We also share engagement metrics, feedback accuracy, and qualitative learner insights.
A One-stop AI-powered Solution for Personalized Cantonese Learning and 1-to-N Immersive Oral Training Integrated Systems #4485
This presentation describes an innovative initiative aimed at making Cantonese learning more accessible and effective for diverse learners in Hong Kong, the Greater Bay Area, and worldwide. By integrating advanced AI technologies with user-friendly digital platforms, the project supports non-local and local students, professionals, and residents in developing Cantonese skills for academic, social, and professional success. The project comprises two main components. iCanLearn, an AI-driven self-learning platform tailored for native Putonghua speakers and other learners, offers interactive lessons on pronunciation, vocabulary, and grammar, with instant feedback and a smart chatbot for real-life conversational practice. CanNTalk, the second component, is an AI-powered oral training platform that simulates group discussions with multiple AI avatars, reflecting varied personalities and communication styles to mimic authentic social and workplace scenarios. This immersive approach builds learners’ confidence and fluency in spoken Cantonese. Together, these platforms are expected to form a “learn-and-train” ecosystem, enabling seamless transitions between learning and practice while providing teachers with insights for personalized support. The flexible design allows use in universities, corporate training, and by the general public, amplifying its positive impact across the community.
Supporting Inclusive Learning Through UDL-Informed Digital Storytelling Practices #4486
In today’s AI-driven era, English Language Learners (ELLs) must navigate vast amounts of digital content, including fake news and deepfakes. Consequently, English as an Additional Language (EAL) educators need to integrate activities that foster learners’ critical thinking and digital literacy while ensuring instruction is inclusive and accessible, particularly for neurodivergent students. Grounded in the Universal Design for Learning (UDL) framework, this presentation reports findings from a qualitative case study examining the pedagogical value of project-based digital storytelling in a Japanese university communicative EAL course. The study explores how viral marketing videos, AI-supported kamishibai picture-card storytelling, and storyboarding were used to scaffold learning and prepare participants (n=14) to create collaborative, socially conscious digital narratives. These multimodal activities enabled students to critically examine local and global sociocultural issues in an authentic language-learning context. Drawing on UDL principles of multiple means of engagement, representation, and action and expression, the presentation highlights the design of instructional materials and learning tasks. Data sources included a post-project questionnaire, focus group interview, and classroom observations. Findings suggest that UDL-informed digital storytelling can enhance ELLs’ critical thinking, creativity, and digital literacies, while also revealing challenges related to time constraints and group dynamics.
Preserving the Future of Tradition: AI-Mediated Kodan Storytelling for Intercultural English Education #4487
At the National Institute of Technology, Hakodate College, a student-led VR Lab project integrated VR and AI tools into a three-day international VR Camp held in March 2026. Participants from multiple countries joined remotely to engage in English-based engineering and cultural learning activities.
This presentation focuses on Day 2 of the program, a collaboration with Ichinoseki College, and professional Kōdan storyteller Kinme Chikufutei. The project was guided by the conceptual framework KATARIUM, proposed by the collaborating teacher, which combines kataru (“to narrate”) with the idea of a museum or medium, framing storytelling as a living, reusable cultural resource rather than a fixed performance.
On-site recordings were conducted using Polycam and spatial recording tools within the ENGAGE VR platform to create Gaussian models of key performance elements, including the shakudai (storytelling table). These assets were reconstructed in a culturally appropriate tatami-stage environment.
To support accessibility for international learners, AI-assisted English voice dubbing preserved vocal quality and narrative rhythm while intentionally retaining a non-native English accent. An interactive AI avatar enabled learners to ask questions related to key vocabulary, historical background, and social context. The presentation outlines a replicable, teacher-centered workflow for sustaining cultural heritage through English education.
From Overwhelm to Workflow: Digital Tools for Productivity #4489
Digital productivity tools are increasingly promoted as cure-alls to heavy research and teaching workloads that educators face, yet practical integration often involves experimentation, setbacks, and adaptation. In this presentation, I introduce four digital tools used in my own academic workflow: Focusmate for structured accountability, Scrivener for managing long-form writing projects, Elicit.com for AI-assisted literature exploration, and Headspace as a supportive practice for sustaining focus and well-being. Drawing on reflective classroom and research experiences, I examine what has worked, what has failed, and what required rethinking and tinkering when incorporating these tools into daily professional practice. While some tools significantly improved writing consistency and research efficiency, others revealed limitations. Rather than promoting technology as a universal panacea, I emphasize thoughtful, deliberate, and critical adoption of digital tools. Participants will gain concrete strategies for experimenting with free productivity technologies, insights into common pitfalls, and practical suggestions for building sustainable digital workflows that support both teaching and research. I will encourage participants to reflect on their own productivity challenges and to approach technology integration as an evolving process of trial, reflection, and refinement.
Rethinking Assessments: The Need for Alternative Assessments #4490
This presentation outlines the various difficulties educators may face when designing assessments in the age of GenAI (i.e., work can be created by GenAI) and how educators may react by placing restrictions (resorting to pen-and-paper exams) or penalties (punishing students who use GenAI). This presentation will argue that these approaches may act as temporary band-aids but ultimately might not address the problems at hand or benefit student learning.
The presentation will share experiences from redesigning assessments for an English ‘writing for communication’ course at a Hong Kong university. Given that writing tasks are most susceptible to GenAI, an alternative approach was needed to try to measure student learning from the course. This meant that changes had to be made both to the content and assessments to still make writing feel “relevant” to students in a GenAI world.
The presentation will share what these changes were and what the actual assessments were. By sharing this experience, other educators may start to think about their own approaches to assessment and whether traditional forms are still applicable. The presentation will end with broader pedagogical and practical suggestions that educators may find helpful.
When Knowing the Words Isn't Enough: Pronunciation Features and TOEIC Listening #4491
Many first-year university students preparing for TOEIC report they "know the words but cannot hear them," suggesting a gap between lexical knowledge and spoken-word recognition (Field, 2008). This presentation describes how technology can be used in the classroom to help learners connect pronunciation features to listening performance.
Each week, three classes of students (n = 90) follow a short cycle (10 minutes): (1) identify segments of TOEIC audio they find hard to hear, (2) label the likely phonological source (e.g., reduction, linking, flap /t/, weak forms), (3) practise shadowing with that feature, and (4) submit recordings of themselves reading sentences with these forms using the LMS. This approach builds upon previous uses of shadowing (Hamada, 2016) by leveraging ASR-based pronunciation tools integrated into LMSs, such as Microsoft Teams Reading Progress (Molenda & Grabarczyk, 2022).
Analytics tracked include submission frequency, feature tags selected, and pre/post listening checks. Preliminary findings suggest (a) listening breakdowns can often be attributed to specific sound changes, (b) micro-shadowing routines strengthen both confidence and anticipation of connected speech patterns, and (c) the cycle is feasible without reducing core TOEIC practice time. The session will also discuss how teachers can integrate these cycles into their own classes.
From Templates to Creativity: Developing Deeper Canva Skills in EFL Contexts #4493
Canva has become a widely used digital tool in educational contexts due to its accessibility and attractive ready-made templates. However, many teachers and students use Canva only at a surface level, relying heavily on standard designs without fully exploring its creative and pedagogical potential. This presentation introduces practical ways Canva can be used within an institution and EFL classrooms, focusing on how both teachers and students can extend their skills beyond basic template use. Examples include the creation of teaching materials, promotional resources, and social media graphics at the departmental level, as well as student-generated posters, presentations, and short videos in classroom settings. By providing targeted guidance and encouraging experimentation, Canva can become a powerful tool for fostering originality, engagement, and learner autonomy. This presentation emphasizes how small shifts in training and task design can help users move from passive template selection to active content creation. Attendees will gain ideas for helping both teachers and students to efficiently produce visually engaging materials that reflect their own creativity while supporting language learning objectives.
AI-Mediated Feedback and Draft Revision: A Classroom-Based Study of IELTS Task 1 Writing #4495
This classroom-based study examines the effectiveness of AI-mediated feedback in supporting draft revision in IELTS Academic Writing Task 1 (table description). The study aims to determine whether structured ChatGPT-assisted revision leads to measurable improvements in grammatical accuracy, lexical resource, and the use of linking words. Fifty third-year English-major students produced an initial draft independently and subsequently revised their texts using guided prompts to obtain AI-generated feedback. The study adopted a within-subjects design. Draft 1 and Draft 2 were compared across grammatical accuracy, lexical resource, and linking devices. A post-task questionnaire was used to explore students’ perceptions of AI-supported revision, including perceived usefulness, confidence, and concerns about over-reliance. Quantitative findings indicate reductions in grammatical errors and increased lexical variation and use of linking expressions in revised drafts. Survey responses suggest improved confidence in language use, although some participants reported concerns about potential overreliance on AI suggestions. The study highlights the pedagogical potential of AI-mediated feedback while emphasizing the need for guided use and feedback literacy in CALL-based writing instruction.
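The within-subjects Draft 1 vs. Draft 2 comparison described above can be sketched as a simple paired analysis. The helper and data below are illustrative only, assuming one error count per student per draft; the numbers are invented and do not come from the study.

```python
import math
from statistics import mean, stdev

def paired_t(draft1, draft2):
    """Paired t-test on per-student scores (e.g., grammatical error counts).

    Returns the mean change (Draft 2 minus Draft 1) and the t statistic
    with df = n - 1.
    """
    diffs = [b - a for a, b in zip(draft1, draft2)]
    d_mean = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    return d_mean, d_mean / se

# Hypothetical error counts for five students (Draft 1 vs. AI-revised Draft 2)
errors_d1 = [12, 9, 15, 11, 14]
errors_d2 = [7, 6, 10, 9, 8]
mean_change, t = paired_t(errors_d1, errors_d2)
print(f"mean change = {mean_change:.2f}, t = {t:.2f}")
```

A negative mean change here would indicate fewer errors after AI-guided revision; in practice the same routine could be run per measure (accuracy, lexical variation, linking devices).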
Writing Fast, Revising Smart: Learner Experiences with AI-Generated Written Corrective Feedback #4498
This study examines the pedagogical use of artificial intelligence (AI)–generated written corrective feedback (WCF) in a first-year university writing course at a private university in Japan. Conducted over a 14-week term with three classes of first-year students (n = 62), the study implemented an eight-week intervention in which learners completed three-minute speed writing tasks followed by AI-generated WCF provided by ChatGPT to support self-correction and revision. Using a mixed-methods design, quantitative data were drawn from analyses of learner writing samples, focusing on error types, frequency, and AI-generated corrections. Qualitative data were collected through a post-intervention survey examining learners’ perceptions of the usefulness of AI-generated WCF, their self-reported proficiency using AI tools, and attitudes toward AI integration in writing courses. The study addressed two research questions: (1) Did AI-generated WCF raise learners’ language awareness in writing? and (2) What are learners’ perceptions of AI use in university writing courses? Findings indicate that repeated interaction with AI-generated feedback can enhance learners’ noticing of linguistic form and accuracy, positioning AI as a language awareness raising tool rather than solely an error corrector. However, effectiveness depends on the development of learner AI literacy, particularly for autonomous use inside and outside the classroom.
Overcoming the GenAI Capability Overhang: Building Agentic Tools for Applied Linguistics Research #4499
In a recent talk, Microsoft CTO Kevin Scott (2025) discussed "capability overhang," the gap between what AI models can accomplish and what users actually implement. This issue is present in applied linguistics, where frontier models have demonstrated remarkable potential, yet most researchers remain limited to basic chatbot interactions or structured platforms like SciSpace. This presentation argues that agentic tools like MCP (Model Context Protocol) servers, customizable skills, and adaptive agents enable researchers to develop workflows tailored to their needs, bridging this capability gap.
This presentation will demonstrate how to build agentic research workflows for applied linguistics using tools such as Claude Desktop, Notion, custom MCP configurations, and agentic platforms. Specifically, I will show how researchers can leverage these tools to: (1) discover and retrieve relevant literature across multiple databases, (2) annotate and synthesise findings with persistent memory systems, (3) store and curate research in structured knowledge bases, and (4) streamline data analysis pipelines.
Rather than replacing expertise, this presentation shows how agentic approaches amplify it by streamlining repetitive tasks while preserving human judgment. Attendees will leave with concrete strategies for building personalised AI research assistants that can evolve alongside their projects, ultimately simplifying the path from data collection to publication.
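As a concrete illustration of the tool-dispatch idea behind such agentic workflows, the minimal sketch below routes a model’s “tool call” to local research tools and logs the result to a persistent store. All function names, the request format, and the toy corpus are hypothetical stand-ins for illustration; this is not the actual Model Context Protocol API.

```python
def search_literature(query: str) -> list[str]:
    """Stand-in for a database search tool (e.g., one method an MCP server
    might expose). The corpus is dummy data."""
    corpus = {"feedback": ["paper-A", "paper-B"], "gamification": ["paper-C"]}
    return corpus.get(query, [])

def save_note(store: dict, key: str, note: str) -> None:
    """Stand-in for a persistent-memory tool (e.g., a knowledge-base write)."""
    store.setdefault(key, []).append(note)

# Registry mapping tool names the model may call to local implementations.
TOOLS = {"search": search_literature}

def handle(request: dict, memory: dict) -> list[str]:
    """Dispatch a model 'tool call' to the matching tool and log the result."""
    results = TOOLS[request["tool"]](request["query"])
    save_note(memory, request["query"], f"{len(results)} hits")
    return results

memory: dict = {}
hits = handle({"tool": "search", "query": "feedback"}, memory)
print(hits, memory)
```

The design point is separation of concerns: the agent decides *when* to call a tool, while the researcher controls *what* each tool is allowed to do, which is how human judgment stays in the loop.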
Identifying Pedagogically Grounded AI Integration Points in a University English Program #4501
This practice-oriented presentation reports on an exploratory investigation into integrating artificial intelligence (AI) within an English for General Academic Purposes (EGAP) program at a Japanese university. To enhance the program through AI adoption, the project examines where and how AI might provide pedagogical learner support, such as communication practice and formative feedback, while remaining aligned with the program’s objectives. The study draws on mixed data sources, including survey responses from over 1,000 undergraduate students and input from EGAP instructors, to identify perceptions related to AI use. Findings indicate student interest in AI for language support and idea development, alongside concerns about appropriate academic use and challenges in critically evaluating AI-generated output. Instructors expressed optimism regarding pedagogical efficacy and support and highlighted concerns related to student creativity and ethical use. In addition, a pilot study combining instructional videos with an AI chatbot was implemented to trial one possible integration point: extending listening and speaking practice beyond class time. While engagement levels were high, the results underscored the need for explicit AI literacy instruction, careful task design, and pedagogical scaffolding. The presentation outlines program design considerations and decision-making principles for AI integration in EGAP contexts, emphasizing teacher-led reflective engagement over technological dependence.
Gated AI in the EFL Classroom: Applying the PAIR Framework Through Design-Based Research #4502
This session presents a design-based research (DBR) study of an AI-integrated financial literacy project in required EFL classes at a Japanese university. In the initial 2025 design cycle, students learned the basics of financial planning through a case-study approach. The initial instruction was technology-free, but students were granted unrestricted access to AI tools for the final task. Although pre- and post-survey data showed improvements in financial knowledge, behavior, and attitudes, classroom observation revealed that many students were reproducing AI-generated content without critical reflection. In response, the 2026 design cycle will include a gated AI model utilizing the PAIR framework (Prompt, Assess, Iterate, Reflect) and Task-Based Language Learning. During the pre-task phase, students will develop content and target language knowledge through a guided case study. AI is then introduced as a mediated learning tool to evaluate, revise, and justify its use in additional case studies that students will complete in small groups. This session shows how a DBR approach can support the structured integration of AI in EFL contexts to promote language development, critical thinking, and technological literacy.
Cooperative, Competitive, or Individualistic Learning? Exploring the Impact of Different Gamified Conditions on Japanese EFL Learners’ Achievement and Motivation #4504
The role of digital technologies in language education has expanded considerably in recent years. Gamified language learning has been promoted as an innovative pedagogical approach, particularly for its potential to enhance learner motivation and engagement through interactive learning experiences. However, comparative analyses of different gamified learning conditions remain underexplored in empirical research, with none focusing on Japanese EFL learners. Grounded in social interdependence theory and sociocultural theory, this study explores the effectiveness of different social structures on students’ motivation and learning achievement in gamified contexts. Sixty Japanese undergraduate students participated in a 14-week intervention using Quizlet. Questionnaires and pre- and posttests measured participants’ learning gains and motivation, while interviews further explored their perceptions of learning experiences. Preliminary findings suggest that students in the cooperative condition significantly outperformed their peers in the posttest and showed the highest motivational intensity. Since data analysis is ongoing, implications and additional findings will be presented at the conference. Aligning with JALTCALL’s theme, rather than treating gamification as an inherently effective approach, the study investigates the conditions under which gamified environments support or hinder sustained motivation and learning gains. This offers a balanced reflection on both the pedagogical potential and limitations of gamification in technology-enhanced language learning.
Developing a Reliable Listening Placement Test with BookWidgets: A Mixed-Methods Approach #4505
This presentation outlines a practical process for designing institution-specific listening tests by combining statistical analysis with qualitative instructor feedback. The presenters were tasked with creating a listening test to re-stream approximately 800 first-year students into appropriate second-year course tiers, accommodating a wide range of English proficiency levels.
The development process consisted of five stages: creating and recording initial test items, selecting a content creation and assessment platform, trialling draft materials with instructors, administering two pilot versions to current second-year students, and conducting a classical test theory (CTT) analysis of both pilots to guide item selection for the final test.
The presentation explains each stage in detail, including the rationale for adopting BookWidgets as the test creation platform, the role of AI in early item generation, instructor feedback on test clarity and difficulty, and the decision to use teacher voice actors rather than AI-generated voices to enhance authenticity. Results from the CTT analysis will be shared. Attendees will leave with practical guidelines and replicable steps for developing reliable listening assessments suited to their own contexts.
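The classical test theory analysis mentioned above typically reduces to two per-item statistics: facility (proportion of examinees answering correctly) and discrimination (corrected item-total correlation). The sketch below computes both from a 0/1 response matrix; the data are invented for illustration and this is not the presenters’ actual analysis.

```python
from statistics import mean, pstdev

def _corr(x, y):
    """Pearson correlation of two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    sx, sy = pstdev(x), pstdev(y)
    return cov / (sx * sy) if sx and sy else 0.0

def ctt_stats(responses):
    """CTT item stats from a 0/1 matrix (rows = examinees, cols = items).

    Returns one (facility, discrimination) pair per item, where
    discrimination correlates each item with the total score EXCLUDING
    that item (the 'corrected' item-total correlation).
    """
    totals = [sum(row) for row in responses]
    stats = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        rest = [t - item[i] for i, t in enumerate(totals)]
        stats.append((mean(item), _corr(item, rest)))
    return stats

# Hypothetical pilot data: six examinees, three items
pilot = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
for j, (p, r) in enumerate(ctt_stats(pilot), 1):
    print(f"item {j}: facility={p:.2f}, discrimination={r:.2f}")
```

Items with facility near 0 or 1, or with low or negative discrimination, are the usual candidates for removal when selecting final placement-test items.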
Exploring the Impact of H5P on Learning APA Website Citations in L2 Classrooms #4506
In academic English courses, learners are often required to research and cite their sources appropriately. Teaching how to cite sources can be challenging for teachers. This blended research and practice-based study explores how H5P, an interactive content plug-in, supports learning APA-style website citations. While previous studies reported that H5P can enhance learners' engagement and motivation, evidence on learning outcomes and instructional design is limited. Addressing this gap, the present study aims to explore how H5P supports learners' engagement and understanding of citation rules. A mixed-methods approach was adopted to collect data from quiz performances and learners' feedback surveys. Seventy university students participated in a flipped learning activity by watching an H5P interactive video. The findings suggest that H5P can support learners’ engagement and understanding of website citation rules, but pedagogical impact requires careful design choices. This presentation will introduce H5P interactive videos as a tool for teaching APA citations in the L2 classroom. Participants will learn how the instructional design facilitated or hindered learning, as well as the challenges encountered during implementation. This presentation will contribute to CALL by providing insights into H5P, an underexplored tool with some implications for future research.
Fostering Intercultural Learning Through COIL #4507
Collaborative Online International Learning (COIL) has emerged as a sustainable alternative for developing global and linguistic competencies (Rubin, 2017), particularly in higher education contexts to promote intercultural competencies. This presentation outlines the design and implementation of a small-scale COIL program linking a fourth-year seminar class at a women’s university in Japan with TESOL practicum students at a university in the United States. Informed by translanguaging pedagogy (García & Kleyn, 2016), the initiative positioned participants within a collaborative learning community focused on shared inquiry into sociocultural issues through bilingual, synchronous online interaction. Data from post-program surveys and focus groups (N = 14) were analyzed to examine perceptions of intercultural engagement, language use, and professional development. Findings indicate that Japanese students developed increased intercultural self-awareness, greater confidence in expressing ideas, and enhanced metalinguistic sensitivity. Translanguaging practices reduced communication anxiety and supported deeper cognitive engagement, particularly during comparative discussions aligning with models of intercultural communicative competence (Byram, 1997; Deardorff, 2006). U.S. TESOL students reported professional growth, increased instructional confidence, pedagogical adaptability, and awareness of EFL classroom dynamics. Overall, the presentation demonstrates that small-scale COIL initiatives supported by flexible bilingual practices foster language development, intercultural learning, and teacher preparation beyond physical mobility.
AI Literacy as Situated Practice in EFL Academic Writing #4508
As generative AI becomes increasingly present in language classrooms, AI literacy in English language education is often framed in terms of technical skills, tool use, or academic integrity, implicitly positioning AI use in terms of success or failure. This study adopts a practice-oriented perspective to examine how EFL students engage in academic writing when composing with generative AI in classroom-based CALL contexts. Drawing on qualitative data from approximately 90–100 undergraduate students, the study analyzes students’ prompts, paired non-AI and AI-assisted drafts, and observable writing decisions in introduction and conclusion tasks embedded in regular academic writing coursework. Rather than evaluating writing quality or AI performance, the analysis focuses on how students shape AI assistance, negotiate control over writing decisions, and respond when AI support aligns—or fails to align—with their writing intentions. Using an inductive–abductive analytic approach, the findings identify recurring patterns of appropriation, modification, and resistance to AI-generated content, suggesting that what counts as “successful” AI use emerges through situated writing practices rather than technological outcomes alone, with implications for CALL pedagogy and writing instruction.
Learner Voices on Generative AI Use in EFL Classrooms: Perceived Benefits, Concerns, and Curriculum Integration #4510
The purpose of this study is to explore the perspectives of Japanese university EFL learners on the use of generative AI tools, such as ChatGPT. Focusing on learner voices, it examines the benefits, concerns, and attitudes toward integrating generative AI into university English curricula. The study employed a mixed-methods survey design. A total of 400 first- and second-year engineering students at a Japanese university participated in the study. Quantitative items were used to measure AI-mediated learner autonomy, AI self-efficacy, and psychological dependence. In addition, open-ended questions were used to investigate EFL learners’ perceptions of the benefits and challenges of using generative AI tools and their potential integration into EFL curricula. The findings show that students recognize benefits such as increased efficiency and support for idea generation, while also expressing concerns about over-reliance and reduced learner effort. Attitudes toward curriculum integration varied depending on learners’ autonomy and dependence profiles. These results highlight the importance of pedagogical guidance in shaping the educational value of generative AI use in EFL classrooms.
Completing ≠ Learning: AI, Minimal Effort, and Shallow Responses to an Online Self-Access Task Sheet #4512
This practice-oriented presentation examines the frequency and depth of learners' responses to an online self-access task sheet at one private Japanese university. The sheet structures students' self-access work into three stages: preparation, task completion, and reflection. Designed for use in a self-access centre, it prompts students to log task goal(s) and planned resources beforehand, describe their session task, and reflect on outcomes, challenges, and next steps. Over a semester, usage logs and response data of 150 English majors were analysed using a mixed-methods approach for two questions: How often do students engage with the task sheet, and to what extent do responses—preparation plans, task descriptions, and reflections—move beyond brief/minimal entries toward detailed, specific, future-oriented content? Quantitative measures included submission counts per student and average response length per section. Qualitative coding assessed depth (superficial vs. detailed) and specificity (vague vs. concrete strategies/plans) in a stratified sample of 100 submissions using a simple three-level rubric. Preliminary analysis revealed high completion rates, but consistent lack of depth. This ranged from minimal effort responses and AI-generated content to vaguely/superficially action-oriented reflections. The presentation concludes by exploring reasons for these patterns and measures being considered moving forward.
Human Instruction in the Age of Generative AI: Navigating Academic Integrity and Guidance Gaps in Academic Writing #4513
Generative AI (GenAI) is increasingly used in academic writing (Eleftheriou et al., 2025). Recent studies have examined how GenAI can be incorporated to support students at multiple stages of the writing process and to provide feedback (e.g. Allen & Mizumoto). While these tools offer promising opportunities, they also raise concerns and challenges, particularly regarding academic integrity. Despite growing research in this area, the role of human instructors in academic writing education in the era of GenAI remains underexplored. This study examines the uniqueness of human instruction by investigating the challenges that have emerged with the introduction of GenAI in academic writing and the strategies tutors use to address them during one-on-one sessions. Using a qualitative design, the study employs semi-structured interviews with eight tutors at an academic writing center at a Japanese university. Preliminary findings indicate that ethical concerns and a lack of clear guidance are the primary challenges tutors encounter when students use GenAI in their writing. Follow-up interviews will be conducted to identify emerging challenges and changes in instructional strategies over time. By foregrounding instructors’ experiences, this study aims to provide insights into future directions for human instruction in academic writing and more effective support for students.
AI-Mediated Feedback in L2 Writing: Evidence from a Classroom-Based Complexity Study #4514
Interest in generative AI has prompted debate about its role in second language (L2) writing instruction, particularly in relation to feedback practices. While prior research has focused on learner perceptions (Sun & Mizumoto, 2025) or global writing quality (Muñoz, Nassaji, & Bello Carrillo, 2025), relatively few classroom-based studies have examined how AI-mediated feedback affects specific dimensions of syntactic development.
This presentation reports findings from a 15-week quasi-experimental study conducted in a Japanese university EFL writing course, focusing on sentence-level syntactic complexity. Students completed timed writing tasks on identical topics throughout the semester and were assigned to one of four conditions: (1) AI-assisted revision using a custom-designed feedback tool, (2) instructor-provided feedback, (3) self-correction, or (4) a no-feedback control group. Syntactic complexity was measured using established indices, including mean length of sentence (MLS), mean length of T-unit (MLT), and mean length of clause (MLC).
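The three indices named above are simple length ratios. A minimal sketch, assuming the words, sentences, T-units, and clauses have already been counted (e.g., by manual coding or an automated analyzer), might look like:

```python
def complexity_indices(word_count, sentence_count, t_unit_count, clause_count):
    """Return the three length-based syntactic complexity indices.

    Assumes the four counts have already been obtained for a text;
    segmenting T-units and clauses is the hard part and is not shown here.
    """
    return {
        "MLS": word_count / sentence_count,  # mean length of sentence
        "MLT": word_count / t_unit_count,    # mean length of T-unit
        "MLC": word_count / clause_count,    # mean length of clause
    }

# A 120-word essay with 8 sentences, 10 T-units, and 16 clauses:
indices = complexity_indices(120, 8, 10, 16)
# MLS = 15.0, MLT = 12.0, MLC = 7.5
```

A rising MLT with flat clause-based measures, as reported below, indicates elaboration within T-units rather than added subordination.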
Results showed a significant interaction between time and group for MLT, with the AI-assisted group demonstrating greater gains in sentence-level elaboration than the instructor-feedback and control groups. No comparable advantages were observed for subordination-based measures. These findings suggest that AI-mediated feedback can function as a targeted post-instructional scaffold supporting specific dimensions of syntactic development in L2 writing.
The Hidden Social Work of Chatbots #4515
As AI chatbots become more common in language classrooms, their impact is often discussed in terms of accuracy or efficiency. Less attention has been paid to the social work these systems perform during interaction. This presentation explores how classroom chatbots, when designed as peer-like conversational partners, appear to shape student risk-taking, voice, and participation in ways that differ from typical peer discussion.
Drawing on a year-long classroom implementation with Japanese university learners, I present observations from persona-based AI chatbots used to support English communication. Examples from chat logs, student reflections, and classroom practice suggest that students often experience these interactions as less face-threatening and more generative than peer interaction, particularly during early idea exploration. These patterns align with long-standing SLA concerns regarding affective barriers, classroom silence, and reluctance to take communicative risks (Harumi, 2011; Curry & Peeters, 2025).
To interpret these patterns, I introduce AI Pragmatic Mediation (APM), a framework for examining how AI systems influence user stance and communicative choices, and Dyadence, a two-phase human-AI co-thinking process involving exploratory dialogue followed by synthesis. Rather than treating chatbots as neutral tools, these frameworks offer lenses for understanding how interactional design may shape learner engagement.
Regulating the Use of AI and Translation Tools in a Secondary EFL Classroom #4517
As translation and generative AI tools become increasingly accessible, many EFL learners using these technologies produce linguistically advanced output that exceeds their actual level of comprehension and communicative control. In response to students using translation tools to write speeches in a Japanese secondary EFL course, and later using generative AI to write essays in an intensive EFL writing course, this practice-oriented presentation describes a practical classroom intervention activity designed to regulate technology use and address student motivation. First, the presentation outlines methods for identifying AI-generated work in student writing. Then, drawing on the Extensive Reading Foundation’s levels of comprehensible input, the in-class activity simulates language comprehension levels and guides students to realize how unrestricted access to digital tools can push learners into an unproductive “Pain Zone” of language learning. Rather than discouraging technology use, the activity aims to curb a reliance on digital tools while advising students to prioritize student-generated language, particularly in preparation for high-stakes university entrance exams. A comparison of student writing samples before and after the intervention activity suggests a reduced reliance on direct replication from AI. The session concludes by discussing implications for mediating AI integration in exam-oriented EFL contexts.
Comparative Study on University Level and Student Perception of AI Usage #4521
The advancement of artificial intelligence (AI) has complicated students’ learning, especially in CALL. With students’ use of AI often preceding proper guidance, it is now crucial for teachers to understand students’ perceptions of AI use in order to set AI policies accordingly. This presentation reports on how students from two universities of different academic levels (hensachi) perceive the use of AI in English language classes and assignments, based on research conducted earlier this year. The results show that students are similarly willing to use AI in their coursework and assignments, regardless of university level. It is also notable that most participants have positive perceptions of using AI for some purposes, such as vocabulary learning, creating practice questions, and speaking practice, while they are more prone to negative perceptions of using it to create sentences for writing assignments. Drawing on these results, the presentation also provides suggestions on how teachers can present and negotiate their AI policies with students in a convincing and satisfying manner, benefiting both teachers and students and helping teachers prevail in this uncertain era of AI.
AI as a Communicative Resource: Japanese Business Professionals’ Use of AI in English-Mediated Work #4522
This study reports on a survey-based investigation into how Japanese business professionals use AI-assisted language tools (e.g., ChatGPT and DeepL) in English-mediated workplace communication. While Computer-Assisted Language Learning (CALL) research has traditionally focused on classroom-based language development, less attention has been paid to the role of AI in real-world professional communication contexts.
Drawing on the frameworks of English as a Business Lingua Franca (BELF) and multimodality, this study conceptualizes AI not simply as a learning support tool but as an integral communicative resource embedded in everyday business practices. Data were collected from approximately 100 Japanese professionals across a range of industries through an online survey examining purposes of AI use, perceived benefits, and impacts on English communication.
Preliminary findings indicate that participants use AI tools not only to improve linguistic accuracy but also for idea generation, pragmatic adjustment, and confidence-building. These patterns suggest that AI functions as a mode of communication and a partner in meaning-making rather than merely a correction tool.
The presentation discusses implications for CALL research and business English pedagogy, arguing for an expanded understanding of CALL that incorporates AI-mediated professional communication beyond educational settings, and outlining directions for future research and classroom-informed innovation and practice worldwide.
Developing AI-Enhanced Language Learning Web Applications Through No-Code Platforms #4523
This presentation introduces AI-supported web applications developed through no-code platforms to enhance English language learners’ productive and receptive skills. No-code environments enable educators and researchers to design and deploy interactive applications without programming expertise, opening new possibilities for computer-assisted language learning (CALL) and meaningful AI integration into pedagogical contexts. The project utilizes Vercel’s v0 platform and Google AI Studio for rapid prototyping of AI-powered learning applications. The current prototype allows learners to summarize or express opinions on short news articles through speech or text input. Additionally, the project includes developing a web application for conversation practice with an AI chatbot on specific news articles with feedback. AI automatically evaluates learners’ responses based on semantic relevance, lexical and grammatical accuracy, and—for audio input—pronunciation and fluency, providing individualized feedback to enhance their comprehension and production abilities. Key objectives include exploring how AI feedback can complement traditional classroom instruction through timely, adaptive responses. The development process demonstrates how no-code tools enable educators to focus on instructional design and assessment logic rather than programming syntax, democratizing access to advanced educational technology. The presentation addresses critical implementation challenges including data security, model accuracy, and feedback reliability.
When Virtual Collaboration Prevails and Fails #4525
This presentation examines where digital tools succeeded and failed in facilitating a COIL program. Reflective reports from 40 students across three Japanese universities (n=23), one German university (n=10), and one Ukrainian university (n=7) were thematically analyzed. This was part of a broader COIL project including additional Japanese universities and institutions in Indonesia and the Philippines. Students collaborated via Zoom, social media (WhatsApp, Instagram), and Google Docs. Despite interacting only virtually, the majority demonstrated measurable intercultural growth under Bennett's DMIS framework, with technology enabling participation. However, some struggled with asynchronous platforms, experiencing failed group chats, coordination gaps, and reduced emotional connection compared to in-person interaction. Almost all students identified English proficiency as the primary barrier, but manifestations of this barrier varied across cultures in technology-mediated contexts. Ukrainian students transformed fear of imperfect digital English into confident expression. Japanese students’ high-context communication norms clashed with direct digital communication, yet they similarly developed English confidence. German students navigated between their low-context directness and accommodating partners from high-context cultures, demonstrating adaptive strategies. This study argues that digital COIL design must address how cultural communication patterns intersect with technology by providing differentiated support and platform choices, while recognizing that CALL competence encompasses cultural-technological adaptation alongside linguistic proficiency.
Helping Learners Notice Their Speech Through AI Video-Synchronized Feedback #4526
When practicing speaking with a practice partner, language learners rarely remember what they actually said, or how they said it, after a session ends. Without this recall, feedback loses context. To address this, the researchers developed Pecha, a web application that records learners speaking via webcam, transcribes their speech with speaker diarization, and generates AI feedback across customizable categories like grammar and cohesion. What sets Pecha apart is that feedback links directly to video timestamps, so learners see replays of the exact moments errors occur. Combined with both written and spoken advice, learners are able to reflect on and improve their speaking skills. This design draws on Schmidt's Noticing Hypothesis, which holds that learners must consciously attend to linguistic features for acquisition to occur. Timestamp-synchronized feedback makes errors salient in a way that delayed correction cannot. The approach also aligns with Video-Stimulated Recall methodology, where reviewing recorded performance promotes deeper reflection and self-correction. This session will demonstrate the application, discuss its theoretical basis, and share early observations from use at universities in Japan and the United States, with students practicing English and Japanese respectively. Attendees will leave with practical ideas for integrating AI-assisted feedback into autonomous speaking practice.
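The core design idea of the paragraph above, that each piece of feedback carries a video timestamp so learners can replay the exact moment, can be sketched as a small data-linking step. This is illustrative only; Pecha's actual data model and function names are not published, so everything here is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One diarized transcript segment, as produced by speech-to-text."""
    speaker: str
    start: float  # seconds into the recording
    end: float
    text: str

def link_feedback(segments, feedback):
    """Attach each feedback item to the timestamp of the first segment
    containing the flagged phrase, so the UI can seek the video there.

    `feedback` maps a flagged phrase to an AI-generated comment.
    (Hypothetical helper; not Pecha's real API.)
    """
    linked = []
    for phrase, comment in feedback.items():
        for seg in segments:
            if phrase in seg.text:
                linked.append({"t": seg.start, "phrase": phrase, "comment": comment})
                break
    return linked

segments = [
    Segment("student", 0.0, 4.2, "I am agree with this opinion"),
    Segment("partner", 4.2, 7.0, "Why do you think so?"),
]
feedback = {"I am agree": "Use 'I agree': 'agree' is a verb, not an adjective."}
linked = link_feedback(segments, feedback)
# The feedback item now carries t=0.0, the start of the flagged segment,
# which the player uses as the replay position.
```

Seeking to the segment start (rather than the exact word) keeps enough context for the learner to re-hear the whole utterance.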
An Interactional Study of Student-Student Rapport Construction in a Zoom Course on EFL Small Talk #4527
Rapport plays a central role in fostering engagement, participation, and interpersonal alignment in English as a Foreign Language (EFL) interaction, yet student-student rapport in online settings remains underexplored. Drawing on Spencer-Oatey’s (2000) Rapport Management Model (RMM), this study investigates how Japanese university EFL learners construct rapport during Zoom-based small-talk interactions, with a particular focus on the interactional strategies used across verbal, paralinguistic, and embodied nonverbal domains. Using video-recorded dyadic Zoom interactions, the study adopts a qualitative, micro-analytic approach to capture interactional strategies for rapport development across complete interactional episodes. A stratified purposive sample of dyads was analyzed to explore patterns of rapport construction and potential gender-based differences in interactional practices. The study offers pedagogical insights into how rapport can be systematically analyzed in online EFL interaction. By linking theoretical constructs to observable interactional practices, the findings inform task design, teacher mediation, and the development of interactional competence in digitally mediated language learning contexts.
How slowing down my teaching helped me go from failing to prevailing #4528
This presentation reflects on a pedagogical failure that ultimately reshaped my approach to game-based language teaching. In an early attempt to integrate games through a rushed, flipped-learning model, I treated play as the core in-class activity with minimal scaffolding and an overreliance on students doing the bulk of the work outside class. While technologically convenient, this approach resulted in shallow engagement, fragmented learning, and teacher frustration. Drawing on my earlier work conceptualizing ludic language pedagogy through the metaphor of “vaporwave” (slow, reflective, and aesthetically intentional), this talk traces how abandoning speed and efficiency became the turning point. By slowing down, expanding pre- and post-game pedagogical framing, and legitimizing languaging practices such as L1 use and reflection, games shifted from "maximum student talk time" to meaningful learning spaces. Framed candidly as a journey from failure to recovery, this presentation highlights practical lessons for teachers navigating educational technologies: when innovation prioritizes pace over pedagogy, learning suffers, but when we slow down deliberately, both teachers and students can prevail.
From Blank Page to Complete Course: Designing Syllabi and Language Programs with AI #4529
Designing a coherent language course—whether for a semester, term, or short unit—requires time, expertise, and careful alignment between outcomes, materials, and assessment. This presentation demonstrates how generative AI tools such as ChatGPT can be used as a practical curriculum design assistant to build an entire language course from scratch.
Participants will see a flexible, start-to-finish workflow that works across educational contexts, including universities, secondary schools, and elementary classrooms. The session covers how to generate a course or unit outline; control and grade the language level of materials; design integrated activities for all four language skills; create active learning and critical-thinking tasks; and produce classroom-ready resources that are fully downloadable in Microsoft Word format.
Rather than treating AI as an automated shortcut, the presentation emphasizes explicit, teacher-driven prompting that embeds pedagogical intent, age appropriateness, and proficiency constraints. Attendees will leave with transferable prompting strategies, practical examples, and a clear understanding of how AI can support efficient, high-quality course design while maintaining professional judgment and instructional control.
Global Englishes in the EFL Classroom: Using Text-to-Speech Technology to Create Listening Materials #4530
Integrating Global Englishes into mainstream EFL classrooms remains challenging, particularly in Japan, where textbooks and listening materials still overwhelmingly privilege native-speaker varieties. This limits students’ exposure to audio resources that represent diverse English users in a pedagogically appropriate way. This presentation introduces a classroom-based approach using text-to-speech (TTS) technology to address this issue. Drawing on classroom practice at a Japanese university, the presenter demonstrates how Amazon Polly and Narakeet were used to create customised listening materials featuring a range of Global Englishes accents aligned with curricular and course assessment requirements. The TTS-generated audio was embedded into textbook listening activities and supported through scaffolding techniques such as guided noticing tasks and pre-listening discussion. This approach allowed lower proficiency students to engage with English varieties without increasing cognitive load. The presentation will feature example materials, classroom tasks, and student responses, highlighting how TTS can support exposure to English diversity while remaining practical and time-efficient for teachers. Participants will leave with ideas on how to use TTS tools to create listening materials in their own teaching contexts. The session concludes with a discussion of the pedagogical benefits, current limitations, and ethical considerations of using synthetic voices for Global Englishes input.
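The workflow described above, generating the same passage in different English varieties, maps onto a short script against Amazon Polly. The accent-to-voice mapping and the helper function below are illustrative assumptions (Polly's catalogue does include these voice IDs, but the presenter's actual voice selections are not stated):

```python
# Illustrative mapping from English variety to an Amazon Polly voice ID.
GLOBAL_ENGLISH_VOICES = {
    "Indian English": "Raveena",
    "British English": "Amy",
    "Australian English": "Nicole",
    "US English": "Joanna",
    "Welsh English": "Geraint",
}

def synthesize(text, variety, out_path):
    """Render `text` as MP3 audio in the chosen English variety.

    Hypothetical helper; requires AWS credentials, so boto3 is
    imported only when the function is actually called.
    """
    import boto3
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId=GLOBAL_ENGLISH_VOICES[variety],
    )
    with open(out_path, "wb") as f:
        f.write(resp["AudioStream"].read())

# Usage: the same listening passage rendered in two varieties, e.g.
# synthesize(passage, "Indian English", "dialogue_in.mp3")
# synthesize(passage, "British English", "dialogue_br.mp3")
```

Because the script text is identical across renditions, any comprehension differences students report can be attributed to the variety rather than the content.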
Positive Psychology–Based Custom GPT Coaching for English Learning and Well-Being: A Qualitative Pilot with Japanese University EFL Students #4532
This qualitative pilot study examines a researcher-developed custom GPT, the “English & Well-Being Coach,” designed to extend positive psychology learning beyond class for Japanese university EFL students. Grounded in Seligman’s PERMA framework (Positive Emotion, Engagement, Relationships, Meaning, Accomplishment), the coach prompts weekly reflection on one PERMA element and responds in CEFR A2–B1 English with brief, empathetic mentor-style guidance (validation, strengths-focused reframing, follow-up questions, and encouragement), ending with a short summary and targeted English feedback. The study aims to increase students’ everyday exposure to PERMA concepts for reframing school/life challenges while providing low-stakes opportunities for meaningful English self-expression. Over a five-week cycle (P–E–R–M–A), participants completed one coaching chat per week and provided transcripts of their conversations. Additional data include a pre-survey on prior AI use/expectations and a post-survey on perceived effectiveness and areas for improvement. The presentation reports students’ perceptions of coaching support, the role of PERMA prompts in shaping reflection, and the perceived usefulness of end-of-chat language feedback, with implications for future CALL research.
Layering Digital Communication onto Analog Work: Supporting Student Collaboration Across Institutions #4534
This presentation reports on the evolution of a multi-university collaborative learning project that integrates analog and digital tools to support student collaboration. Building on previous student engagement projects, the initiative has expanded to include additional institutions, introducing new challenges related to initiating collaboration among students who do not share a common class, instructor, or institution.
Earlier iterations of the project demonstrated that analog tools, such as instant photography, effectively fostered student engagement and reduced feelings of detachment common in digital exchanges. However, while the approach succeeded in promoting interaction, its logistics made it difficult to extend collaboration in a timely manner. In response, the current phase will build a digital layer of communication on top of the analog student work to improve the speed and volume of exchange.
Students now collaborate on presentation-based projects using Google Vids, with completed media shared via the Padlet platform. This allows participants to asynchronously view and comment on peers’ work. This study examines how combining analog and digital tools influences collaboration, participation patterns, and learner interaction. These findings should be of interest to instructors who are concerned about digital fatigue and wish to increase meaningful student engagement.
Roles of Task Complexity and Language Proficiency in AI Chatbot-Mediated English Speaking Task #4535
Previous research on technology-mediated task-based language teaching has primarily examined how technological tools and task design influence English speaking performance. However, limited attention has been given to how varying levels of task complexity interact with learners’ language proficiency. Accordingly, this study investigates the effects of task complexity and language proficiency on English speaking performance, speaking anxiety, and willingness to communicate among Taiwanese undergraduate students in AI chatbot-mediated speaking tasks. A total of 160 undergraduates participated. Eighty students with low English proficiency were randomly assigned to either a low-proficiency simple-task group or a low-proficiency complex-task group (40 students each). Another 80 students with high English proficiency were randomly assigned to a high-proficiency simple-task group or a high-proficiency complex-task group (40 students each). Data were collected through pre- and post-speaking tests, pre- and post-surveys, and semi-structured interviews. The expected results will demonstrate that different levels of AI chatbot-mediated speaking task complexity differentially affect learners’ speaking performance across proficiency levels, while also reducing speaking anxiety and enhancing willingness to communicate. Interview results will further indicate generally positive learner perceptions toward integrating AI chatbots into speaking tasks. Overall, this study offers pedagogical insights for the design of AI-mediated task-based speaking instruction.
Developing Multimodal Communicative Competence through ThingLink-Mediated Meaning-Making #4540
As contemporary communication increasingly relies on multiple semiotic modes, traditional views of communicative competence centered on linguistic proficiency require reconceptualization as multimodal communicative competence. In computer-assisted language learning (CALL), this shift calls for pedagogical designs that provide explicit instruction and systematic practice in multimodal meaning-making. This study reports on a classroom-based pedagogical innovation implemented in a university-level Content and Language Integrated Learning (CLIL) course in Taiwan, where English as a Foreign Language (EFL) students created interactive multimodal projects using the digital authoring platform ThingLink. Anchored in the United Nations Sustainable Development Goals (SDGs), the course aimed to develop students’ ability to communicate complex ideas through coordinated linguistic, visual, and auditory resources. Guided by a multiliteracies framework (Lim & Tan-Chia, 2022), instruction followed four learning processes—encountering, exploring, evaluating, and expressing—implemented through structured training activities. Data were collected from students’ multimodal artifacts as indicators of multimodal communicative competence. Findings suggest that explicit scaffolding and repeated practice enhanced visual organization and narrative flow, although challenges in intermodal cohesion persisted. The study underscores the need to redefine communicative competence in CALL as inherently multimodal and demonstrates the pedagogical value of structured multimodal literacy instruction in digital learning environments.
Leveraging AI in EMI for Holistic Student Development #4541
This study examines the outcomes of a 30-week English-Medium Instruction (EMI) program for eight undergraduate students at Globiz Professional University, conducted from April 2025 to January 2026. The curriculum employed advanced AI tools—specifically NotebookLM, Gemini, and ChatGPT—within a Content and Language Integrated Learning (CLIL) framework to enhance English proficiency, intercultural competence, and academic skills. Students engaged in diverse activities, including summarizing English Central videos, refining writing with AI, and completing X-Reading assignments. Each participant produced a 2,000-word research paper and delivered academic presentations. A unique aspect of the program was four weeks of focused, one-to-one interaction with native-speaking mentors, emphasizing global leadership, presentation skills, and research strategies. Comprehensive assessment included pre- and post-course Progos speaking tests and the CASEC computer-based test, with results demonstrating a one-level improvement in CEFR proficiency. Questionnaire responses indicated highly positive attitudes toward AI integration and strong satisfaction with AI-supported learning. Most students reported an expanded worldview and increased academic confidence. The findings suggest that this model of AI-mediated multimodal learning, combined with presentation-centred dialogue, provides an effective and scalable framework for EMI programs, equipping students with essential communicative, cognitive, and intercultural skills for global academic and professional success.
From Blank Prompts to Pedagogical Control: Supporting CI-Aligned Lesson Planning with AI #4542
As generative AI tools become increasingly accessible, many language teachers report a common frustration: although AI can generate lesson materials quickly, the output often fails to align with pedagogical intent, learner readiness, or classroom realities. This practice-oriented presentation argues that the issue lies not in AI capability, but in a mismatch between how teachers plan instruction and how AI systems are typically prompted.
Grounded in established principles of Comprehensible Input (CI) and the Zone of Proximal Development (ZPD), the session foregrounds teacher cognition rather than technology. It examines how experienced instructors routinely make intuitive decisions about learner readiness—deciding when students are ready to understand, use, or extend language—and how time pressure makes it difficult to translate these judgments into explicit lesson plans.
Through classroom examples and a brief demonstration, the presentation illustrates how AI can function as a supervised planning assistant when teachers provide pedagogically meaningful constraints. Emphasis is placed on revision, selection, and rejection of AI output as essential professional practices. A short, optional planning routine demonstrates how AI can reduce planning friction while preserving instructional philosophy. The session contributes to CALL practice by modeling how teachers can integrate AI into existing workflows without relinquishing pedagogical control.
Action Research: PBL and AI Feedback in EFL Business Letter Writing #4543
This study uniquely combines a Problem-Based Learning (PBL) approach with AI-supported learning strategies to examine their potential effects on business writing. The instructional design was informed by constructivist and scaffolding principles, using multimodal materials to support tone, style, and problem-solving. The action research project examines whether combining PBL with AI feedback tools can support critical thinking and strengthen EFL graduate students’ business-letter writing. Two classroom cycles were implemented: a complaint letter and a travel expense reimbursement task. Five students completed pre- and post-writing tasks and shared their learning experiences through surveys, interviews, and perception reports. Writing changes were tracked using Grammarly (accuracy/readability), ProWritingAid (clarity/style), and Voyant Tools (lexical density). In Phase One, creativity-related indicators showed larger but uneven shifts (e.g., style +122.2%; vocabulary density +12.4%), suggesting that open-ended problem-solving benefited some learners’ expressive development more than others. Phase Two prioritized clarity and produced smaller, steadier gains in writing quality (0.0–3.1%), along with clearer organization and a more professional tone. Students reported that peer and teacher feedback improved planning and revision, whereas heavy reliance on AI sometimes led to reduced attention to overall structure. Overall, the findings suggest that this combination can support both creative expression and analytical clarity through iterative, community-based feedback.
Singing Away Grammar Anxiety: Affective and Creative Engagement through LyricsTraining in Thai Primary EFL Classrooms #4544
This classroom action research investigates how a music-based, AI-supported CALL approach can improve grammar learning and reduce affective barriers among Thai primary EFL learners. Grounded in Krashen’s Affective Filter Hypothesis, the study integrates LyricsTraining for interactive song-based grammar tasks, Task-Based Language Teaching (TBLT) for structured pre-task/task/post-task cycles, and Project-Based Learning (PBL) for creative multimodal output. Participants are 45 Grade 5 students (CEFR A1–A2) at La-orutis Demonstration School, Lampang, Thailand, who take part in a 10-week intervention across two cycles. Cycle 1 focuses on grammar learning through LyricsTraining and collaborative analysis of target forms (present passive, will-future, first conditional, and present perfect with for/since). Cycle 2 culminates in a group mini project where students compose English songs using SUNO AI and design promotional posters with Canva for Education. Data will be collected through 20-item grammar tests (pre, post-1, post-2), an affective filter questionnaire, observation checklists, student reflection logs, and a song–poster rubric. Quantitative analyses (means, SD, paired-sample t-tests) and qualitative thematic analysis will examine changes in grammar performance, motivation, confidence, and anxiety. Expected outcomes include improved grammar accuracy, reduced anxiety, increased confidence and motivation, and high engagement through AI-assisted creative production.
ESL Speed Reading 2.0: Free Speed Reading Software for Students and Teachers #4545
This presentation showcases the first major version update of ESL Speed Reading, the free speed reading application and learner management system software suite. Speed reading is an important, yet often overlooked, avenue for developing fluency (Tran, 2012). Being able to read fluently frees cognitive resources that can be allocated to higher-order cognitive skills (e.g., text comprehension). Traditional paper-based speed reading programs can be time-consuming for teachers due to administrative overhead (e.g., distributing reading materials). Technological advances have made it easy to implement speed reading digitally. A digital medium removes the administrative burden, allowing teachers and learners to focus on the purpose of the activity: developing reading fluency.
The ESL Speed Reading software suite has been rewritten to be more responsive, and new features have been added. The presenter will demonstrate these features, explain the benefits of a digital medium compared to a paper medium using action research, and describe how teachers can use the learner management system in their classes. Attendees will receive access to a teacher account so they can bring the benefits of speed reading into their classrooms.
From Questions to Clicks: Students’ Perceptions of a CRS-Enhanced Student-Generated Questioning (SGQ) Framework for EFL Reading #4546
This study examines an original computer-assisted reading model that integrates student-generated questioning (SGQ) with Classroom Response System (CRS) technology to support strategic, collaborative reading in EFL university contexts. While CRS tools are widely used in CALL environments, most applications rely on teacher-generated questions; far less is known about how CRS can effectively support SGQ activities that require deeper learner processing. To address this gap, a 16-week intervention was implemented with 67 first-year EFL learners using a multi-component framework: collaborative identification of main ideas and supporting details, creation of specific and wide questions, and whole-class CRS-mediated peer questioning via ClassPoint. A post-treatment perception survey and semi-structured interviews with nine learners revealed consistently positive evaluations of all components. CRS-mediated peer questioning generated the highest engagement and collaboration, while main-idea identification most strongly supported text comprehension. SGQ prompted deeper processing but was perceived as cognitively demanding, particularly when formulating wide questions. Findings demonstrate the pedagogical value of combining SGQ with CRS technology to enhance comprehension, metacognitive monitoring, and collaborative engagement in CALL-supported reading environments.
Developing an N-gram–Based Spoken Lecture Corpus Tool to Support Non-Native EMI Teachers #4547
Many non-native English-speaking instructors face linguistic challenges when delivering English-medium instruction (EMI), particularly in using discipline-appropriate spoken academic phraseology. This paper presents the development of an n-gram–based spoken lecture corpus tool designed to support EMI teachers through large-scale lecture data.
The corpus was compiled from approximately 1,100 open-source academic lecture transcripts across multiple universities and disciplines, resulting in about eight million words. The tool enables users to search four- to six-word n-grams extracted from authentic lectures. Given a target word or phrase, the system generates frequent n-gram patterns and provides multiple contextualized examples from real lecture transcripts, allowing users to observe how academic language is used in spoken teaching contexts.
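The lookup described above (given a target word, surface the most frequent four- to six-word n-grams containing it) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors’ implementation; the function names `extract_ngrams` and `search` and the toy transcript are invented for the example.

```python
from collections import Counter

def extract_ngrams(tokens, n_min=4, n_max=6):
    """Collect all 4- to 6-word n-grams from a token list, with counts."""
    grams = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            grams[tuple(tokens[i:i + n])] += 1
    return grams

def search(grams, target, top=5):
    """Return the most frequent n-grams that contain a target word."""
    hits = [(g, c) for g, c in grams.items() if target in g]
    return sorted(hits, key=lambda x: -x[1])[:top]

# Toy stand-in for one lecture transcript; the real corpus holds ~8M words.
transcript = ("so what we are going to do today is look at "
              "what we are going to call the baseline model").split()
grams = extract_ngrams(transcript)
for gram, count in search(grams, "going"):
    print(" ".join(gram), count)
```

In a real system the counts would be precomputed over the full transcript collection and each hit linked back to its source passage, so users can see the pattern in context.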
The tool was introduced to a group of university instructors teaching EMI courses. Informal feedback indicates that the system was perceived as intuitive and useful for lecture preparation, phrase selection, and increasing confidence in English delivery. Participants particularly valued access to spoken academic patterns that are rarely addressed in conventional EMI training materials.
Although large-scale evaluation has not yet been conducted, this study demonstrates the potential of repurposing open lecture transcripts into practical corpus-based support tools and highlights the pedagogical value of n-gram exploration for EMI teacher development.
Designing a Gamified ASR-Based Oral Practice Game Using Google AI Studio for Young ESL Learners #4548
Recent advances in automatic speech recognition (ASR) and generative AI have enabled new forms of oral language practice beyond traditional, test-oriented speaking tasks. This paper reports on the design and classroom use of a gamified ASR-based oral practice platform developed with Google AI Studio for elementary-level ESL learners. The system integrates ASR into a fast-paced pronunciation game inspired by falling-block mechanics. Learners practice words, phrases, and sentences by speaking aloud: accurately pronounced items disappear, while inaccurate attempts cause items to fall and accumulate as “bricks,” ending the game once a preset height is reached.
The platform was rapidly prototyped using Google AI Studio to manage ASR processing, pronunciation tolerance thresholds, and prompt-based feedback, allowing flexible refinement without complex backend development. The web-based game was piloted in an elementary classroom setting.
Questionnaire data and classroom observations indicate generally positive learner responses, including high engagement, increased willingness to repeat pronunciation attempts, and reduced speaking anxiety compared with traditional drill-based activities. Teachers also reported sustained attention and voluntary practice. Although no quantitative pronunciation gains were measured, the findings suggest that gamified ASR environments can serve as effective supplementary tools for young learners and highlight the pedagogical potential of generative AI for CALL development.
From Detection to Instruction: Developing EFL Pragmatic Competence Through Human-AI Comparative Analysis of Multimodal Sarcasm #4551
This research-based presentation reports on a pedagogical intervention using human-AI performance comparisons to teach sarcasm detection to Japanese EFL learners. Initial findings revealed that native speakers achieved only 60% accuracy while non-native speakers performed at chance levels (51%) on 200 memes, with AI models achieving comparable performance (54-57%). These insights informed a three-phase instructional framework implemented across 7-8 sessions with 34 participants using a pre/post quasi-experimental design. Phase 1 built awareness of semantic-pragmatic incongruity, Phase 2 employed computational pattern analysis to identify linguistic markers, and Phase 3 addressed ambiguous cases where human-model disagreement was highest. Preliminary results suggest systematic exposure to computationally identified patterns combined with explicit metalinguistic instruction improves pragmatic competence in digital contexts. The presentation demonstrates how NLP model analysis can identify specific error patterns to guide targeted instruction, while highlighting the continued importance of human cultural expertise. Participants will learn practical strategies for integrating computational insights into pragmatic instruction and receive access to our validated multimodal corpus. This work contributes to CALL by bridging computational linguistics with pedagogical practice, offering evidence-based approaches for teaching challenging pragmatic features in technology-mediated environments.
Comparing GenAI-mediated and human-human interactions for developing EFL learners’ listening proficiency beyond the classroom #4552
This study examined the effects of three types of extracurricular listening practice (generative AI-mediated, peer interaction, and video-based) on college EFL learners’ listening proficiency. Eighty-nine participants took part in twice-weekly, 10-minute listening activities over a semester and were assigned to one of three conditions: (1) a GenAI group engaging in interactive listening practice with ChatGPT via smartphones; (2) a peer interaction group conversing with other EFL learners; and (3) a video-based listening group receiving input without conversational interaction. Quantitative data were collected through standardized listening comprehension tests, complemented by semi-structured interviews. Results indicated that the GenAI group demonstrated significantly greater gains in post-test listening comprehension than the other two groups. Qualitative findings suggest that ChatGPT’s adaptive responses, role-playing functions, and sustained interactivity facilitated more engaging and contextually rich listening practice. The peer interaction group also showed improvement, which participants attributed to increased opportunities for meaning negotiation and communicative engagement, though learning outcomes were influenced by partner availability and proficiency. The video-based group exhibited steady gains supported by consistent input and flexible access; however, learners reported limited interaction and reduced engagement over time. These findings suggest that GenAI-mediated interaction offers an adaptive supplement to classroom listening instruction in extracurricular contexts.
Multimodal Apprenticeship with GenAI Chatbots: ESL Learners’ Reflections on Vlog-Based English Oral Communication #4553
Short-form video production and generative artificial intelligence (GenAI) tools are reshaping how learners compose, perform, and refine multimodal texts. While vlogs and GenAI chatbots have individually gained traction in language education, little is known about how learners experience these tools together as part of a multimodal apprenticeship. This study examines the written reflections of Malaysian ESL undergraduates after completing a vlog-based English oral communication project that utilised GenAI chatbots as optional support tools. Thematic analysis of 34 reflections revealed how learners used chatbots to brainstorm ideas, draft scripts, refine storylines, and receive guidance in editing and production. The findings demonstrate how learners engaged with multimodal compositional processes, from planning and scripting to filming and editing, while developing autonomy, confidence, and technical competence. Drawing on self-determination theory (SDT) and the zone of proximal development (ZPD), the analysis shows how GenAI chatbots functioned as cognitive scaffolds and co-creators, supporting the autonomy, competence, and relatedness posited by these theories, while peer and human examples remained central sources of modelling. Learners valued vlogging as a space for self-expression and identity work, yet challenges were reported in relation to AI accuracy, time, and technical constraints.
Digital Literacy in Motion: Teacher and Learner Sense-Making in Singapore’s Chinese-as-a-Second-Language Classrooms #4554
As digital tools, multimodal communication, and AI increasingly shape the landscape of language learning, the question for educators is not simply whether technology is used, but how digital literacy is understood, enacted, and negotiated within real classroom ecologies. This presentation reports findings from Phase 1 of a multiphase mixed-methods study investigating evolving digital literacy practices in Singapore’s secondary Chinese-as-a-second-language (CSL) classrooms. Drawing on classroom observations, teacher interviews, student focus groups, and survey data, the analysis is guided by a digital literacy pedagogical framework (Kurek & Hauck, 2014). Findings reveal an emerging yet uneven landscape of practice. Forward-thinking, pioneering teachers interpret digital literacy as encompassing multimodal composing, critical awareness, and AI-mediated meaning-making, yet their enactments are shaped by tensions between creative exploration and assessment pressures, and between structured scaffolding and learner autonomy. Students respond positively to multimodal tasks, but often engage superficially with AI tools and struggle to connect classroom digital practices with everyday digital engagements. Rather than a linear narrative of innovation, the results illustrate a dynamic interplay of affordances, constraints, and adaptive sense-making. Many challenges observed are systemic, rooted not in individual actors but in broader structures. The study contributes to ongoing CALL discussions on designing contextually grounded and multiliteracies-informed digital literacy pedagogies.
Teaching Document Design in a Digital Communication Course: An Experiential Approach #4555
As communication increasingly involves both textual and visual literacy, language classrooms still tend to prioritize text, leaving visual communication skills underdeveloped. This presentation reports on an attempt to teach document design in a 15-week Digital Communication course. Grounded in aesthetic instructional design, theory-based document design, and agile iteration, the course progressed from learning design fundamentals to hands-on projects, with several layers of scaffolding provided along the way. Students created a document set consisting of a flyer and a promotional video for an authentic campus activity using Canva, a novice-friendly digital design tool.
After outlining the course design, the presentation reflects on the benefits and challenges that emerged during its implementation. One key finding is that working on a real-life campus activity can meaningfully engage students. At the same time, while implementing an agile approach in an EFL context is feasible, it requires efficiency in teaching foundational concepts as well as flexibility in responding to rapidly evolving tools such as Canva. The presentation concludes by arguing that, even in the age of generative AI, such coursework continues to offer valuable learning experiences whose outcomes may be more lasting and meaningful than relying on machines to produce technically impeccable design artifacts.
Eigo.AI: Assisting and Enhancing Human Language Learning #4559
AI should assist and enhance human-centered language teaching and learning, not replace it. Established principles in second language acquisition remain central: learners need sustained exposure to comprehensible input, meaningful opportunities for output and interaction, structured fluency development, and timely feedback. Eigo.AI is designed around these principles. The platform offers a comprehensive library of AI-generated, human-proofread lessons across proficiency levels. Each lesson integrates listening, reading, speaking, and writing through seven structured activity types, ensuring balanced skill development. Students receive immediate feedback on pronunciation, writing, and discussion performance, while teachers maintain full oversight through detailed tracking tools.
Eigo.AI also addresses a practical constraint in most programs: time. Reaching advanced proficiency typically requires more than 2,500 hours of focused study, far beyond what classroom instruction alone can provide. The platform extends structured learning beyond class hours while keeping progress visible and measurable. Teachers can monitor engagement, review student output, and intervene when needed. In this way, AI supports informed instructional decisions without displacing teacher expertise. This session will demonstrate how Eigo.AI combines established pedagogy with practical implementation, offering institutions a scalable and classroom-ready solution that strengthens learner development and preserves teacher agency.
Turn It Off: The Hidden Costs of AI Policing #4561
This paper argues that the use of AI detectors carries significant hidden costs. These harms fall into four broad categories: harms to personhood, harms to knowledge, harms to institutional justice, and harms to relationships. Focusing on AI detectors in academic integrity contexts, I examine how these systems risk false accusation, shift the burden of proof onto students, and intensify the unequal patterns of surveillance already borne by multilingual writers, disabled students, and students of color. Drawing on recent research on AI detector inaccuracy and bias, as well as institutional decisions such as Vanderbilt University’s disabling of AI detection, I argue that these tools are not merely imperfect but ethically dangerous. Methodologically, the paper takes a philosophical approach at the intersection of the philosophy of AI and the philosophy of education, using a utilitarian framework to identify the predictable harms produced when AI detectors’ probabilistic outputs are treated as evidence and educator suspicion becomes grounds for accusation. The paper concludes that we should not build unreliable and discriminatory policing mechanisms into educational life. Humane responses to generative AI should move away from policing and toward assessment design and integrity practices grounded in transparency, accessibility, trust, and pedagogical care.
Development of an AI-Powered Tool for Mastering Nonverbal Communication using Web Browsers #4562
We are developing an AI-powered, web-based system to analyze the nonverbal communication (NVC) skills of EFL students in Japan. Teaching verbal communication (VC) skills is challenging due to typical student-to-teacher ratios of 40:1, with NVC requiring even more individualized instruction and feedback. In this case study, we focus on the AI-enhanced video analysis tool we are developing to support the final system. We analyze the facial expressions (perceived emotions and gaze/engagement) and body language (head and facial movements) of EFL students (aged 18–20) using standard webcams and correlate these data with online assessments from a diverse group of human judges. We conduct mock (simulated) online job interviews, as this method allows us to achieve two goals: 1) primarily collect data to improve the functionality of our NVC training system, and 2) secondarily provide students with feedback that can assist them in other online interview contexts, such as TOEFL, IELTS, and future academic or professional interviews. We are developing our final system in-house and will showcase our software methodology and its current developmental stage. Once complete, our system will enhance students' NVC skills by increasing their awareness, motivation, and self-efficacy, while improving their online interview performance.
Navigating Perplexity in Listening: Cognitive and Affective Responses of College Students #4563
Listening is a critical skill in second language acquisition, yet it remains one of the most challenging for English as a Foreign Language (EFL) learners, particularly college students who are developing higher-level academic listening skills. Traditionally, studies on listening comprehension focus on identifying barriers such as fast speech, unfamiliar vocabulary, or accents that hinder understanding. While these studies provide valuable insights, they often frame listening difficulties as external obstacles, overlooking the internal cognitive and affective processes learners experience while navigating these challenges. This study shifts the focus from identifying listening barriers to examining perplexity as a cognitive–affective experience among college-level EFL learners. Perplexity refers to moments of confusion, uncertainty, or mental overload that occur when learners process spoken English.
Using a qualitative research design, the study involves undergraduate students enrolled in an English course where podcasts are used as material for observation. Data are collected through semi-structured interviews and classroom observations of podcast-based listening activities submitted as part of students’ coursework. The findings aim to provide insights for designing listening tasks and assessment practices that better support learners during moments of confusion in L2 listening.
Teaching Students to Challenge GenAI: A Classroom Intervention for Critical AI Literacy #4564
As Generative AI (GenAI) becomes increasingly embedded in language classrooms, many EFL learners risk becoming passive consumers of algorithmic outputs, often deferring to AI’s perceived “native-like” authority. Such reliance can undermine learner agency and reinforce standard language ideologies. This study reports on a classroom-based pedagogical intervention, Speaking Back to AI, designed to help Japanese university EFL students critically engage with GenAI tools. Drawing on the APSE model of Critical AI Literacy, the intervention was implemented as a short module within a regular EFL course. About 50 students participated in iterative “red-teaming” activities, evaluating GenAI-generated content for hallucinations, linguistic stereotyping, and pragmatic misalignment with local norms. Rather than revising their work to align with AI suggestions, students produced multimodal artifacts, such as annotated AI outputs and short reflective digital products, that corrected, resisted, or recontextualized problematic outputs. Data sources include screen-recorded red-teaming tasks capturing students’ testing and negotiation with GenAI outputs, reflective journals, and pre- and post-intervention surveys. Preliminary observations suggest increased learner awareness of AI limitations and greater willingness to assert authorial voice when negotiating with GenAI outputs. The presentation concludes with reflections on challenges and how similar Critical AI Literacy interventions can be adapted for other EFL contexts.
Learners’ engagement with the affordances of dialogic multimodal feedback for university students' L2 academic writing: An embedded single case study in the Vietnamese context #4565
Given the potential of dialogic multimodal feedback (DMF) for L2 writing and the lack of research on students’ engagement with it, this study drew on Affordance Theory to explore how university students engage with the affordances (learning possibilities) of DMF (screencast feedback with dialogic features such as commenting or reacting) for their L2 academic writing, and the factors influencing that engagement. This case study took place in a class of 31 students studying an EMI course involving L2 writing activities at a Vietnamese university. Three major data collection instruments were used: a student questionnaire, in-depth student interviews, and interaction logs with DMF. Findings reveal that students actively engaged with various affordances of DMF behaviorally, cognitively, agentively, and emotionally in different ways, such as rewatching, adjusting playback speed, and proactively interacting with DMF by asking questions. Emotional engagement is both a prominent form of engagement and a mediating factor, interacting with the other engagement dimensions. Interestingly, though most students reported active engagement with DMF, the interaction logs revealed certain dissimilarities in their actual engagement. Finally, cultural factors and learner differences are two potential factors influencing students’ engagement with DMF, especially in Vietnamese collectivist culture.
Moving towards pedagogy of care: A case study of Vietnamese university students' emotional engagement with interactive video feedback for L2 academic writing #4566
Since teaching involves emotional labor, understanding the affective dimension of student engagement with digital feedback is essential. Following the pedagogy of care, this study investigated students’ emotional engagement with interactive video feedback (IV-feedback) for L2 academic writing, that is, screencast feedback that allows students to interact directly with the videos via commenting or reacting features. This case study was conducted in a class of 31 students taking an EMI course involving L2 academic writing activities at a Vietnamese university. Three main data collection instruments were used: a student questionnaire, in-depth student interviews, and teaching journals. The findings show that IV-feedback allows learners and lecturers to embrace the pedagogy of care in a culturally responsive way. Most students expressed more positive emotions toward IV-feedback than text feedback, largely thanks to exposure to teacher tone, voice, and social presence. Notably, students felt safer and more willing to interact with IV-feedback than face-to-face feedback, as IV-feedback creates a safer dialogic space for raising questions, especially in Vietnamese collectivist, face-saving culture. These positive emotions were perceived as motivation to revise their writing, potentially leading to improved L2 writing skills. However, some students also expressed certain negative emotions, suggesting the need for careful implementation of IV-feedback.
EFL Learners’ Perceptions of Speaking Anxiety: In-Person, Online, and Virtual Reality #4567
Virtual reality (VR) has attracted growing interest in computer-assisted language learning for its potential to provide immersive communication experiences. While prior research has often examined VR as a standalone tool, fewer studies have directly compared learners’ affective perceptions of VR with more established communication modalities. In particular, how foreign language anxiety (FLA) and interactional preferences differ across in-person, online, and VR-based communication remains underexplored. This study reports findings from a survey of 181 Japanese university EFL learners, mostly English majors, examining speaking-related FLA, attitudes toward communication modalities, and partner preferences (friend vs. stranger) across three contexts: in-person interaction, synchronous online communication (e.g., Zoom), and immersive VR. The focus is on learners’ preconceived perceptions rather than post-intervention experiences. Quantitative analyses are complemented by open-ended responses to better understand learners’ reasoning. Overall patterns indicate clear modality-based differences in perceived anxiety, acceptance, and preferred interactional conditions. While in-person communication is strongly favored, technology-mediated environments, particularly VR, appear to influence how learners evaluate social risk and partner choice. The presentation will outline these patterns, discuss implications for affective factors in technology-mediated communication, and consider how modality choice may shape learners' willingness to communicate.
Building AI workflows: A practical approach for language teachers #4568
Many language teaching tasks work better through structured AI workflows that coordinate multiple tools rather than relying on a single AI tool. For example, generating realistic multi-character conversations, providing scaffolded feedback that adapts to student responses, or managing feedback across many students often requires a sequence of coordinated AI actions. A typical workflow might allow students to record a spoken dialogue, have the audio automatically transcribed, receive targeted feedback on vocabulary, grammar, and fluency, and then obtain an overall evaluation or grade.
This practice-oriented session presents how teachers can build such workflows using AI tools to complete multi-step pedagogical tasks. Co-presented by a language teacher with no programming background and an app developer, the session presents classroom-tested examples including generating multi-character dialogues, building personalised feedback cycles for student writing, and managing scalable homework correction.
Participants will see how chaining instructions allows teachers to guide AI behaviour more reliably than relying on a single tool. The session also demonstrates how local tools can be combined with online services to improve privacy and reduce platform lock-in.
Participants will leave with practical examples and a simple framework they can adapt to their own teaching contexts.
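The chained workflow the session describes (record a dialogue, transcribe it, generate targeted feedback, then an overall grade) can be sketched as a small pipeline in which each stage’s output feeds the next. The sketch below is illustrative only: the function names, the `WorkflowResult` container, and the lambda stubs are invented, and in practice each stage would call a speech-to-text or LLM service.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowResult:
    transcript: str = ""
    feedback: dict = field(default_factory=dict)
    grade: str = ""

def run_workflow(audio_path, transcribe, give_feedback, grade):
    """Chain three AI steps: each stage's output feeds the next one."""
    result = WorkflowResult()
    result.transcript = transcribe(audio_path)
    for focus in ("vocabulary", "grammar", "fluency"):
        result.feedback[focus] = give_feedback(result.transcript, focus)
    result.grade = grade(result.transcript, result.feedback)
    return result

# Stub stages stand in for real ASR and LLM calls.
demo = run_workflow(
    "student01.wav",
    transcribe=lambda path: "I goed to the store yesterday.",
    give_feedback=lambda text, focus: f"notes on {focus}",
    grade=lambda text, fb: "B",
)
print(demo.grade)  # → B
```

Because the stages are passed in as plain callables, a teacher can swap a local transcription tool for an online service at one point in the chain, which is one way the privacy and lock-in concerns mentioned above can be addressed.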
Using AI to Scaffold Readings with Embedded Readings #4569
Scaffolding reading can be challenging, especially for busy teachers. One major challenge is providing students with enough encounters with new vocabulary across varied and meaningful contexts for learning to occur. One approach that has become popular in Comprehensible Input–based teaching is embedded readings, developed by practitioner Laurie Clarcq: a series of leveled texts that recycle the same target vocabulary in meaningful ways while gradually increasing in complexity and difficulty. When creating embedded readings, AI can be a powerful tool to support learners in acquiring language and to save teachers time. In this presentation, we will show how AI can be used to quickly create embedded readings, how teachers can guide students to use AI appropriately for this purpose, and how these tools can support language learning. We will also compare embedded readings to other reading passages created by AI, emphasizing tradeoffs. Finally, we will discuss possible challenges, such as AI hallucinations and maintaining effective vocabulary control and repetition.
Fostering Spoken Corpus and AI Literacy for Preservice English Teacher Training #4570
This qualitative study initiated a 5-month online yarning circle (a relational Indigenous methodology fostering mutual help through reflection and discussion [Shay, 2021]) in teacher training to enhance English pre-service teachers’ (EPTs’) corpus and AI literacy in English oral teaching skills. Three doctoral students, each with over four years of applied linguistics training, served as mentors and were paired with three undergraduate EPTs. The mentors facilitated weekly discussions with their mentees on key indicators that influence English oral performance and on communication strategies for English oral communication, and co-developed corpus- and AI-integrated English speaking lesson plans.
Qualitative results confirmed the effectiveness of this online yarning circle. Both mentors and mentees highlighted the benefits gained from their partnerships in English oral teaching. The EPTs reported growth in their linguistic knowledge, corpus and AI literacy, and ability to integrate linguistic knowledge with educational technology to design English speaking lessons. Mentors, in turn, learned how to develop teacher-friendly resources from the EPTs. For instance, mentors provided guidelines on key indicators that influence English speaking performance, and the EPTs offered suggestions to make the indicators more suitable for secondary school contexts based on their teaching practice in local schools.
Rethinking Summary-Writing in the Age of Generative AI: "Old" vs "New" #4571
In the 2023/2024 academic year, students in a university-level academic writing course completed a research project which included an annotated bibliography assessment. At the end of the course, 683 students responded to an open-ended survey about their use of generative AI during the assignment. Half of the students reported using AI to complete their annotated bibliography, and some provided descriptions of how AI was used. The number of students who reported submitting wholly or partly AI-generated summaries raised questions about revising summary-writing tasks in future courses, including whether summarizing should still be taught and assessed. This presentation will briefly look at “old” summarizing skills like reading, annotating, note-taking, paraphrasing, condensing, and citing before exploring possibilities for developing “new” summarizing skills in the age of AI (e.g., critically comparing AI-generated summaries, verifying AI content, developing students’ own voice as authors, maximizing learning with AI summarizing tools). The presentation will conclude with pedagogical implications by inviting attendees to share their thoughts on the future of summary writing in academic contexts.
AI Exploration Projects for English Language Learners #4575
This presentation reflects on a project that aimed to help Japanese university students use artificial intelligence (AI) creatively and critically for English learning. The project involved in-class group work and out-of-class self-study. In the initial stage, groups created a three-week plan, selecting AI-based self-study tasks that addressed self-selected learning objectives. Each week they individually put the plan into action and recorded study notes outside of class, then met their group and teacher in class to reflect on their experiences, brainstorm solutions for problems, and modify their plans when necessary. At the project’s conclusion, students submitted a reflective report about their experiences and learning outcomes. Twenty-six students from eleven groups consented to their coursework being treated as data. Groups with different learning goals (conversation skills, writing, reading, vocabulary) will be presented, drawing on examples of learning plans, problems faced, problem-solving strategies, and students’ reflections on using AI as a learning tool. The presentation will also include reflections on project design, implementation, and initiatives taken by the teacher to actively support students, along with changes planned for future projects. This study highlights the importance of teacher guidance for student exploration of AI and could inform practice in other contexts.
Data Collection Issues in Subtitling Process Research #4578
This study investigates EFL students’ subtitling workflows for self-produced mini documentaries, drawing on Romero-Fresco’s (2013) accessible filmmaking framework and Tardel’s (2023) subtitling model. Participants are students in an elective English course at a Japanese national university, where they conduct an interview in Japanese, edit the video, and create reverse subtitles in English: authentic tasks that involve multimodal literacy skills. The research employs an in-depth qualitative case study design with thick description to ensure reliability (Riege, 2003), supplemented by quantitative measures. The pilot (n=1) failed when think-aloud protocols (TAPs), the original main instrument, proved unsuitable due to scheduling constraints, TAP limitations for complex multimodal tasks, and language barriers, yielding insufficient workflow data. For the main study (n=2 focal participants), we pivoted to triangulation (Yin, 2018): home-based extended screen recordings for ecological validity and depth, in addition to field notes, logbooks, learning diaries, and interviews. These are bolstered by materials from consenting classmates (n=20) to strengthen the findings. Preliminary analysis suggests this combination overcomes the pilot’s failures, enabling richer data on subtitling processes and demonstrating how methodological setbacks can strengthen research design.
Generative AI in L2 Summary Prewriting: A Counterbalanced Classroom Study #4579
This study examines the effects of generative AI assistance during the prewriting phase of a second-language (L2) summary writing task in a university English course. Fifty-five Japanese EFL students participated in a counterbalanced within-class design involving two summary tasks completed one week apart. In each task, learners spent 12 minutes on computers studying the script of a listening homework assignment completed the previous week, then wrote a handwritten summary of approximately 60 words from memory. In the AI condition, learners used generative AI tools (e.g., ChatGPT or Copilot) during prewriting; in the non-AI condition, they relied mainly on machine translation. Learners completed two listening-script summaries and switched conditions between tasks.
Summary quality was assessed for content coverage, syntactic complexity, and formal accuracy. Vocabulary learning was measured using a simplified Vocabulary Knowledge Scale administered before and after each task and on a delayed post-test. Post-task surveys and self-reported AI chat logs documented tool use. Learners used AI to identify key ideas, simplify language, generate model summaries, and check drafts, while students relying mainly on machine translation (often Google Translate) showed greater gains in key vocabulary. Full results on writing quality and relationships between AI usage strategies and outcomes will be presented.
Beyond the Output: Scaffolding Critical Engagement with GenAI Across Two Academic English Assignments #4580
This presentation reports on a four-stage pedagogical intervention in which 50 EAP students collaborated with generative AI across two assignments. Critical engagement was fostered through: (1) L1/L2 drafting without AI; (2) using AI for ideas/language while preserving personal voice through review; (3) soliciting AI feedback via a teacher-designed prompt aligned with rubric criteria; and (4) producing a final draft after critically reviewing AI feedback. Moving beyond linguistic outcomes, this study examines how students navigated human-AI collaboration, drawing on Ng et al.’s (2021) four-aspect framework of AI literacy (know and understand, use and apply, evaluate and create, and ethical issues). Data include students’ written assignments, stimulated recall interviews with seven student groups stratified by AI feedback solicitation patterns and grade, and students’ evidence-based reflections on future AI use intentions. This longitudinal design tracks how initial experiences shape evolving AI literacy. The presentation argues that "success" in CALL must be redefined, shifting from measuring polished products to valuing the development of AI literacy dimensions: prompt crafting, evaluative judgment, agential integration, and ethical awareness.
Reference: Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Cross-Cultural Learning in CALL: How Young Learners Gained Motivation #4581
Young learners aged 13–15 have been learning English at this small private school for 90 minutes every Wednesday night, but they have struggled to stay motivated in their language learning. The biggest issue was that they were bored with speaking English with the same Japanese classmates, so the presenter started preparing global projects to encourage active learning. In 2024, ten students undertook a homestay project in Taiwan and exchanged emails with students in Norway. In 2025, they engaged in cross-cultural projects with students in Sweden (writing letters) and Ukraine (video letters on Padlet). To examine changes in their motivation, questionnaires and interviews were conducted to collect qualitative data. All students stated that these cross-cultural projects were meaningful, and that they were motivated to produce English content and make more effort to improve their language skills. However, because only qualitative data were analysed, the claim that cross-cultural learning in CALL increased their motivation remains tentative. Nevertheless, it is worth sharing information about the activities of this small private school with a global focus in CALL, as the students were inspired by their counterparts around the world and this changed their approach to learning at the school.
Empowering Listening Input with AI-Generated Audio: Exploring TTS Applications for EFL Classrooms #4582
Input is foundational to second language acquisition. With the explosion of digital media, practically unlimited resources are now available to both learners and educators. Yet in practice, while a rich ecosystem of graded readers and skill-focused textbooks supports abundant reading input, resources for listening are comparatively scarce, with materials often tied to reading passages, test prep, or authentic content that may not suit a given class. This presents a challenge for educators seeking listening materials with appropriate proficiency, length, content, and language focus. In this context, recent advances in text-to-speech technology (particularly improved naturalness, customizable tones, and multi-speaker support) have revolutionized the ease with which educators can create highly tailored listening materials. This presentation will explore the practical integration of AI-generated audio in EFL classrooms, ranging from extensive listening to focused classroom tasks. The presenter will share his own successes and challenges with listening materials such as example dialogues using target vocabulary for fluency practice, alternative versions of reading passages for comprehension tasks, and audio prompts for test items. Survey results from first- and third/fourth-year Japanese university students on the perceived usefulness, engagement, and preferred applications of AI-audio listening tasks will also be presented.
Workplace-Specific Language Learning for Migrant Workers Using Digital Games #4583
This presentation reports on a research-and-development project conducted in a Danish slaughterhouse where migrant workers constitute a substantial portion of the workforce and where language and communication difficulties are a persistent feature of everyday work practices. For many employees, formal Danish language education has not been accessible or sufficient to meet highly situated, task-specific communicative demands. To address this gap, a digital, game-based language learning application was developed based on extended ethnomethodological fieldwork. Language learning activities were embedded within a virtual slaughterhouse environment that reflects workers’ procedural and interactional realities, enabling engagement with vocabulary directly tied to daily tasks. The presentation focuses on both the ethnographic foundations of the design process and preliminary findings from implementation. Quantitative analyses of in-game interaction data and pre- and post-test vocabulary measures indicate significant vocabulary development following gameplay. Regression analyses identify pre-test proficiency, in-game performance, task difficulty, and the temporal distribution of gameplay as significant predictors of vocabulary gains, while clustering analyses suggest that learners with lower initial proficiency benefited most from more widely spaced gameplay. Overall, the findings demonstrate how ethnographically informed, performance-dependent, and temporally distributed game-based interactions can support meaningful workplace vocabulary development across proficiency levels.
A Research Synthesis on the Use of Generative AI in Non-English L2 Settings #4584
Generative artificial intelligence (GenAI) has received considerable attention in second language (L2) research since the release of ChatGPT. A recent systematic review by Li et al. (2025) identified 144 peer-reviewed papers on GenAI in the L2 context within two years. While this review highlights the affordances and constraints of GenAI, most of the included studies focused on the L2 English context. Therefore, more research is needed to understand how learners of non-English languages perceive and experience GenAI in L2 learning settings. This presentation reports on a study that addresses this gap in the literature through a synthesis of GenAI studies on L2 learning in non-English contexts. Web of Science will be used to search for primary studies involving GenAI and the L2 learning of non-English languages between 2022 and 2026. Data will be reported following PRISMA guidelines. Titles, abstracts, and full texts will be screened using predefined inclusion criteria. Methodological features of the studies will be analyzed to identify trends. The findings from the included studies will be synthesized using thematic analysis to identify the benefits and challenges associated with the use of GenAI in the teaching and learning of non-English L2s. Implications of the study will also be discussed.
AI-Mediated Writing Behaviors and the Development of Metacognitive Revision Skills in L2 Writing in Higher Education #4585
The rapid development of Artificial Intelligence (AI) tools in recent years has substantially transformed learners’ writing behaviors in English as a Foreign Language (EFL) contexts, shifting the focus from traditional linear drafting to recursive, AI-mediated writing processes. This emerging workflow involves iterative text revision, idea composition, AI-assisted consultation, and critical acceptance or dismissal of algorithmic suggestions. While these evolving behaviors highlight AI’s role in enhancing lexical output and reducing cognitive load, the long-term impact on learners' metacognitive development remains a critical gap. In this study, I investigate how these AI-mediated writing behaviors influence learners’ metacognitive agency, specifically focusing on self-correction behaviors as an indicator of cognitive engagement and modification patterns.
To examine the effect of the AI-mediated writing process on students’ metacognitive revision of argumentative essay writing, I adopted a mixed-methods approach to explore learners' perceptions and their writing outcomes. The findings reveal the cognitive mechanisms underlying AI-assisted correction in this writing task, offering pedagogical insights into the evolution of human agency and automated feedback in the EFL classroom. The challenges and the implications of utilizing AI in EFL writing classrooms will be further discussed.
Image Generators as Feedback for Descriptive Writing: Successes and Challenges #4586
Advances in AI-powered photorealistic image-generating tools provide the opportunity to make descriptive writing visible to students and teachers alike. EFL students often struggle to write detailed descriptions because of a lack of opportunity to write for an audience, delay in feedback from instructors, or an overabundance of caution. This practice-based presentation describes one instructor’s attempt to use Adobe Firefly to improve student writing on an academic writing course at a private Japanese university based on an idea from Warner (2024). Using stock photographs from Unsplash, 74 students were tasked with describing a target image in four writing and feedback cycles. The short feedback cycle enabled the students to notice, with more precision, how their own writing had been interpreted. The task succeeded in providing motivation for the participants, and most students wrote increasingly detailed descriptions. However, the AI image generators failed to help students with certain aspects of grammar, spelling and adjective word order. Additionally, for future success, custom-made programmes may be needed to prevent rewarding generic descriptions over specific and accurate ones. This presentation will interest those who wish to use AI to improve student writing, and feedback on how to develop the idea will be welcomed.
Examining Linguistic Shifts in Student Writing Before and After the Launch of ChatGPT #4587
The prevalence of Large Language Models (LLMs), such as ChatGPT, has sparked debate regarding their influence on student academic writing. While existing literature has predominantly investigated LLM usage through quantitative methods—such as word frequency statistics and probability-based detection—few studies have systematically assessed the potential impact of these tools on the linguistic characteristics of student writing. To bridge this gap, the researchers analyzed 1,000 research papers written by fourth-year students in humanities disciplines at universities in Hong Kong. Using a self-compiled corpus, this study compared papers written before and after the launch of ChatGPT, specifically examining two linguistic features: the frequency of LLM-preferred vocabulary and syntactic complexity. The study used the Tool for the Automatic Analysis of Syntactic Sophistication and Complexity (TAASSC) to calculate 14 measures of syntactic complexity, which can be categorised into five types: length of production unit, amount of subordination, amount of coordination, phrasal structures, and sentence complexity. The results indicate a significant rise in LLM-preferred lexical patterns, demonstrating the influence of generative AI on student argumentative writing. Preliminary findings also indicate that syntactic complexity declined, pointing toward simpler sentence structures. These findings provide valuable insights for educators seeking to distinguish between human and AI-generated text.
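TAASSC’s 14 measures are beyond the scope of a short sketch, but the simplest family it draws on, length of production unit, can be illustrated with a words-per-sentence proxy. This is an editorial example with invented sentences, not the study’s actual instrument or data:

```python
import re

def mean_sentence_length(text: str) -> float:
    """Mean words per sentence: a crude proxy for one length-based complexity measure."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

# Invented before/after examples illustrating the kind of decline reported.
before = "Although the policy was controversial, it was adopted because it promised growth."
after_ = "The policy was controversial. It was adopted. It promised growth."
# A drop in mean sentence length is consistent with a shift toward simpler syntax.
```

Real length-of-production-unit measures operate on parsed units such as clauses and T-units rather than raw sentences, which is why dedicated tools like TAASSC are used.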
AI-Supported Inclusive CALL for Autistic Learners in Pakistani ESL Classrooms #4588
This study examines how AI-enhanced Computer-Assisted Language Learning (CALL) tools can improve English language learning among autistic students in Pakistani inclusive classrooms. The aim is to examine whether adaptive AI characteristics—such as individualized pacing, multimodal interfaces, and computer-generated feedback—can facilitate engagement, comprehension, and emotional regulation among learners with autism spectrum disorder, while also benefiting neurotypical learners. The study adopts a small-scale mixed-method intervention design conducted in an urban Pakistani ESL context with 30 participants, including autistic and neurotypical learners. AI-based language learning applications were implemented over a period of eight weeks. Quantitative data were collected through pre- and post-language assessments, while qualitative data included classroom observations, teacher interviews, and parental feedback. The design incorporated ethical considerations and cultural relevance. The findings indicate improved vocabulary retention, reduced anxiety during language activities, and greater learner independence among autistic students. Teachers also reported that AI-supported tools did not increase instructional workload and supported differentiated instruction in inclusive classrooms. This study contributes to the field of CALL by highlighting AI-driven inclusive language learning within a Global South context. It offers practical implications for designing inclusive CALL environments and policy recommendations for integrating special education within mainstream ESL classrooms.
Perceptions, Acceptance and Use of AI Tools in EFL Learning: Evidence from a Regional University #4589
The increasing availability of artificial intelligence (AI) tools has begun to reshape English as a Foreign Language (EFL) learning in higher education. However, students’ engagement with these tools extends beyond practical use to include evaluative considerations regarding their academic appropriateness. This study adopts a mixed-methods design to investigate EFL students’ perceptions, acceptance, and use of AI tools at a regional university. Quantitative data were collected through a questionnaire administered to approximately 300 undergraduate EFL students, examining multiple dimensions of perceptions, levels of acceptance, and self-reported AI tool use in EFL learning. Descriptive and correlational analyses were conducted to explore relationships among these constructs. Qualitative data from open-ended responses and follow-up interviews further illuminate students’ reasoning when reflecting on AI-supported practices. The results indicate generally mixed perceptions of AI tools, with acceptance and use varying among students. Significant associations were found between perceptions, acceptance, and self-reported use. Qualitative findings reveal that students’ views are shaped by considerations related to academic integrity, perceived educational values, and contextual norms. The study highlights the mediating role of acceptance and contributes to discussions on what constitutes acceptable, legitimate, and responsible AI use in EFL education within regional university contexts.
Exploration of Indonesian EFL Teachers’ AI Competency to Design a Research-Based Professional Development Program #4590
UNESCO’s AI competency framework for teachers (Cukurova & Miao, 2024) describes the teacher’s AI competency in five aspects: human-centred mindset, ethics of AI, AI foundations and applications, AI pedagogy, and AI for professional development. At the heart of teacher professionalism, AI pedagogy is arguably the most important aspect, as it equips teachers to integrate AI purposefully and effectively into their teaching. Several studies on the implementation of AI in teaching and learning have called for teacher training to ensure successful AI pedagogy. However, to develop competent AI pedagogical capacity, teachers need to first acquire the other four aspects outlined in the framework. A preliminary study of other aspects is necessary to develop AI pedagogy training that aligns with their current level of AI literacy. The study aimed to discover the state of Indonesian EFL teachers' AI literacy and their most pressing training needs. Through a wide-scale online survey of 578 English teachers in Indonesia, the study gained insights into the current level of teachers’ AI literacy and analysed their teacher professional development needs, which contributed to the development of research-based AI pedagogy training aligned with the field's realities.
AI from Day One: Linguistic Control in a Reading Comprehension Tool for Absolute Beginners of Spanish #4591
Generative AI offers new opportunities for second-language absolute beginners, but maintaining linguistic control over AI-generated output remains a challenge. This presentation introduces and evaluates an AI-based tool designed to address this issue.
The tool was introduced in a Spanish entry-level course after three weeks of instruction and is continuously updated to align with curricular progression. To date, it has provided on-demand reading comprehension activities to more than 700 learners at Xi’an Jiaotong-Liverpool University (China). The system operates as an AI agent that integrates multiple workflows and does not require model training or fine-tuning. Instead, it builds on the authors’ previous research on prompting techniques for enforcing linguistic constraints.
The presentation focuses on two areas. First, it outlines the tool’s design strategies for maintaining linguistic control, managing complexity, and increasing output variability. Second, it reports findings from an evaluation of 360 AI-generated texts produced across different curriculum stages and text types.
Results show high levels of lexical and grammatical control, providing evidence that linguistic control can be scaffolded and sustained in AI tools for absolute beginners. Attendees will gain practical insights into designing AI systems for absolute beginner-level instruction without programming backgrounds, with applications transferable to other languages and proficiency levels.
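As a hypothetical illustration of how lexical control might be verified without model training (this is not the authors’ actual system), a post-generation check can confirm that AI output stays within a curriculum word list before it reaches learners:

```python
import re

# Hypothetical curriculum vocabulary for the opening weeks of a beginner Spanish course.
ALLOWED = {"hola", "me", "llamo", "ana", "soy", "de", "españa", "y", "tú",
           "buenos", "días", "cómo", "estás", "bien", "gracias"}

def out_of_list_words(text: str, allowed: set[str]) -> list[str]:
    """Return the words in an AI-generated text that fall outside the allowed list."""
    words = re.findall(r"[a-záéíóúüñ]+", text.lower())
    return [w for w in words if w not in allowed]

generated = "Hola, me llamo Ana y soy de España."
violations = out_of_list_words(generated, ALLOWED)
# An empty list means the text respects the lexical constraint; otherwise the
# workflow could regenerate the text with a stricter prompt.
```

In an agent-style workflow of the kind the abstract describes, a check like this would sit between the generation step and the learner-facing output, triggering regeneration on failure.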
Interactive Oral Assessment: A Catalyst for Enhancing Academic Integrity and Critical Thinking in AI-Assisted Learning #4593
This qualitative study explores Interactive Oral Assessment (IOA), real-time oral interaction that helps students critically respond to AI-generated content and synthesize and reconstruct their written work beyond AI suggestions. The IOA questions, designed using the six levels of Bloom’s taxonomy, are intended to enhance academic integrity among EFL learners in their writing classes. Ten second-year English majors at a university in Vietnam participated in this study. Data from student reflections, semi-structured interviews, students’ writing samples, and AI usage history were analyzed. Findings suggest that IOA positively influences students’ responsibility regarding their use of AI-powered tools to aid their writing processes. Rather than solely relying on AI to generate answers related to vocabulary or sentence structures (Level 1 of Bloom’s taxonomy), participants began to prompt questions that reveal their ability to analyze or evaluate the ideas provided (Levels 3 or 4). Additionally, the study confirms the positive impact of IOA in enhancing the originality of student work. The importance of this study lies in its suggestion to develop IOA as a robust assessment framework that leverages AI-powered writing tools while upholding academic integrity—emphasizing students’ responsibility and ownership in the EFL writing classroom.
Is Your Chatbot Helping or Harming? A Framework for Designing Context-Specific AI Language Partners #4595
As AI chatbots become mainstream in education, instructors in particular are wondering if these chatbots help students prevail—or fail. In English for specific purposes (ESP) environments, chatbots offer a promising solution for the speaking practice gap in which students cannot get enough feedback from an instructor. However, generic chatbots like ChatGPT are developed to be “helpful” and will autocorrect errors, “interpret” unclear responses, and ignore silences, thereby doing the cognitive work for learners. There is still a debate on whether AI helps or inhibits ESL learning. Moreover, technostress still prevails for both students navigating unfamiliar learning methods and teachers forcing chatbots to fit contexts they were not built for. Here, I present a framework for designing AI learning partners for self-directed speaking practice in a large-class, tertiary-level ESP communication course (n=117). The tasks were designed to be adaptable to different role-playing scenarios, outcome targets, and language levels. I will share lessons learned from my struggles with the frequent changes in free chatbot services and my successes in engineering a “good enough” stable AI speaking partner for my ESP context. My experiences will help educators design or refine chatbot activities to match their learners, ESP objectives, and classroom reality.
Designing and Evaluating a Custom GPT for Conversation Practice with University Students #4596
This study explores how Japanese university students used a custom GPT chatbot to support English speaking practice in a first-year Basic English Communication course. The chatbot was built using conversation scripts and target vocabulary from the course textbook (Conversations in Class, Alma Publishing), allowing students to practice class content through structured AI-mediated dialogue. Participants were 14 first-year students. Over an eight-week period, two groups completed different forms of chatbot practice. One group conducted short daily conversations (3–5 minutes) using free ChatGPT accounts outside class. A second group participated in longer weekly sessions (10–15 minutes) in the instructor’s office, which were video-recorded for analysis. The chatbot provided immediate feedback through prompts, compliments, reformulations of student responses, and suggestions for more natural expressions. Quantitative data included chatbot usage logs, pre/post speaking tasks measuring fluency, and Likert-scale surveys on motivation and perceived usefulness. Qualitative data came from reflection prompts, interviews, and analysis of students’ behavior and body language during the recorded speaking sessions. Results showed that students were generally enthusiastic about using the chatbot to improve their speaking ability. However, some participants experienced difficulties interacting with the technology, which may reflect varying levels of technological literacy among students.
Examining the Effects of Multimodal CALL-Based Vowel Training on American English Vowel Production #4597
Japanese EFL learners frequently experience difficulty producing American English (AE) vowels, which are perceptually assimilated into native Japanese categories. While previous research has shown that auditory-based training can improve vowel identification, few studies have examined whether multimodal CALL-based training leads to measurable changes in learners’ vowel production. This study extends our earlier work using JSpace, an interactive web-based vowel-space training tool that integrates auditory input with visual representations of vowel quality and explores its role in supporting L2 phonological restructuring through multimodal input. Sixty Japanese university students were assigned to either a multimodal JSpace training group or a control group receiving traditional auditory-only identification training during a four-week period. Both groups completed identical pretest and posttest discrimination and production tasks targeting seven AE vowels presented in CVC contexts. In addition to perceptual measures, this study focuses on an acoustic analysis of learners’ vowel productions. Changes in first and second formant (F1–F2) frequencies were examined to assess pre- to post-training shifts in the acoustic vowel space. Results show that learners trained with JSpace exhibited greater shifts toward native AE vowel targets than the control group, indicating systematic restructuring of L2 vowel production. Pedagogical implications for CALL-based pronunciation instruction are discussed.
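The pre-to-post shifts described above can be quantified as movement in F1–F2 space. A minimal sketch with invented formant values (not the study’s data) might look like this:

```python
import math

def f1f2_distance(v1, v2):
    """Euclidean distance between two vowels in F1-F2 space (Hz)."""
    return math.hypot(v1[0] - v2[0], v1[1] - v2[1])

# Hypothetical formant values (F1, F2) in Hz, chosen only for illustration.
target_ae = (660, 1720)   # an American English vowel target
pre_train = (750, 1450)   # learner production before training
post_train = (700, 1600)  # learner production after training

shift = f1f2_distance(pre_train, target_ae) - f1f2_distance(post_train, target_ae)
# A positive shift means the learner's production moved closer to the target vowel.
```

Production studies typically normalize formants (e.g., by speaker) before comparing distances; raw Hz values are used here only to keep the sketch short.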
AI as Scaffold, Not Shortcut: Task Design for AI-Supported Oral English Classes #4598
This presentation explores a practical, classroom-tested approach to working with AI rather than against it in Oral English courses. Acknowledging that students already use AI tools, we argue that technological change reshapes learning demands rather than eliminating learning itself. A common classroom challenge is that students often focus on producing a “perfect” written script for presentations rather than developing spoken fluency. This emphasis encourages memorization, which can increase anxiety and reduce students’ confidence when speaking spontaneously. To address this issue, we redesigned a speaking task that prioritizes human cognition before AI involvement. Students first research authentic data, develop their own ideas and sentences, and organize speaking notes independently. Only after this stage is AI introduced through clearly bounded tools rather than a single solution. NotebookLM supports source-based research within transparent constraints, while ChatGPT provides prompt-driven language scaffolding focused on clarity, structure, and delivery rather than content generation. Together, these tools form an ethical workflow that reduces anxiety and improves speaking performance. The presentation concludes by discussing implications for assessment design, AI literacy, and teacher mediation in CALL, emphasizing that effective pedagogy lies not in banning AI, but in teaching students when, why, and how to use it responsibly and ethically.
Using AI-Mediated Video and Chatbot Tasks to Scaffold Speaking in CLIL: A Pilot Study #4599
This pilot study explores how technology-mediated speaking tasks can support the development of oral summarization in a CLIL (Content and Language Integrated Learning) course at a Japanese university. Using EnglishCentral as a digital platform, first-year EFL students engaged with video content and completed repeated oral summarization tasks over the course of a semester. In addition to video viewing and vocabulary-learning activities, learners interacted with MIMI, EnglishCentral’s AI discussion chatbot, to rehearse ideas and language prior to recording spoken summaries. The project investigates how the combination of authentic video input, AI-mediated interaction, and repeated recording tasks may support students’ emerging summarization skills. Particular attention is given to learners’ use of content-specific vocabulary, complexity of spoken language, and overall comprehensibility. As this study represents an initial investigation, qualitative analysis centers on six students who completed all required summary recordings. Audio recordings and platform data are examined to identify patterns of development across time. While large-scale statistical claims are not the goal of this phase, the study aims to demonstrate how CALL tools can scaffold spoken output in cognitively demanding CLIL contexts. Findings will inform the design of a subsequent larger-scale study investigating longitudinal development of oral summarization through AI-supported CLIL tasks.
How Students Prompt AI in EFL Writing: Evidence from a Japanese University Course #4600
Generative AI tools such as ChatGPT are increasingly used in university EFL writing classes, but teachers need clearer guidance on how to help students use AI in ways that support learning without replacing student writing. This six-week classroom-based study examined how students phrase requests to AI for writing help in a first-year Japanese university EFL academic writing course (N = 40). Students completed a process-writing assignment, timed writing tasks without AI in Weeks 1 and 6, and brief surveys. During AI-permitted stages, students submitted short prompt logs documenting their ChatGPT interactions. Analysis showed that students most often used AI for language-level support, especially grammar correction, vocabulary help, and translation. Higher-order prompts related to organization, explanation, and evaluation were less frequent but associated with deeper revisions and clearer understanding of appropriate AI use. Requests for text generation were rare but aligned with greater uncertainty about acceptable use. These findings suggest that the educational value of AI depends less on tool access and more on how students are guided to prompt it. This session will interest EFL writing instructors seeking practical, classroom-based guidance on shaping student AI interactions to support learning.
Using AI Tools to Create and Analyse Departmental Placement Tests #4601
Many universities outsource placement to “off-the-shelf” tests, but AI now lets schools produce localized, tailor-made assessments. This entry-level “how-to” presentation shows how AI tools were used to simplify the test design process. The presenters share their experience producing a new 50-item multiple-choice placement test for 300 students, measuring listening, grammar, and reading skills. They begin by outlining the motivations for creating the new test, continue by describing the prompt engineering techniques used to generate items, and conclude by discussing how AI and test makers can collaborate ethically and productively. The presenters created prompts for each section using LLMs (e.g., ChatGPT or Claude). Generated items were then human-reviewed and edited. Following a pilot test and prior to implementation, further edits were made through human-AI collaboration. Through trial and error, the new test showed improvement across multiple metrics, including item discrimination (ID) and test reliability (from α = 0.80 to α = 0.87). Items and content that did not perform well in context were easily changed. These results were achieved in roughly half the usual development time. This project serves as a blueprint for how departments can reduce development time, increase quality, and better assess their own target student population.
Lessons Learned From H5P: Creating Interactive Learning in Moodle #4603
This presentation reflects on the use of H5P, an open-source plugin for Moodle, to create engaging and inclusive activities for university English learners. Aimed at teachers new to H5P, it demonstrates step by step how Moodle can become an interactive and adaptive learning space through multimodal activities, supporting diverse learners.
Drawing on the presenter’s experience integrating H5P into an undergraduate English writing and oral presentation course at a Japanese university, together with insights from recent literature, the session highlights both the successes and limitations of H5P implementation. Examples such as interactive videos and branching activities illustrate how H5P can enhance learning experiences, motivation, learner autonomy, and feedback opportunities. The presentation also addresses challenges including time investment, usability constraints, and content or design overload, reflecting on how and why some implementations succeeded while others did not.
Rather than presenting H5P as a one-size-fits-all solution, the session offers an honest, practice-based reflection on what worked, what did not, and lessons learned for future CALL practice. Participants will gain practical insights into how to avoid common pitfalls and adopt H5P effectively to support more active and inclusive learning in Moodle-supported language classrooms.
The Virtual Frontier: Social Interaction, Anonymity, and the Limits of VR for CALL #4604
This presentation examines social interaction in a public virtual reality (VR) chatroom and its implications for Computer-Assisted Language Learning (CALL). The study was conducted in Gatsby’s Bar, a popular social space within Meta Horizons. Using a constructivist qualitative approach, data were collected through participant observation across two sessions and three semi-structured interviews with regular Horizons users, supplemented by recorded conversational excerpts and field notes.
The study explores what EFL learners may encounter in a typical public VR environment. Findings show that interaction is shaped by anonymity, identity performance, and platform affordances. Conversations ranged from supportive exchanges about family, health, and shared interests to sexualized, antagonistic, and racially charged discourse. Interaction extended beyond speech to embodied features such as proximity settings, shared seating, interactive props, and moderation systems.
While freedom of identity construction enables meaningful connection, it also permits harassment and ethically unstable interaction. For CALL, open public VR chatrooms are not appropriate for most learners. Educational use of VR requires designed, scaffolded spaces—for example, moderated private classrooms, task-based interaction zones, and instructor-controlled safety settings. Practical implications for educators will be discussed.
GenAI vs. Human Feedback on L2 PhD Students’ English Academic Writing #4605
Generative artificial intelligence (GenAI) is transforming the L2 writing landscape, particularly in feedback provision. While some studies report positive effects of GenAI feedback on formal aspects of undergraduate writing, few have examined its impact on the academic writing of postgraduates. Addressing this gap, the present study investigates how GenAI- and human-generated feedback affect L2 doctoral students’ English writing. By analyzing scores for two versions of academic paper abstracts (N = 71) and comparing the quantity and quality of feedback, the study found the following: First, both forms of feedback led to overall improvements in writing scores. Second, AI feedback tended to focus on sentence-level issues, whereas human feedback addressed more discourse-level concerns. This distinction was especially evident in the treatment of the second and third of the five major rhetorical moves in abstract writing: identifying the research gap and stating the research aim. Third, AI feedback was typically affirmative and instilled confidence in revisions, while human feedback often consisted of questions, hints, and suggestions that served as pedagogical guides, fostering critical thinking and learning throughout the writing process. The findings reaffirm the essential roles of both human involvement and modern technologies in enhancing the quality of English language education, underscoring their pedagogical significance.
Preparing Students of All Proficiency Levels for VE and COIL #4606
Virtual Exchange (VE), such as Collaborative Online International Learning (COIL) projects, can offer equitable, inclusive, and transformative learning opportunities. In recent years, technology for VE has become more accessible in both price and user interface, while professional development opportunities for implementing VE are widely available. Despite this, many students lack the confidence to engage with the VE initiatives on offer: they may choose not to participate in synchronous VE due to language anxiety, lack of digital skills, or concerns surrounding intercultural communication. In this talk, we share findings from our survey of 15 experienced international educators on the skills necessary for VE, along with post-course feedback from 18 of our own students on the issues they faced during synchronous VE. Responding to these findings, we created task-based materials that target essential VE skills, including listening comprehension for Global Englishes; conversation strategies for nuanced interactions both face-to-face and online; digital literacy skills; and explicit cultural considerations and reflections. In this session we share these insights and offer advice on how we prepare our students for successful engagement in VE programs. Participants will receive a complete lesson targeting some of the skills that we identified.