Sessions / Second language acquisition (SLA) theory and CALL

CBLP 2.0: Moving from Open-Ended Prompts to "Hyper-Local" Corpora in Language Learning #4614

Time Not Set

For decades, Corpus-Based Language Pedagogy (CBLP) promised to transform learners into researchers discovering language patterns inductively. However, these projects often "fail" to gain widespread classroom traction due to unintuitive interfaces and high technical literacy requirements. This presentation explores whether Generative AI allows this methodology to finally "prevail" via "CBLP 2.0." GenAI democratizes linguistic data, allowing learners to analyze collocations without the steep learning curve of traditional concordancers.

Yet, unbounded Large Language Models introduce new risks: linguistic "hallucinations" and inappropriate registers from generalized training data. To mitigate this, we propose shifting from generic "Prompt Engineering" to building "Hyper-Local Corpora" using Retrieval-Augmented Generation (RAG). Using accessible tools like Google’s NotebookLM, educators and students can upload bespoke, genre-specific texts to create a "walled garden" corpus. This session demonstrates how grounding AI in curated datasets provides GenAI's instant feedback while preserving the precise, evidence-based inquiry of traditional corpus linguistics. Ultimately, sustainable CALL success depends on bounding AI to guide students from passive consumption to active, ethical research.
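
For illustration, the "walled garden" idea can be sketched as a minimal retrieval step over a teacher-curated corpus, with the retrieved passages supplied to the model as its only grounding context. The sample files, scoring function, and prompt wording below are assumptions for demonstration only; the session itself uses NotebookLM rather than custom code.

```python
# A minimal sketch of the "hyper-local corpus" idea: retrieve passages only from
# teacher-curated texts and pass them to an LLM as grounding context.
# Everything here (the sample corpus, the retrieve() scoring, the prompt wording)
# is illustrative; the presenters use NotebookLM rather than custom code.
from collections import Counter
import math

corpus = {
    "lab_report_intro.txt": "The aim of this experiment was to determine ...",
    "discussion_section.txt": "These results suggest that temperature plays ...",
    "methods_section.txt": "Samples were collected at three sites and analysed ...",
}

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Return the k curated passages most similar to a learner query."""
    q = bag_of_words(query)
    ranked = sorted(corpus.items(), key=lambda kv: cosine(q, bag_of_words(kv[1])), reverse=True)
    return ranked[:k]

def grounded_prompt(query):
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the passages below from our class corpus. "
        "If the answer is not in them, say so.\n\n"
        f"{passages}\n\nLearner question: {query}"
    )

print(grounded_prompt("How do I phrase the aim of an experiment?"))
```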

Beyond the Summary Trap: Reframing Literacy and Competency in AI-Mediated Language Learning #4620

Time Not Set

In the era of large language models (LLMs), Computer-Assisted Language Learning (CALL) has shifted from tool use to complex human-AI partnership. However, anecdotal evidence suggests that the prevalence of fragmented AI outputs (the "summary trap") poses a significant threat to critical literacy and to deep cognitive and emotional engagement. This talk outlines a pilot study that applies a dual-framework approach to AI integration in higher education, focusing on university contexts in the Philippines and Japan. By distinguishing between AI Literacy (metaliteracy and critical discernment of algorithmic bias) and AI Competency (the strategic orchestration of LLMs within a Human-in-the-Loop workflow), this research explores how learners can maintain agency. Integrating perspectives from Human-Computer Interaction (HCI) and Cultural Anthropology, the talk examines the local concepts of Ginhawa (Philippines) and Ikigai (Japan) as ethical anchors for digital mindfulness. It highlights the necessity of "whole-text" pedagogy to combat cognitive narrowing and to ensure that language learners develop the analytical thinking and resilience required in a generative AI labor market.

The Revised Bloom’s Taxonomy Framework in EFL: Pedagogical Applications for Students’ Critical Thinking Development #4624

Time Not Set

The rise of GenAI-like writing among EFL students has raised concerns about their critical thinking abilities. The revised Taxonomy of Educational Objectives (Krathwohl, 2002) offers a recognized framework for cultivating those cognitive abilities. While existing papers focus on the use of GenAI tools and their connection with Bloom’s taxonomy, a critical gap remains in understanding how to enhance EFL students’ critical thinking, such as evaluation and creation, in the age of GenAI. Grounded in a socio-cognitive framework, this study examines how EFL students’ higher-order thinking develops through writing tasks when spontaneous student-instructor conversations, in the form of Interactive Oral Assessments (IOAs) guided by the revised taxonomy, are used. A focus group of ten second-year English majors in an EFL Writing course participated in this qualitative project. Data from entry and exit meetings, audio recordings, semi-structured interviews, writing samples, and AI chat histories were analyzed. Findings suggest that the IOA intervention positively influences the students’ ability to analyze and evaluate AI-generated content as critical steps in scaffolding their writing assignments. Additionally, emphasizing higher-order thinking processes increased original argumentation in student work. The study indicates that IOA is a potential assessment tool for raising EFL students’ awareness of how to use AI critically.

GenAI and Second Language Writing: A Corpus-Based Multidimensional Study of Engineering Undergraduates #4640

Time Not Set

The emergence of Generative AI (GenAI) has significantly transformed the learning and writing practices of students in higher education. With the support of advanced large language models (LLMs) like GPT-4, students can now complete course assignments with greater quality and efficiency. Previous studies have extensively examined the situational and linguistic differences between human-written and AI-generated texts across various registers, highlighting the fundamental linguistic distinctions between the two (Goulart et al., 2024; Barabara et al., 2024). This study analyzes a corpus of final-year research reports written by undergraduate engineering students from an L2 context, aiming to identify dimensional variations in texts with differing levels of AI intervention (measured by AI scores) following the release of ChatGPT. Employing both qualitative and quantitative methods, we applied Biber’s (1998) multidimensional framework to examine lexico-grammatical features and distinguish texts with varying degrees of AI mediation from original student-authored reports. The findings reveal that reports with high and low levels of AI intervention differ in 19 linguistic features across three textual dimensions. Compared to Biber’s established genre profiles, the low-AI intervention group aligned more closely with academic writing, whereas the high-AI intervention group exhibited features resembling non-academic genres.
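
As background, dimension scores in the Biber tradition are typically computed by standardizing per-text feature rates across the corpus and summing the z-scores of the features that load positively or negatively on each dimension. The toy sketch below illustrates that arithmetic only; the features, loadings, and values are invented and are not those of this study.

```python
# Toy illustration of multidimensional (MD) dimension scoring in the Biber tradition:
# take per-1,000-word feature rates, z-score them across the corpus, and sum the
# z-scores of features by the sign of their loading on a dimension.
# Feature choices, loadings, and values below are invented for illustration only.
import statistics

# per-1,000-word feature rates for three hypothetical reports
texts = {
    "report_low_ai":  {"private_verbs": 12.0, "nominalizations": 28.0, "first_person": 3.0},
    "report_mid_ai":  {"private_verbs": 18.0, "nominalizations": 22.0, "first_person": 6.0},
    "report_high_ai": {"private_verbs": 25.0, "nominalizations": 15.0, "first_person": 11.0},
}

# a made-up "Dimension 1": involved (+) vs informational (-) production
dimension1 = {"private_verbs": +1, "first_person": +1, "nominalizations": -1}

def zscores(feature):
    values = [t[feature] for t in texts.values()]
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return {name: (t[feature] - mean) / sd for name, t in texts.items()}

def dimension_score(name):
    return sum(sign * zscores(feature)[name] for feature, sign in dimension1.items())

for name in texts:
    print(f"{name}: Dimension 1 = {dimension_score(name):+.2f}")
```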

A Session-Level Framework for Analyzing Learner Engagement in Student–AI Interaction #4641

Time Not Set

Research on AI-based conversational tools has consistently reported positive learner perceptions. However, this alone provides limited insight into whether AI use meaningfully supports language learning. Although some studies have compared AI-assisted and non-AI conditions, this research often offers little explanation of how learners engage with AI during interaction.

This study moves beyond perception- and outcome-focused evaluation by examining what learners do during AI-mediated interaction itself. The study introduces L-CARES (Learner-Centric Analysis of Response and Engagement Sequences), a session-level analytical framework designed to capture observable learner engagement across complete student–AI dialogue sessions. Using transcript data from 18 first-year Japanese English majors who completed weekly chatbot role-play tasks across two academic terms (22 sessions total), L-CARES examines patterns of contingency, agency, attentional repair, elaboration, discourse management, and self-monitoring.

Preliminary application of the framework demonstrates how session-level engagement patterns can be identified and compared across AI-mediated interactions, helping explain variability in learning trajectories despite consistently positive learner perceptions. Rather than evaluating effectiveness in terms of access or usage frequency, the framework foregrounds how learners interact with AI within task constraints. The presentation concludes by discussing implications for CALL research and classroom practice.

Enhancing Chinese EFL Learners’ Connected Speech Through AI-integrated Training #4677

Time Not Set

Recent years have seen increasing integration of AI tools into pronunciation training, with demonstrated effectiveness. However, their application to connected speech processes (CSPs) remains underexplored. English CSPs, which involve various types of sound adjustments, are widely used by native speakers in daily communication but pose substantial challenges for Chinese EFL learners in both perception and production. This study aims to develop and evaluate an AI-enhanced CSP training package comprising three components: explicit instruction, perception practice, and production practice. The package integrated materials generated by Murf (a text-to-speech tool) and feedback from Doubao (a generative AI chatbot). Its effectiveness was evaluated by comparing 18 intermediate-level Chinese university students with Northern Mandarin as L1 on sentence dictation and sentence reading-aloud tasks before and after eight online training sessions. Results indicated that participants’ perception and production of CSPs improved significantly (p < .001). However, the training effects exhibited an asymmetrical pattern across the six target CSP types (i.e., consonant-vowel linking, elision, vowel-vowel linking, assimilation, vowel reduction, and multiple) and between perception and production tasks. These findings support incorporating CSP instruction into regular English classrooms and highlight the potential benefits of strategic AI tool implementation in CSP-focused pronunciation training.

The Hidden Social Work of Chatbots #4515

Time Not Set

As AI chatbots become more common in language classrooms, their impact is often discussed in terms of accuracy or efficiency. Less attention has been paid to the social work these systems perform during interaction. This presentation explores how classroom chatbots, when designed as peer-like conversational partners, appear to shape student risk-taking, voice, and participation in ways that differ from typical peer discussion.

Drawing on a year-long classroom implementation with Japanese university learners, I present observations from persona-based AI chatbots used to support English communication. Examples from chat logs, student reflections, and classroom practice suggest that students often experience these interactions as less face-threatening and more generative than peer interaction, particularly during early idea exploration. These patterns align with long-standing SLA concerns regarding affective barriers, classroom silence, and reluctance to take communicative risks (Harumi, 2011; Curry & Peeters, 2025).

To interpret these patterns, I introduce AI Pragmatic Mediation (APM), a framework for examining how AI systems influence user stance and communicative choices, and Dyadence, a two-phase human-AI co-thinking process involving exploratory dialogue followed by synthesis. Rather than treating chatbots as neutral tools, these frameworks offer lenses for understanding how interactional design may shape learner engagement.

Roles of Task Complexity and Language Proficiency in AI Chatbot-Mediated English Speaking Task #4535

Time Not Set

Previous research on technology-mediated task-based language teaching has primarily examined how technological tools and task design influence English speaking performance. However, limited attention has been given to how varying levels of task complexity interact with learners’ language proficiency. Accordingly, this study investigates the effects of task complexity and language proficiency on English speaking performance, speaking anxiety, and willingness to communicate among Taiwanese undergraduate students in AI chatbot-mediated speaking tasks. A total of 160 undergraduates participated. Eighty students with low English proficiency were randomly assigned to either a low-proficiency simple-task group or a low-proficiency complex-task group (40 students each). Another 80 students with high English proficiency were randomly assigned to a high-proficiency simple-task group or a high-proficiency complex-task group (40 students each). Data were collected through pre- and post-speaking tests, pre- and post-surveys, and semi-structured interviews. The expected results will demonstrate that different levels of AI chatbot-mediated speaking task complexity differentially affect learners’ speaking performance across proficiency levels, while also reducing speaking anxiety and enhancing willingness to communicate. Interview results will further indicate generally positive learner perceptions toward integrating AI chatbots into speaking tasks. Overall, this study offers pedagogical insights for the design of AI-mediated task-based speaking instruction.

L2 learners’ strategic engagement with machine translation in narrative writing #4537

Time Not Set

This study investigates how four female Mandarin Chinese-speaking L2 learners at a U.S. university engage with machine translation (MT) in English writing and develop strategies to manage its affordances and limitations. Participants completed both MT-assisted and non-MT-assisted writing tasks. Guided by Activity Theory, the study qualitatively examines MT as a mediating artifact that shapes writing processes through technological affordances, academic norms, and learner agency. The findings show that while most participants relied on MT for grammatical and lexical support, one learner critically evaluated its limitations and adopted alternative strategies, including peer feedback and writing center consultations. These results highlight the diverse and strategic ways L2 learners engage with MT and suggest that tensions in MT use can promote adaptive learning. The study offers pedagogical implications for integrating MT to foster learner autonomy and metalinguistic awareness.

Developing an N-gram–Based Spoken Lecture Corpus Tool to Support Non-Native EMI Teachers #4547

Time Not Set

Many non-native English-speaking instructors face linguistic challenges when delivering English-medium instruction (EMI), particularly in using discipline-appropriate spoken academic phraseology. This paper presents the development of an n-gram–based spoken lecture corpus tool designed to support EMI teachers through large-scale lecture data.

The corpus was compiled from approximately 1,100 open-source academic lecture transcripts across multiple universities and disciplines, resulting in about eight million words. The tool enables users to search four- to six-word n-grams extracted from authentic lectures. Given a target word or phrase, the system generates frequent n-gram patterns and provides multiple contextualized examples from real lecture transcripts, allowing users to observe how academic language is used in spoken teaching contexts.
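
As a rough illustration of the lookup the tool performs, the sketch below extracts 4- to 6-word n-grams from a handful of placeholder transcript lines and returns the most frequent patterns containing a target word, each with one example context. The data and ranking details are assumptions, not the authors' implementation.

```python
# Sketch of the core n-gram lookup described above: extract 4- to 6-word n-grams
# from lecture transcripts, then return the most frequent patterns that contain a
# target word, with an example sentence of context. The transcripts and the exact
# ranking logic are placeholders, not the authors' implementation.
from collections import Counter
import re

transcripts = [
    "so what we are going to do today is look at the second law of thermodynamics",
    "what I want you to do is think about how this applies to the previous example",
    "what we are going to do now is work through a short proof on the board",
]

def ngrams(tokens, n_min=4, n_max=6):
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

counts = Counter()
examples = {}
for line in transcripts:
    tokens = re.findall(r"[a-z']+", line.lower())
    for gram in ngrams(tokens):
        counts[gram] += 1
        examples.setdefault(gram, line)   # keep one sentence of context per pattern

def search(target, top=5):
    """Frequent 4- to 6-word patterns containing the target word, with one example each."""
    hits = [(g, c) for g, c in counts.most_common() if f" {target} " in f" {g} "]
    return [(g, c, examples[g]) for g, c in hits[:top]]

for gram, freq, context in search("going"):
    print(f"{freq}x  {gram}\n     e.g. {context}")
```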

The tool was introduced to a group of university instructors teaching EMI courses. Informal feedback indicates that the system was perceived as intuitive and useful for lecture preparation, phrase selection, and increasing confidence in English delivery. Participants particularly valued access to spoken academic patterns that are rarely addressed in conventional EMI training materials.

Although large-scale evaluation has not yet been conducted, this study demonstrates the potential of repurposing open lecture transcripts into practical corpus-based support tools and highlights the pedagogical value of n-gram exploration for EMI teacher development.

Comparing Collaborative and Individual Writing in an EFL Academic Writing Course: A Corpus-Based Analysis #4577

Time Not Set

This corpus study uses a quasi-experimental within-subjects design to compare student writing performance across collaborative and individual writing conditions using measures from the CAF (Complexity, Accuracy, Fluency) framework. Previous research suggests that collaboratively written texts are generally shorter and more accurate than texts written individually. However, it remains unclear which specific linguistic features are affected by collaborative writing (CW), possibly due to substantial variation in how written output has been measured across studies. The present study constructs a learner corpus to compare collaborative and individual writing using measures from the CAF framework. In addition, specific linguistic features, such as the accuracy of clausal subordination (e.g., conjunction use), are analyzed. The data are drawn from thirty-eight first-year Engineering students from two sections of an English academic writing course at a private Japanese university during the fall semester. Over the semester, students completed eleven in-class paragraphs collaboratively and eleven paragraphs individually for homework on different topics, resulting in a learner corpus of writing produced under both collaborative and individual conditions. The poster will also discuss methodological decisions involved in constructing the learner corpus, including issues related to digitization and annotation. Preliminary analyses of the data will be presented.
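
For readers less familiar with CAF operationalizations, the sketch below computes simple illustrative proxies: token count for fluency, and mean sentence length plus a crude subordination ratio for complexity. Accuracy normally requires manual error annotation and is omitted. These measures are placeholders and may differ from those used in the poster.

```python
# Minimal sketch of the kind of CAF proxies a learner-corpus comparison might use:
# fluency as token count, complexity as mean sentence length and a crude subordination
# ratio (subordinating conjunctions per sentence). Accuracy usually needs manual error
# annotation, so it is omitted. These operationalizations are illustrative only.
import re

SUBORDINATORS = {"because", "although", "while", "when", "if", "since", "whereas"}

def caf_proxies(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    sub_count = sum(1 for t in tokens if t in SUBORDINATORS)
    return {
        "fluency_tokens": len(tokens),
        "mean_sentence_length": len(tokens) / len(sentences),
        "subordination_per_sentence": sub_count / len(sentences),
    }

collaborative = "We studied the data because it was new. Although short, the report was clear."
individual = "I wrote a long report. It had many parts. Some parts were not clear."

print("collaborative:", caf_proxies(collaborative))
print("individual:  ", caf_proxies(individual))
```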

Interactive Oral Assessment: A Catalyst for Enhancing Academic Integrity and Critical Thinking in AI-Assisted Learning #4593

Time Not Set

This qualitative study explores Interactive Oral Assessment (IOA), a form of real-time oral interaction, as a means of helping students critically respond to AI-generated content and synthesize and reconstruct their written work beyond AI suggestions. The IOA questions, designed using the six-level scale of Bloom’s taxonomy, are intended to enhance academic integrity among EFL learners in their writing classes. Ten second-year English majors at a university in Vietnam participated in this study. Data from student reflections, semi-structured interviews, students’ writing samples, and AI usage histories were analyzed. Findings suggest that IOA positively influences students’ sense of responsibility regarding their use of AI-powered tools in their writing processes. Rather than relying solely on AI to generate answers related to vocabulary or sentence structures (Level 1 of Bloom’s taxonomy), participants began to pose questions that reveal their ability to analyze or evaluate the ideas provided (Levels 3 or 4). Additionally, the study confirms the positive impact of IOA on the originality of student work. The importance of this study lies in its suggestion to develop IOA as a robust assessment framework that leverages AI-powered writing tools while upholding academic integrity, emphasizing students’ responsibility and ownership in the EFL writing classroom.