Sessions / Extended reality (XR) in CALL

Student Engagement in Interactive Language Activities through VR #4616

Time Not Set

Recent studies on language learning using virtual reality (VR) have suggested that immersive VR environments using head-mounted displays (HMDs) provide realistic simulations for authentic language interactions (Dooly et al., 2023; Lui et al., 2023). However, few studies have examined whether VR leads to better interactive language learning than in-person learning. This study aims to explore how Japanese university students assess their interactive language tasks in an HMD avatar-enhanced immersive VR environment, as opposed to those in a conventional in-person classroom. Students in four undergraduate English courses were encouraged to actively engage in verbal communication tasks grounded in a notional-functional approach to improve their communication skills, while working on pronunciation, speaking fluency, and vocabulary in a real-time interactive setting. The findings from the questionnaire, which was based on the Intrinsic Motivation Inventory and aligned with the principles of self-determination theory (Deci & Ryan, 1985), suggest that employing HMD VR as an instructional intervention in the classroom is effective in improving students’ motivation and engagement and in easing feelings of discomfort when speaking. The poster presentation also discusses the potential of this method for effective differentiated instruction and addresses some major challenges students faced in the VR mode in this study.

Connecting VR Immersive Experiences with L2 Narrative Writing Tasks: Emotional Perspectives and Pedagogical Issues #4651

Time Not Set

The L2 writing process has been analyzed using cognitive process models of planning, translating, and reviewing (Flower & Hayes, 1981), or through sociocultural lenses as a mediated activity influenced by discursive genre and writer identity (Canagarajah, 2018; Lea & Jones, 2011; Matsuda, 2015). Although research has emphasized the importance of emotions and motivation for L2 learners, the influence of emotional resources on L2 writing remains underexplored. This study examines the effects of emotionally differentiated narrative tasks on linguistic complexity and emotional engagement in L2 writing. Forty-one intermediate French-as-a-foreign-language (FFL) learners at an Emirati university completed three narrative tasks over three weeks, using memory-based and immersive 2D/3D VR stimuli. Presence (ITC-SOPI) and affect (PANAS) questionnaires were administered. A total of 123 texts (averaging 200 words each) were analyzed with MAXQDA to measure linguistic complexity, sensory density, and emotionality. Results confirmed the role of emotions in the writing process and performance (Feng & Ng, 2023; Guan et al., 2024), as well as the value of VR-supported L2 writing pedagogy (Makransky & Petersen, 2021). Whereas VR stimuli and stronger emotional involvement encouraged a more personal writing style, high emotional load was associated with reduced coherence and weaker self-regulation. These findings redefine success in VR L2 writing: emotion boosts engagement and voice, yet can impair coherence and self-regulation, requiring calibration and scaffolding.

Preserving the Future of Tradition: AI-Mediated Kodan Storytelling for Intercultural English Education #4487

Time Not Set

At the National Institute of Technology, Hakodate College, a student-led VR Lab project integrated VR and AI tools into a three-day international VR Camp held in March 2026. Participants from multiple countries joined remotely to engage in English-based engineering and cultural learning activities.

This presentation focuses on Day 2 of the program, a collaboration with Ichinoseki College and professional kōdan storyteller Kinme Chikufutei. The project was guided by the conceptual framework KATARIUM, proposed by the collaborating teacher, which combines kataru (“to narrate”) with the idea of a museum or medium, framing storytelling as a living, reusable cultural resource rather than a fixed performance.

On-site recordings were conducted using Polycam and spatial recording tools within the ENGAGE VR platform to create 3D Gaussian splat models of key performance elements, including the shakudai (storytelling table). These assets were reconstructed in a culturally appropriate tatami-stage environment.

To support accessibility for international learners, AI-assisted English voice dubbing preserved vocal quality and narrative rhythm while intentionally retaining a non-native English accent. An interactive AI avatar enabled learners to ask questions related to key vocabulary, historical background, and social context. The presentation outlines a replicable, teacher-centered workflow for sustaining cultural heritage through English education.

Design and Preliminary Evaluation of a Document-Grounded Multimodal AI Teaching Assistant for CLIL in STEM Laboratories #4503

Time Not Set

While XR-based instructional systems have been widely explored in language and STEM education, their use in hands-on laboratory contexts is often limited due to physical constraints. This study addresses this limitation by developing a smartphone-based wearable camera integrated with a vision- and voice-enabled AI that allows learners to share their visual perspective with a verbal AI Teaching Assistant (TA), supporting CLIL in STEM laboratories.

Two complementary systems are presented within the same document-grounded, multimodal interaction framework. In both systems, real-time visual input enables equipment identification and situated interaction. In the electrical engineering context, the TA facilitates procedural English for laboratory actions and conceptual understanding aligned with laboratory materials uploaded to the AI. In the mechanical engineering context, the system emphasizes laboratory safety and procedural preparation, supporting discussion of equipment usage and safety practices grounded in uploaded instructional documents.

A preliminary evaluation was conducted, focusing on system reliability, response accuracy, and pedagogical suitability. Results suggest that the camera-mediated, multimodal design supports situated language use and that an encouraging interactional tone reduces learner hesitation when using English in STEM laboratory settings. A large-scale study involving 100 engineering students is planned for April.

EFL Learners’ Perceptions of Speaking Anxiety: In-Person, Online, and Virtual Reality #4567

Time Not Set

Virtual reality (VR) has attracted growing interest in computer-assisted language learning for its potential to provide immersive communication experiences. While prior research has often examined VR as a standalone tool, fewer studies have directly compared learners’ affective perceptions of VR with those of more established communication modalities. In particular, how foreign language anxiety (FLA) and interactional preferences differ across in-person, online, and VR-based communication remains underexplored. This study reports findings from a survey of 181 Japanese university EFL learners, mostly English majors, examining speaking-related FLA, attitudes toward communication modalities, and partner preferences (friend vs. stranger) across three contexts: in-person interaction, synchronous online communication (e.g., Zoom), and immersive VR. The focus is on learners’ preconceived perceptions rather than post-intervention experiences. Quantitative analyses are complemented by open-ended responses to better understand learners’ reasoning. Overall patterns indicate clear modality-based differences in perceived anxiety, acceptance, and preferred interactional conditions. While in-person communication is strongly favored, technology-mediated environments, particularly VR, appear to influence how learners evaluate social risk and partner choice. The presentation will outline these patterns, discuss implications for affective factors in technology-mediated communication, and consider how modality choice may shape learners’ willingness to communicate.

The Virtual Frontier: Social Interaction, Anonymity, and the Limits of VR for CALL #4604

Time Not Set

This presentation examines social interaction in a public virtual reality (VR) chatroom and its implications for Computer-Assisted Language Learning (CALL). The study was conducted in Gatsby’s Bar, a popular social space within Meta Horizons. Using a constructivist qualitative approach, data were collected through participant observation across two sessions and three semi-structured interviews with regular Horizons users, supplemented by recorded conversational excerpts and field notes.

The study explores what EFL learners may encounter in a typical public VR environment. Findings show that interaction is shaped by anonymity, identity performance, and platform affordances. Conversations ranged from supportive exchanges about family, health, and shared interests to sexualized, antagonistic, and racially charged discourse. Interaction extended beyond speech to embodied features such as proximity settings, shared seating, interactive props, and moderation systems.

While freedom of identity construction enables meaningful connection, it also permits harassment and ethically unstable interaction. For CALL, open public VR chatrooms are not appropriate for most learners. Educational use of VR requires designed, scaffolded spaces—for example, moderated private classrooms, task-based interaction zones, and instructor-controlled safety settings. Practical implications for educators will be discussed.