Poster Presentation: Extended Reality (XR) in CALL
Design and Preliminary Evaluation of a Document-Grounded Multimodal AI Teaching Assistant for CLIL in STEM Laboratories
While XR-based instructional systems have been widely explored in language and STEM education, their use in hands-on laboratory contexts is often limited by physical constraints. This study addresses that limitation by developing a smartphone-based wearable camera system integrated with vision- and voice-enabled AI, allowing learners to share their visual perspective with a spoken-dialogue AI Teaching Assistant (TA) and thereby supporting CLIL in STEM laboratories.
Two complementary systems are presented within the same document-grounded, multimodal interaction framework. In both, real-time visual input enables equipment identification and situated interaction. In the electrical engineering context, the TA facilitates procedural English for laboratory actions and conceptual understanding, aligned with laboratory materials uploaded to the AI. In the mechanical engineering context, the system emphasizes laboratory safety and procedural preparation, supporting discussion of equipment usage and safety practices grounded in uploaded instructional documents.
A preliminary evaluation was conducted, focusing on system reliability, response accuracy, and pedagogical suitability. Results suggest that the camera-mediated, multimodal design supports situated language use and that an encouraging interactional tone reduces learner hesitation when using English in STEM laboratory settings. A large-scale study involving 100 engineering students is planned for April.
-
Nobuaki Minematsu is a professor of electrical engineering and language education at the University of Tokyo. He has a wide interest in speech communication, covering the areas of speech science and speech engineering, and brings integrated knowledge of multimedia, ICT, AI, and CALL.