#4551

Presentation

Machine Learning in CALL

From Detection to Instruction: Developing EFL Pragmatic Competence Through Human-AI Comparative Analysis of Multimodal Sarcasm


This research-based presentation reports on a pedagogical intervention that uses human-AI performance comparisons to teach sarcasm detection to Japanese EFL learners. Initial findings revealed that native speakers achieved only 60% accuracy, while non-native speakers performed at chance levels (51%) on 200 memes, with AI models achieving comparable performance (54-57%). These insights informed a three-phase instructional framework implemented across 7-8 sessions with 34 participants using a pre/post quasi-experimental design. Phase 1 built awareness of semantic-pragmatic incongruity, Phase 2 employed computational pattern analysis to identify linguistic markers, and Phase 3 addressed the ambiguous cases where human-model disagreement was highest. Preliminary results suggest that systematic exposure to computationally identified patterns, combined with explicit metalinguistic instruction, improves pragmatic competence in digital contexts. The presentation demonstrates how NLP model analysis can identify specific error patterns to guide targeted instruction, while highlighting the continued importance of human cultural expertise. Participants will learn practical strategies for integrating computational insights into pragmatic instruction and receive access to our validated multimodal corpus. This work contributes to CALL by bridging computational linguistics with pedagogical practice, offering evidence-based approaches for teaching challenging pragmatic features in technology-mediated environments.
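To make the human-AI comparison concrete, the sketch below shows one minimal way to score human and model sarcasm judgments against gold labels and to surface the items where humans and the model disagree, i.e., the "ambiguous cases" targeted in Phase 3. All names and data here are illustrative assumptions, not the study's actual corpus or pipeline.

```python
# Hypothetical sketch of a human-AI comparative analysis step:
# score two sets of binary sarcasm judgments against gold labels
# and flag items where the human majority and the model disagree.
# Labels and item counts are toy values for illustration only.

def accuracy(preds, gold):
    """Fraction of predictions that match the gold labels."""
    return sum(p == g for p, g in zip(preds, gold)) / len(gold)

def disagreement_items(human, model):
    """Indices where human and model judgments differ --
    candidate items for targeted 'ambiguous case' instruction."""
    return [i for i, (h, m) in enumerate(zip(human, model)) if h != m]

# Toy labels: 1 = sarcastic, 0 = literal
gold  = [1, 0, 1, 1, 0, 1]
human = [1, 0, 0, 1, 1, 1]   # e.g. majority vote of human raters
model = [1, 1, 0, 1, 0, 0]   # e.g. NLP classifier output

print(f"human accuracy: {accuracy(human, gold):.2f}")
print(f"model accuracy: {accuracy(model, gold):.2f}")
print("disagreement items:", disagreement_items(human, model))
```

In the study's workflow, the disagreement list would then be reviewed qualitatively to select teaching materials, since these items tend to hinge on cultural or multimodal cues that neither group resolves reliably.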