Sessions / Ethics and Policy in CALL practice
Beyond Bans and Blind Trust: Navigating Ethical Boundaries and AI Misuse in Japanese EFL Classrooms #4635
This presentation reports findings from a study examining how Japanese university EFL students use AI tools for academic tasks, how they interpret ethical boundaries, and how emotional and cultural factors shape their decision-making. Data from survey responses and open reflections reveal that students want guidance rather than prohibition and view AI as a supportive tool rather than a replacement for learning. The findings highlight both successes and failures in classroom integration, ranging from increased confidence and clarity to cases of academic dishonesty and AI misinterpreting student needs. The presentation connects these classroom realities to broader questions of responsible AI use in CALL and considers what it means to “prevail or fail” when integrating emerging technology into language education.
What do the large-scale data on AI tell us?: Exploring students’ perceptions of teachers’ use of AI tools in English language teaching and assessment #4652
Although AI is increasingly used by teachers in university language classes, how students perceive teachers’ use of AI in instruction and assessment remains unclear. This study examines university students’ perceptions of teachers’ use of AI in course instruction and assessment. A large-scale survey was conducted in mid-2024; participants in this mixed-gender study were 990 Japanese EFL undergraduates aged 18 to 23. Among the 55 broad-ranging survey questions, this presentation focuses on the results concerning students’ perceptions of teachers’ use of AI tools in classroom activities and assessments. More than 52% of respondents reported that GenAI was used in some way in their university English classes, particularly for writing (48%) and reading (39%), whereas 46% of respondents wanted GenAI to be used more in speaking instruction. Sixty-five percent of students preferred writing assessments conducted by teachers, or by teachers collaborating with AI, over assessments by AI alone. Additionally, approximately 63% reported that they would still need to learn English even if AI took over many English-language tasks. These findings indicate that teachers and students share responsibility for the use of AI, and they provide evidence to support its use in classrooms.
Professional Development as Sensemaking in CALL Teaching Practicum With Generative AI #4684
Teaching practicum is a critical site for professional development, where pre-service English teachers must negotiate professional judgment in response to pedagogical and ethical uncertainty increasingly shaped by generative AI in CALL-related instructional contexts. Adopting a sensemaking perspective—understood as the process through which individuals interpret and respond to uncertain situations—this study examines how final-year pre-service English teachers position generative AI as a pedagogical resource within their emerging professional practice during practicum. Drawing on scenario-based tasks situated in AI-mediated teaching situations and guided reflections collected during practicum, the study explores how participants reason through CALL-related AI dilemmas. The analysis suggests that pre-service teachers engage in ongoing professional sensemaking, positioning generative AI as a contested CALL resource shaped by tensions between supporting student learning, ensuring fairness, and navigating ambiguous institutional expectations. The study conceptualizes professional development as situated judgment grounded in critical professional reasoning and compassion, with implications for CALL-informed teacher education and practicum design.
Human Instruction in the Age of Generative AI: Navigating Academic Integrity and Guidance Gaps in Academic Writing #4513
Generative AI (GenAI) is increasingly used in academic writing (Eleftheriou et al., 2025). Recent studies have examined how GenAI can be incorporated to support students at multiple stages of the writing process and to provide feedback (e.g., Allen & Mizumoto). While these tools offer promising opportunities, they also raise concerns and challenges, particularly regarding academic integrity. Despite growing research in this area, the role of human instructors in academic writing education in the era of GenAI remains underexplored. This study examines what is unique about human instruction by investigating the challenges that have emerged with the introduction of GenAI in academic writing and the strategies tutors use to address them during one-on-one sessions. Using a qualitative design, the study employs semi-structured interviews with eight tutors at an academic writing center at a Japanese university. Preliminary findings indicate that ethical concerns and a lack of clear guidance are the primary challenges tutors encounter when students use GenAI in their writing. Follow-up interviews will be conducted to identify emerging challenges and changes in instructional strategies over time. By foregrounding instructors’ experiences, this study aims to provide insights into future directions for human instruction in academic writing and more effective support for students.
Comparative Study on University Level and Student Perception of AI Usage #4521
The advancement of artificial intelligence (AI) has complicated students’ learning, especially in CALL. With students’ use of AI often preceding proper guidance, it is now crucial for teachers to understand students’ perceptions of AI use so that they can set AI policies accordingly. This presentation reports how students from two universities of different academic levels (hensachi) perceive the use of AI in English language classes and assignments, based on research conducted earlier this year. The results show that students were similarly willing to use AI in their coursework and assignments, regardless of university level. Notably, most participants held positive perceptions of using AI for certain purposes, such as vocabulary learning, creating practice questions, and speaking practice, while they viewed using AI to generate sentences for writing assignments more negatively. Drawing on these results, this presentation also offers suggestions on how teachers can present and negotiate their AI policies with students in a convincing and satisfying manner, benefiting both teachers and students and helping teachers prevail in this uncertain era of AI.
Turn It Off: The Hidden Costs of AI Policing #4561
This paper argues that the use of AI detectors carries significant hidden costs. These harms fall into four broad categories: harms to personhood, harms to knowledge, harms to institutional justice, and harms to relationships. Focusing on AI detectors in academic integrity contexts, I examine how these systems risk false accusation, shift the burden of proof onto students, and intensify the unequal patterns of surveillance already borne by multilingual writers, disabled students, and students of color. Drawing on recent research on AI detector inaccuracy and bias, as well as institutional decisions such as Vanderbilt University’s disabling of AI detection, I argue that these tools are not merely imperfect but ethically dangerous. Methodologically, the paper takes a philosophical approach at the intersection of the philosophy of AI and the philosophy of education, using a utilitarian framework to identify the predictable harms produced when AI detectors’ probabilistic outputs are treated as evidence and educator suspicion becomes grounds for accusation. The paper concludes that we should not build unreliable and discriminatory policing mechanisms into educational life. Humane responses to generative AI should move away from policing and toward assessment design and integrity practices grounded in transparency, accessibility, trust, and pedagogical care.
Perceptions, Acceptance and Use of AI Tools in EFL Learning: Evidence from a Regional University #4589
The increasing availability of artificial intelligence (AI) tools has begun to reshape English as a Foreign Language (EFL) learning in higher education. However, students’ engagement with these tools extends beyond practical use to include evaluative considerations regarding their academic appropriateness. This study adopts a mixed-methods design to investigate EFL students’ perceptions, acceptance, and use of AI tools at a regional university. Quantitative data were collected through a questionnaire administered to approximately 300 undergraduate EFL students, examining multiple dimensions of perceptions, levels of acceptance, and self-reported AI tool use in EFL learning. Descriptive and correlational analyses were conducted to explore relationships among these constructs. Qualitative data from open-ended responses and follow-up interviews further illuminate students’ reasoning when reflecting on AI-supported practices. The results indicate generally mixed perceptions of AI tools, with acceptance and use varying among students. Significant associations were found between perceptions, acceptance, and self-reported use. Qualitative findings reveal that students’ views are shaped by considerations of academic integrity, perceived educational value, and contextual norms. The study highlights the mediating role of acceptance and contributes to discussions on what constitutes acceptable, legitimate, and responsible AI use in EFL education within regional university contexts.