Douglas Emmett
Sessions
Workshop: Tony Starkin’ It in the Shower: Uncovering the Naked Power of AI Voice Assistants
This hands-on workshop introduces an AI-assisted app development method centered on real-time voice interaction with large language models. Participants will observe and then practice a structured voice-first workflow for early-stage development: articulating constraints and success criteria aloud, iteratively refining requirements, and having the model generate and explain small, usable code components for classroom-ready CALL tools. Voice is used intentionally for ideation and specification, while code generation and subsequent implementation steps (testing, debugging, and deployment) are demonstrated using standard on-screen workflows. Successful examples, such as a randomized speaking-partner scheduler and an automated PDF region-extraction tool, draw on educator-built systems in Japanese secondary and tertiary settings, with attention to the constraints faced by non-specialist programmers and common institutional limitations. Attendees will prototype a simple classroom tool or research utility with the explicit goal of building something they can “use on Monday.” Aligned with the 2026 theme “Prevail or Fail?”, the workshop emphasizes practical judgment: knowing when voice interaction accelerates design and when it introduces friction. Participants leave with a working prototype, a workflow checklist, and reusable prompt templates. A laptop with Wi-Fi is required; a smartphone or personal hotspot may be helpful for connectivity redundancy, and a phone/headset is recommended.
Poster Presentation: Arizona AI: A Longitudinal Study of AI-based AWCF Outcomes in Japanese EFL
This poster reports final results from a 2025 Panasonic Education Foundation–funded longitudinal study examining the sustained impact of Arizona AI, a researcher-developed, AI-mediated automated written corrective feedback (AWCF) tool, on English paragraph writing among 120 second-grade Japanese high school students. The study builds on interim findings previously presented at the 2025 JALT International Conference in Tokyo, extending those analyses to a full instructional cycle. Using repeated baseline–midline–endline measures, the study analyzed changes in human-rated writing performance across multiple rubric categories, including structure, grammar, transitions, and sentence complexity. Results show steady improvement over time, with the strongest gains occurring when AI feedback was embedded within a structured classroom workflow rather than used as a stand-alone intervention. The findings speak directly to the conference theme “Prevail or Fail?” by identifying a central risk in CALL adoption: AI feedback systems that succeed technically but fail pedagogically when learner scaffolding and feedback literacy are insufficient. Implications are discussed for sustainable AI integration in EFL writing instruction, rubric-aligned prompt design, and teacher mediation strategies that enable AI feedback to prevail beyond novelty effects.