Presentation: Ethics and Policy in CALL Practice
Turn It Off: The Hidden Costs of AI Policing
This paper argues that the use of AI detectors carries significant hidden costs. These harms fall into four broad categories: harms to personhood, harms to knowledge, harms to institutional justice, and harms to relationships. Focusing on AI detectors in academic integrity contexts, I examine how these systems risk false accusation, shift the burden of proof onto students, and intensify the unequal patterns of surveillance already borne by multilingual writers, disabled students, and students of color. Drawing on recent research on AI detector inaccuracy and bias, as well as institutional decisions such as Vanderbilt University's disabling of AI detection, I argue that these tools are not merely imperfect but ethically dangerous. Methodologically, the paper takes a philosophical approach at the intersection of the philosophy of AI and the philosophy of education, using a utilitarian framework to identify the predictable harms produced when AI detectors' probabilistic outputs are treated as evidence and educator suspicion becomes grounds for accusation. The paper concludes that we should not build unreliable and discriminatory policing mechanisms into educational life. Humane responses to generative AI should move away from policing and toward assessment design and integrity practices grounded in transparency, accessibility, trust, and pedagogical care.