By Jackie Vickery
Artificial intelligence (AI) has positioned itself at the forefront of scholarly discourse, reshaping academic, ethical, and policy discussions across college campuses nationwide. At Cornell University in Ithaca, NY, staff and faculty are grappling with questions about academic integrity and AI use in coursework. Meanwhile, neighboring Ithaca College’s Park Center for Independent Media has adopted a different strategy by integrating critical AI literacy into its broader media curriculum. These parallel conversations reveal two distinct yet essential responses to the same issue. Cornell professors are addressing immediate pedagogical challenges by revising syllabi and exam structures, while scholars at Ithaca College warn that those same AI tools risk transforming classrooms into corporate data mines, threatening education’s very purpose as a protected space for intellectual risk-taking and critical inquiry.
IC Invites Critical Media Scholars to Discuss AI’s Role in Education
On Oct. 30, IC’s Park Center for Independent Media hosted a master class with Nolan Higdon and Allison Butler titled “Grappling with Privacy and AI Literacy: Making Sense of Surveillance Technologies and GenAI in Education.” Higdon is a founding member of the Critical Media Literacy Conference of the Americas, a Project Censored national judge, author and university lecturer at Merrill College and the Education Department at the University of California, Santa Cruz. Butler is a senior lecturer, associate chair and director of the Media Literacy Certificate Program in the Department of Communication at the University of Massachusetts Amherst, where she teaches courses on critical media literacy and representations of education in the media.
This is not the first time their work has appeared on IC’s campus. Their 2022 book, The Media and Me: A Guide to Critical Media Literacy for Young People, co-authored with Andy Lee Roth, Ben Boyington, Mickey Huff and others, has been integrated into a first-year seminar course titled “The Media and Me: An Introduction to Critical Media Literacy.”
Higdon and Butler opened the master class by challenging how most people view GenAI. Rather than treating AI as either a savior or a threat, they advocate for a critical understanding of these technologies, their corporate ownership and their implications for privacy, education and democracy. Higdon acknowledged AI’s potential while critiquing its current trajectory. “I think what’s called AI right now is a really impressive human achievement, and I think there’s a lot of good that can come from it, like any other tool,” he said. “If we use it in the right way, we can better people’s lives.” The problem, he explained, is that “we’ve allowed these tools to be monopolized by a handful of greedy corporations” that put their own profit motive over the good of the people and over democracy. Butler echoed that sentiment, explaining how society currently exists in a “murky middle,” caught between panic that all students are cheating and blind optimism that AI will solve every curricular concern.
AI is already integrated into everyday life through smartphone assistants, search algorithms and autocomplete, often without people realizing it. Butler cited research showing that AI bots incorrectly summarize articles 45 percent of the time. She explained how AI systems “hallucinate,” generating convincing but false information, from fabricated legal cases to nonexistent academic research. The core issue, Butler stressed, is that AI cannot create new knowledge; it only replicates and recombines existing information scraped from the web, often in ways that distort the original meaning or generate fictional content disguised as fact.
These flawed results worsen when AI systems reproduce the biases and prejudices of their creators and their training data. Higdon and Butler pointed to examples such as Elon Musk’s Grok AI, which has produced and perpetuated racist responses, and note-taking software that misinterprets certain accents.
They also highlighted AI’s environmental costs, noting that energy-intensive data centers are disproportionately located in poor and marginalized communities and communities of color, where residents pay higher electricity rates to subsidize the facilities and bear heightened environmental and climate-related burdens.
Higdon and Butler shifted focus to what they termed “surveillance education” — the transformation of academic spaces into data collection operations. Educational platforms collect vast amounts of student information, which companies sell to advertisers and marketers. They identified specific platforms like Turnitin, which captures student writing patterns and sells that data to advertisers. Recording software like Glean and Genio may violate consent laws and demonstrate racial bias in accent recognition. Learning management systems like Canvas enable extensive data harvesting. Even social media platforms students use daily, like Snapchat, claim ownership of users’ faces according to their terms of service.
Higdon said this surveillance changes what classrooms are for. Academic environments should be private spaces where students can ask questions and make mistakes without fear that every interaction will be recorded or potentially used against them later.
Higdon and Butler linked AI concerns to the larger information ecosystem. Fake news is not new, they noted, but smartphones, social media and AI have amplified its scale and speed. The critical media scholars reject both uncritical adoption and blanket rejection of GenAI. Instead, they call for examining the power structures behind these technologies. Understanding corporate ownership, profit incentives and who benefits from surveillance is essential for informed decision-making and effective policy advocacy that prioritizes public welfare over corporate interests.
Cornell Professors Discuss AI’s Role in the Classroom
Concurrently, on Cornell’s campus, professors are taking individualized approaches to AI policy, with responses ranging from wary skepticism to measured optimism.
On Oct. 17, Jessica Ratcliff from the Department of Science and Technology Studies and Adam Smith from the Department of Anthropology co-hosted the first roundtable discussion for “A-Why?” — a faculty group they founded to examine AI’s impact on humanities research and education. The event, titled “Using AI in Humanities Research,” featured five Cornell humanities professors discussing how AI has influenced research, education and their respective fields.
Some professors at Cornell are responding to AI by returning to traditional assessment methods. Ratcliff explained that A-Why? aims to tackle AI concerns collectively, hoping to develop “useful studies, guides, experiments [and] policy suggestions” while changing campus discourse about AI. Her classroom policy takes a protective stance toward traditional learning methods. She plans to restructure her courses with fewer papers and more in-class exams, even reintroducing memorization of historical facts as a safeguard against AI’s factual unreliability.
“Technological change only produces social progress through struggle, resistance and regulation, and we really need that in the present moment at the university level and beyond,” Ratcliff said.
Hadas Ritz from the Sibley School of Mechanical and Aerospace Engineering takes a different approach, emphasizing academic honesty over restriction. Her policy encourages students to use any resources for homework — including AI — if they cite their sources, whether that is ChatGPT, Chegg or other classmates. She relies on in-person exams to assess student comprehension of course material, treating homework as an opportunity to practice rather than a graded assessment. “If they’re not putting in the effort, they are only cheating their own understanding, and that is going to show on the exams,” Ritz said.
Jan Burzlaff from the Program of Jewish Studies offered a more integrated approach to GenAI in education. Rather than viewing AI as either a threat or a solution, he describes it as “a sparring partner and a smart but flawed collaborator.” He believes AI can deepen education if approached critically, making students more aware of their thinking processes as they “see what comes easily to a machine and what still requires human nuance.” His spring 2026 course on Holocaust survivor testimonies will make AI analysis central to the curriculum — students will use tools like ChatGPT, Gemini and Claude to analyze testimonies, then critique what the technology misses. “The aim there is to reach discernment — not prohibition,” Burzlaff said. He warns against “one-size-fits-all policy” approaches, noting that AI affects different fields in various ways.
“AI won’t replace the university, but it will test whether we still believe in what a university is for: slow thought, uncertainty and the shared work of meaning-making,” he said. “The challenge isn’t to ban the machine — it’s to stay more human than it is.”
Two Conversations, One Challenge
IC’s master class on surveillance technologies and AI in education and Cornell’s faculty panel show two sides of the same coin: how higher education should respond to technologies that are reshaping not only classrooms but the way students learn, think and produce knowledge. At IC, Higdon and Butler urge educators to question who controls AI, what data it collects, and how it alters the relationship between students, teachers and institutions. At Cornell, faculty members like Ratcliff, Ritz and Burzlaff confront these issues with policy and practice — deciding how to teach, grade and sustain integrity in an AI-saturated world when algorithms can generate competent-looking work in seconds.
Yet the gap between these two conversations matters. Cornell’s focus on syllabus redesign and exam strategies, while necessary, risks treating AI as merely a cheating problem rather than what Higdon and Butler deem a surveillance infrastructure that is transforming education’s purpose. Burzlaff’s approach offers a bridge between these two scholarly discussions. He addresses both the practical questions of how to teach with AI and the critical question of what AI misses.
This is the kind of discernment both conversations demand: not blind resistance or naive adoption, but the capacity to use these tools while recognizing their limits, biases and the interests they serve. The future of education will depend not on choosing between aversion and assimilation, but on cultivating media-literate judgment. Our task now is to ensure that these tools serve human purposes rather than replace them, and to protect academic spaces where questioning, uncertainty and creativity can still flourish.
Jackie Vickery is a senior at Ithaca College’s Roy H. Park School of Communications, pursuing a Bachelor of Arts in Journalism with minors in Photography and Religious Studies. She is also a student researcher for the Park Center for Independent Media and is expected to graduate in May 2026.