Volume 8, Issue 1

Building an Ethical Framework for AI Engagement: A Precursory Overview

In this paper, I explore the intersection of AI literacy, privacy, and digital access in order to voice concerns about their societal implications. This critique contributes to the broad study of literacy and technology by advocating for the skills needed to understand AI tools more fully. Drawing on Ted Chiang, Maha Bali, Zeynep Tufekci, Kate Crawford, Ruha Benjamin, Shoshana Zuboff, and Timnit Gebru, I express the need for deeper engagement with AI, focusing on human and machine interactions, privacy, and access to information. This article closes with a broad framework that institutions can consult when weighing wide-scale AI integration and adoption.

As artificial intelligence continues to permeate everyday life, from Siri interactions to algorithm-driven social media platforms, a new kind of literacy is emerging in our society: critical AI literacy. Critical AI literacy is the ability to understand, question, and engage with AI technologies and their societal impacts. It has emerged in response to AI’s growing influence and is being actively promoted by scholars and activists. Professor Maha Bali defines the critically AI literate person as one who is capable of “understanding who designs it, for what purpose, and what assumptions and values are embedded in it” (Bali). Traditional forms of literacy, including reading, writing, and digital literacy, have long shaped how we understand the world. Bali extends this tradition when she insists that “AI is not just a technical phenomenon but a social, cultural, and political one” (Bali). To be critically literate about AI technologies, she argues, individuals must understand not only the technical workings of these systems but also their ethical, political, and social implications. Accordingly, the troubles that accompany new and emerging AI technologies such as large language models (LLMs), facial recognition systems, and data mining algorithms require that we develop new literacies to guide our approaches to AI. These approaches must be mindful of how AI affects privacy, equity, and existing digital infrastructure.

The rapid advancement of artificial intelligence (AI) technologies has altered how we engage with the digital world, challenging traditional norms of literacy and privacy. This article investigates how AI literacy shapes society, especially in how individuals interact with digital platforms and navigate digital privacy concerns. Through readings of recent articles, including Ted Chiang’s “ChatGPT is a Blurry JPEG of the Web,” Maha Bali’s “What I Mean When I Say Critical AI Literacy,” and Zeynep Tufekci’s “Think You’re Discreet Online? Think Again,” this paper traces growing worries about AI technologies, including the blurring boundaries between humans and machines and the ethical concerns of digital surveillance. I argue that by thoughtfully engaging with these emerging AI resources, users may re-examine issues surrounding AI, privacy, and access, allowing these technologies to empower individuals rather than exploit them. The article closes by calling for new strategies for creating an inclusive curriculum that addresses these concerns within educational settings specifically.

Critical Frameworks for AI Literacy

To better understand the main concerns with AI and how to approach them critically, consider Ted Chiang’s “ChatGPT is a Blurry JPEG of the Web,” which offers a powerful metaphor for AI’s capabilities and limitations: the blurry JPEG. JPEG compression is lossy: it discards some of an image’s data to shrink the file, and the discarded detail can never be recovered. Chiang’s comparison suggests that while AI outputs may seem coherent and convincing, they often lack true understanding or depth. The metaphor emphasizes a key concern with generative AI: its potential to produce persuasive yet misleading information. It also helps show why critical AI literacy is essential for understanding how AI works and, accordingly, how its outputs may distort truth, influence opinions, or shape narratives without accountability.
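Chiang’s metaphor can be made concrete in a few lines of code. The sketch below is offered only as an illustration, assuming the Pillow imaging library and a hypothetical input file named photo.png: saving the same image at ever-lower JPEG quality settings yields smaller files, but the detail the compressor throws away is gone for good, much as a language model’s “compression” of the web discards the original text.

```python
# Minimal sketch of lossy JPEG compression (assumes Pillow:
# pip install Pillow). "photo.png" is a hypothetical input file.
from io import BytesIO
from PIL import Image

original = Image.open("photo.png").convert("RGB")

for quality in (95, 50, 5):
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    print(f"quality={quality}: {buffer.tell() / 1024:.1f} KB")
    # Lower quality yields a smaller file, but the detail discarded
    # here cannot be restored by re-opening: the loss is permanent.
```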

Another area of concern with these new forms of AI is surveillance. Zeynep Tufekci’s article “Think You’re Discreet Online? Think Again” explores how AI-driven technologies compromise user privacy by collecting, analyzing, and utilizing vast amounts of data. Tufekci writes, “companies don’t need to know your secrets to guess them with remarkable accuracy.” She emphasizes that even mundane online behaviors, such as browsing history or social media activity, can be exploited to manipulate user behavior and political beliefs. This demonstrates how digital literacy must evolve to encompass data literacy, including an understanding of how personal data is mined and used in ways that we are often unaware of.
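Tufekci’s point about inference can be demonstrated with a deliberately toy example. The sketch below uses entirely synthetic data and assumes scikit-learn; no real platform works exactly this way. It shows that even a simple classifier trained on mundane behavioral signals can guess a trait the user never disclosed far more often than a coin flip.

```python
# Toy illustration of "data inference" on synthetic data (assumes
# scikit-learn and NumPy). No real company's system is depicted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Three innocuous behavioral signals, none "secret" on its own
# (think: late-night activity, scroll speed, likes per day).
X = rng.normal(size=(n, 3))
# A hypothetical undisclosed trait that happens to correlate with them.
y = (X @ np.array([1.5, -0.8, 0.6]) + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy guessing the undisclosed trait: {model.score(X_test, y_test):.0%}")
```

The model never sees the “secret,” only its behavioral shadow, which is Tufekci’s warning in miniature.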

Shoshana Zuboff’s The Age of Surveillance Capitalism furthers this critique by examining how corporate interests extract behavioral data for profit, violating individual autonomy. Zuboff argues that companies like Google and Facebook have created a new economic logic, surveillance capitalism, that turns human experience into data to be harvested, analyzed, and monetized. She states, “surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.” In other words, everyday actions such as clicking on a link, searching for a product, liking a post, or lingering on a page are tracked, recorded, and analyzed without consent. This system invades privacy and manipulates user behavior, shaping desires, decisions, and actions in ways that benefit corporate agendas. By showing how personal experiences are commodified in invisible ways, Zuboff argues for greater transparency and oversight of how AI technologies harvest and exploit user data. Her critique reinforces the need for critical AI literacy as a way to recognize and resist these subtle, pervasive forms of control. By framing surveillance as a form of economic control, Zuboff exposes how AI technologies are not simply tools for convenience but mechanisms for commodification and manipulation.

To build upon these discussions, Kate Crawford’s Atlas of AI examines AI through a structural lens, connecting AI development to environmental degradation, exploitative labor practices, and global inequalities. Crawford writes, “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications” (8). This challenges the common perception of AI as a purely digital or immaterial tool, instead framing it as a technology built from physical infrastructures and socio-economic hierarchies. By exposing the hidden costs behind AI, such as the mining of rare earth minerals, the use of precarious labor to label data, and the concentration of AI power in wealthy tech corporations, Crawford pushes readers to reconsider who benefits from AI and who is harmed. She urges us to view AI as embedded within larger systems of power, labor, and capital: the lithium mining in Bolivia that feeds the hardware supply chain, the low-paid clickworkers in Kenya and the Philippines who label training data, and the dominance of tech giants like Amazon, Google, and Microsoft at the center of control over AI infrastructure. These examples challenge the myth of AI as an autonomous or neutral force, showing how AI is entangled with extractive economies and global disparities. Considering these dynamics, critical AI literacy must include an awareness of AI’s real-world costs and of the power imbalances driving AI innovation.

Another aspect to consider is user demographics. Ruha Benjamin’s Race After Technology brings race to the forefront of AI discourse, arguing that these systems often reinforce systemic bias rather than eliminate it. Benjamin highlights how seemingly neutral technologies can deepen existing racial disparities, noting that “there is an enormous mystique around computer codes, which hides the human biases involved in technical design” (78). Concerns about AI, then, are not only technical or environmental but also social. Benjamin’s analysis reveals how algorithms used in policing, hiring, and healthcare often reflect the prejudices embedded in their training data and design. By examining these injustices, Benjamin calls for inclusive and equitable technological development. Her work strengthens the case for critical AI literacy as a tool for identifying and challenging racial biases hidden behind claims of objectivity and progress. Timnit Gebru’s work highlights a different demographic issue from Benjamin’s, criticizing the lack of diversity in AI development and the harmful consequences of biased training data. Gebru states, “when you ignore the lived experiences of marginalized people, you end up building systems that fail them.” She calls for transparency, accountability, and inclusion, emphasizing that ethical AI cannot be achieved without the active participation of those most impacted by its failures.
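The mechanism Benjamin and Gebru describe, historical prejudice laundered through ostensibly neutral code, can be sketched in a few lines. The example below is entirely synthetic and hypothetical (scikit-learn assumed): a model trained on biased past hiring decisions learns to penalize one group even when qualifications are identical.

```python
# Synthetic, hypothetical sketch of bias inherited from training data
# (assumes scikit-learn and NumPy). No real hiring system is depicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
qualification = rng.normal(size=n)      # an applicant's actual merit
group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority
# Biased historical labels: the minority group was held to a higher bar.
hired = (qualification > np.where(group == 1, 1.0, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Two applicants with identical qualifications but different groups:
# the model reproduces the historical penalty.
probe = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(probe)[:, 1])
```

Nothing in the code “intends” discrimination; the model simply learns the pattern it was shown, which is precisely the mystique of neutrality Benjamin warns about.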

Together, these thinkers reveal that AI systems are not neutral or isolated technologies but deeply embedded in structures of surveillance, inequality, and bias. Chiang and Tufekci caution users about AI’s role in spreading misinformation and eroding privacy, while Crawford, Zuboff, Benjamin, and Gebru expose the exploitative labor, economic manipulation, racial injustice, and systemic exclusion that AI systems produce. Despite approaching AI from different angles (material, economic, racial, and ethical), they converge in warning that without critical scrutiny, AI will replicate and amplify the very harms it promises to solve. Their work insists that critical AI literacy must go beyond technical know-how to include a deeper understanding of power, justice, and responsibility in the digital age.

What a Proper Digital Literacy Education Must Account For

To address the complexities of AI technology and its ethical issues, digital literacy education must evolve into an AI literacy anchored in critical literacy. Such an education must account for:

Multiliteracy Curriculum Design: Integrating critical thinking into digital literacy curricula that address technical skills as well as ethical, social, and political implications.
Equal Access: Ensuring that marginalized communities have the tools, infrastructure, and training needed to thrive in a digital world.
Student Privacy: Teaching students about data security and the ways AI systems utilize personal information.

Developing such a curriculum requires institutional commitment and collaboration among stakeholders. As Benjamin and Gebru suggest, ethical engagement with AI must include diverse perspectives. Without representation, AI will continue to reflect the biases of its creators.

As AI technologies continue to shape our digital landscape, it is essential to foster a deeper understanding of how these technologies work and the ethical, political, and societal implications they carry. This paper has examined the significance of AI literacy in relation to privacy, surveillance, and access, and has highlighted the importance of inclusivity in educational settings. By synthesizing scholarly perspectives from Chiang, Bali, Tufekci, Crawford, Benjamin, Zuboff, and Gebru, it is clear that the future of AI literacy must involve not only learning how to use technology but also learning how to critique, question, and reshape it. In an age where AI is ubiquitous, we must learn to engage with it critically, advocate for ethical practices, and challenge the systemic barriers it creates to equitable access.

Bali, Maha. “What I Mean When I Say Critical AI Literacy.” Reflecting Allowed, 1 Apr. 2023, http://blog.mahabali.me/education-technology-2/what-i-mean-when-i-say-critical-ai-literacy/.

Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.

Chiang, Ted. “ChatGPT is a Blurry JPEG of the Web: OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?” The New Yorker, 9 Feb. 2023, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.

Crawford, Kate. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021. JSTOR, https://doi.org/10.2307/j.ctv1ghv45t. Accessed 17 Feb. 2025.

Gebru, Timnit. “Race and Gender.” The Oxford Handbook of Ethics of AI, Oxford University Press, 2020, pp. 251-269.

Tufekci, Zeynep. “Think You’re Discreet Online? Think Again: Thanks to ‘data inference’ technology, companies know more about you than you disclose.” The New York Times, 21 Apr. 2019, https://www.nytimes.com/2019/04/21/opinion/computational-inference.html.

Courtney Thurber is a sophomore at the University of Central Florida majoring in Writing and Rhetoric, with a minor in Legal Studies and a certificate in AI, Big Data, and the Human Experience. She plans to attend law school after completing her bachelor’s degree. Her parents have been the greatest inspiration behind her writing, and without their unwavering support, she wouldn’t be the person she is today.