Volume 9, Issue 1
Banner image for 'A.I. & Me: A Theory-to-Practice Approach to AI Ethics' by Connor Seaton. The image shows a robotic metal hand and a human hand reaching toward each other, evoking Michelangelo's Creation of Adam, against a cracked stone background. The title 'A.I. & ME' appears in teal, and the subtitle 'A Theory-to-Practice Approach to AI Ethics' appears in magenta below.

AI & Me: A Theory-to-Practice Approach to AI Ethics

This article explores the development of an appropriate literacy for approaching AI’s inevitable integration ethically by examining current attitudes toward AI, how it is used, and what practices could improve the ethics surrounding its implications for an individual’s literacy. Drawing on the theory of man-computer symbiosis, the article argues that for AI to become more ethical, we must first develop our own literacy in understanding why AI was developed in the first place, to solve human problems with human solutions, and apply that understanding to society itself before we can create an ethical AI. By focusing solely on creating an ethical AI without addressing our own society, we achieve nothing but hypocrisy in our reflection on creation.

Artificial intelligence needs no introduction in the modern age. Humanity has gifted the computer a voice to speak to us directly, and we have prospered and suffered ever since. Advancements in AI have exploded onto the global market, where seemingly every online platform, service, job, or opportunity is considering integrating Large Language Models (LLMs) to enhance the user experience. However, with new technologies rapidly introduced to keep pace with competitors and increase revenue, questions have arisen about the ethical implications of AI. One question asked time and time again is, “How does AI impact literacy in understanding knowledge?” Literacy itself has been a hot topic, with many different interpretations of what an ethical approach to it could be. James Paul Gee, a literacy theorist, argues that an individual’s culture and environment shape their literacy in “reading the word and world,” two concepts that are inextricably interdependent (Gee 38, 62). Because AI’s data reflects particular biases, it can produce unintended consequences at various stages of design, testing, and application, affecting an individual’s literacy when it is used to understand a topic without consulting an environmental source, such as the individual’s community (Blagojević 82). If anything, AI threatens to disrupt the very notion on which literacy is built, prompting these conversations about how AI can be used ethically to preserve cognitive function. This paper aims to explore the development of an appropriate literacy for approaching AI’s inevitable integration ethically by examining current attitudes toward AI, how it is used, and what practices could improve the ethics surrounding its implications for an individual’s literacy.

There are several perspectives on AI to consider: one that embraces its integration, one that sees it as the death of human ingenuity, and one that falls somewhere between these extremes. This paper lands in the mentality of J. C. R. Licklider and his theory of Man-Computer Symbiosis, which holds that the development of such technologies is all but inevitable in our evolution; however, we should define and acknowledge the relationship between human and machine, whose creation and existence can be seen as an extension of humanity, or as Licklider describes it, “the mechanically extended man” (Licklider 4–6). Licklider’s account of the relationship between humans and the machine reads much like commensalism, in which one organism benefits while the other remains unaffected. Humans take the role of beneficiaries, engaging in creative thinking, while the computer remains unaffected, handling the data and heavy lifting (Licklider 4–7). This ideal has been labeled Man-Computer Symbiosis, a harmony sought through future integration. In the current climate, however, many roadblocks obstruct the development of a perfect bond between the two parties.

For such roadblocks, take Thomas S. Mullaney’s and Michael Townsen Hicks’s two very different perspectives on AI’s identity within human literacy. Mullaney believes that symbiosis can occur if we acknowledge AI as something closer to human: because we created it, it is an extension of our being, meaning it has as many flaws as we do. Those flaws are also what hold us back from achieving such a dream, as “technological innovation is hardly always synonymous with progress and is frequently not in everyone’s best interest, distributing its benefits and harms unevenly and unequally” (Mullaney 63–65). On the flip side, Hicks and their team believe that AI has no right to be acknowledged as anything other than a machine, because labels such as “hallucinating” information from data could give the general public the wrong impression that the machine is a living, breathing being, much like us (Hicks et al. 8–9). What both perspectives share is a questioning of the implications of referring to AI as a mechanically extended human, and of what that could mean for the literacy required to acknowledge its identity in such a way.

We treat many of our creations as extensions of the human mind, its genius, and its creativity: art, music, production. It is telling that artificial intelligence is treated so differently from these other creations, for nothing we have made before has been so close to human while not being human. It’s easy to see why there’s a holdup in defining our relationship to AI: we lack the literacy to experience something that shares qualities with human identity yet isn’t human. If we acknowledge AI as a being with flaws, as Mullaney suggests, then we must live with the fact that we now share the planet with something as smart as, or even smarter than, the human race. Such an idea unsettles the status quo through which humanity sees itself, which is why Hicks suggests we see it only as a product, for acknowledging it as anything else can lead to a conflict of egoism between humanity and the AI. The lack of a middle ground between the two ideals suggests that humanity is still too early in AI’s development. We played God and got burned along the way, now faced with either raising the child that is AI or abandoning it to our endless consumerism. Needless to say, it’s only a matter of time before we have to make a decision, because the more we integrate AI into society and use it, the louder the conversation will become.

What gives the impression that AI shares unique human traits is its flexibility. Depending on how an AI is created, it can be trained on specific data, which the task prompt then shapes to produce the desired outcome. For example, say you want to create an AI surveillance system: you would train it on criminal activities and behaviors so it can recognize patterns in surveillance footage and stop crime before it happens. While this sounds good on paper, in actuality, “these same technologies actively monitor and track everyday citizens, which constitutes a violation of individual privacy and could result in future discrimination based on religious beliefs, health conditions, or even political opinions” (Cataleta 1). This highlights the bias problem AI faces in becoming an ethical human creation. As AI grows more powerful, a looming question becomes how to align it with humans with respect to safety and control, goals and preferences, and even values, when we cannot yet solve these problems at a systemic level ourselves (Cataleta 18). It boils down to this: if we can’t solve our own issues first, how do we know AI won’t repeat and reinforce them down the line?

Nevertheless, that isn’t the only dilemma in its use; there are ethical issues and implications in every context in which AI is used. Take another example of its rise in popularity: as a therapist. Companies like Character.AI and Replika have designed personalized chatbots that respond to human input, much like other AI programs, but emphasize a more personal and intimate experience. These chatbots have recently seen an increase in users discussing mental health topics, including complex feelings and relationship challenges (Abrams). There have been several instances in which a chatbot has directly influenced a user’s health, literacy, and well-being, leading the FDA to hold off on approving any chatbot for diagnosing, treating, or curing a mental health disorder. Still, several companies have designed products to improve well-being based on psychological research and expertise, including Woebot, which was trained on predefined responses approved by clinicians to help people manage stress, sleep, and other concerns (Abrams). This implies that when the training data is well tailored and focused, rather than open to the sweeping influences behind general-purpose LLMs, bias can be notably reduced, and such constraint could perhaps be seen as a way to train AI to aid humanity’s interests in a Man-Computer Symbiosis.

Further on the matter of AI’s therapeutic usage, Tony Rousmaniere, a technology analyst, and their team of researchers ran a study of 499 participants who reported mental health problems and were using AI (ChatGPT) as a therapy alternative, to see whether these chatbots could produce better outcomes than licensed professionals. They found that over 63% of participants who used LLMs reported improved mental health and well-being while navigating a variety of issues, including anxiety, depression, stress, and relationship problems. The study concluded that the preference for the switch was primarily driven by accessibility and affordability, as participants didn’t have to rely on another person’s schedule to make a late-night appointment when their traditional support systems were out of reach (Rousmaniere et al.).

This connects to the ethical implications of AI and literacy through the user ratings comparing AI with traditional therapists, as well as the reasons someone would prefer not to use a chatbot. An individual’s demeanor toward the chatbot cuts both ways, depending on whether it is treated as an extension of us or as something not alive. Over 74% of participants said the chatbot provided information on par with or better than that of a traditional therapist, offering unique advantages such as 24/7 availability, consistency, and judgment-free interactions that even skilled human providers can’t always match (Rousmaniere et al.).

On the other hand, when participants were asked why they hadn’t considered AI chatbots before the study, 82% said they doubted the chatbots’ effectiveness or preferred human interaction. Meanwhile, 91% of participants reported never receiving any biased, harmful, or inappropriate rhetoric. Of the small 9% who did, 45.5% said the bot was dismissive, 54.4% said it was incorrect, and less than 1% mentioned extreme behavior (Rousmaniere et al.).

What’s interesting about the implication here is that participants’ treatment of the chatbots falls into a morally grey area. They express a commensal relationship with the bot, calling upon it at any time at their convenience, yet they treat it both as a considered replacement for a traditional human role and as a tool to aid humanity in its endeavors. The greyness lies in the participants’ refusal to settle what AI’s ethical identity is to them: it is treated as both a being and a tool. While critical AI literacy was not tested in this study, the findings raise further questions about the ethics of introducing LLMs into medical practice to treat patients, as well as whether LLMs can be trained to provide affordable therapy.

However, given that this study includes only 499 participants, it raises concerns about aspects of its design. Regarding the percentages showing the AI’s harmful and inappropriate responses, Rousmaniere is careful to note that those surveyed were “adults with ongoing mental health conditions who had previously used language models, recruited through the Prolific platform in February 2025. The survey examined patterns of LLM use for mental health support, perceived effectiveness, and comparisons with human therapy” (Rousmaniere et al.). Since we cannot generalize every individual’s experience with the bot, nor use these participants to generalize humanity’s opinion on AI’s identity and place within society, further testing and analysis are required, which even Rousmaniere agrees should be conducted.

It should also be noted that such an invention has many parallels with other services that offer similar flexibility, such as Automated Teller Machines (ATMs). Rousmaniere explains that ATMs were once expected to eliminate bank teller jobs, but in reality the opposite occurred: “ATMs reduced the cost of operating a branch, leading banks to open more branches, which created more teller positions focused on relationship-building and complex services rather than routine transactions” (Rousmaniere et al.). This may be the best-case scenario for AI achieving Man-Computer Symbiosis, in which the relationship between the two parties is acknowledged and respected. Taking all the information together, Rousmaniere concluded that “these findings suggest that LLMs may not be replacing human therapists, but rather complementing them in an increasingly diverse mental healthcare ecosystem” (Rousmaniere et al.). While the study has its flaws, it does show an ideal outcome worth considering. It is one of many, as AI’s integration is rapid and multifaceted in how it is engaged, used, and acknowledged, but here it at least serves a purpose: a human can almost reach a middle ground with their creation. The study still doesn’t quite get over the morally grey hill, but it does step onto it, suggesting that with a much larger survey base, or the general public itself, we might finally settle AI’s ethical identity and implications as either a literal being or another creation like the ATM.

In examining how AI interacts and integrates with human society, this paper has uncovered a truth: for AI to become more ethical, we must first develop our own literacy in understanding why AI was developed in the first place, to solve human problems with human solutions, and we must apply that understanding to society before we can create an ethical AI. By focusing only on creating an ethical AI without fixing our own society, we achieve nothing but hypocrisy in our reflection on creation. As James Manyika, a theorist of technology, puts it, “progress in AI not only raises the stakes on ethical issues associated with its application, it also helps bring to light matters already extant in society. Many have shown how algorithms and automated decision-making can not only perpetuate but also formalize and amplify existing societal inequalities, as well as create new inequalities” (Manyika 18). In humanity’s endeavors to build ethics around AI, chiefly by formulating an identity that acknowledges what AI is to us, whether an extension or a tool, the conversation has come full circle to strengthening our own relationships and social structures among humans. There is a hint of irony in the fact that we cannot create an ethical AI because we cannot even be ethical ourselves, owing to systemic tendencies that AI reflects back from its creators in the most formulaic thinking imaginable.

Take this segment from Angela Daly, another AI theorist, about AI’s direct ties to the situation: “AI ethics principles and frameworks tend to center around the same values (fairness, accountability, transparency, privacy, etc.) and are insufficient to address the justice challenges presented by AI in society,” as “AI will continue to be unethical without political consciousness regarding the actors and scenarios into which it is being conceptualized, designed and implemented and the actors and scenarios that are currently excluded from consideration” (Daly et al. 104). What Daly suggests is that by continuing to view AI through a limited lens, we will never reach an ethical state; only by including the excluded actors and scenarios, namely marginalized groups of people, in an AI’s consideration can it come to view us without bias, seeing us as equals not only to itself but to each other within society.

Gee notes that there is no single proper way to view a text: literary analysis is shaped by an individual’s background, environment, and character, built from the community and world around them, hinting again that “reading the world and word” are interconnected (Gee 38, 40, 62). Applied to the ethical implications of AI, machines do not come with a designed morality; we give them that morality, and how we do so should not be reduced to following a set of rules, nor is it entirely a matter of human emotion, for moral judgment free of bias depends on reason, emotion, and impartiality together (Blagojević 85; Cataleta 1). When we look into the face of the machine, we shouldn’t see a machine, but rather how we treat other human beings. It is a strange take to see an inorganic creation anthropomorphized. Yet it is equally strange that a recent social media trend coins slurs for AI programs and their users alike, without considering what this says about our society’s systemic tendency to embrace bias in its foundations. The day an ethical AI is created is the day we are no longer influenced by bias in our interactions, for in fixing humanity we will, by proxy, address the ethical implications of artificial intelligence.

Abrams, Zara. “Using Generic AI Chatbots for Mental Health Support: A Dangerous Trend.” American Psychological Association, 12 Mar. 2025, www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists. Accessed 30 Nov. 2025.

Blagojević, Bojan. “Raising Skynet: Moral Status of AI and Perspectives of Teaching Ethics to AI.” Art + Media: Journal of Art & Media Studies, no. 36, Apr. 2025, pp. 81–91. EBSCOhost, https://doi.org/10.25038/am.v0i28.616. Accessed 30 Nov. 2025.

Cataleta, Maria Stefania. Humane Artificial Intelligence: The Fragility of Human Rights Facing AI. East-West Center, 2020. JSTOR, http://www.jstor.org/stable/resrep25514. Accessed 30 Nov. 2025.

Daly, Angela, et al. “AI Ethics Needs Good Data.” AI for Everyone?: Critical Perspectives, edited by Pieter Verdegem, University of Westminster Press, 2021, pp. 103–22. JSTOR, http://www.jstor.org/stable/j.ctv26qjjhj.9. Accessed 30 Nov. 2025.

Gee, James Paul. Social Linguistics and Literacies: Ideology in Discourses. 5th ed., Routledge, 2015, pp. 26–62. Taylor & Francis Group, https://doi.org/10.4324/9781315722511. Accessed 30 Nov. 2025.

Hicks, Michael Townsen, et al. “ChatGPT Is Bullshit.” Ethics and Information Technology, vol. 26, no. 2, June 2024, https://doi.org/10.1007/s10676-024-09775-5. Accessed 30 Nov. 2025.

Licklider, J. C. R. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics, vol. HFE-1, Mar. 1960, pp. 4–11. https://groups.csail.mit.edu/medg/people/psz/Licklider.html. Accessed 30 Nov. 2025.

Manyika, James. “Getting AI Right: Introductory Notes on AI & Society.” Daedalus, vol. 151, no. 2, 2022, pp. 5–27. JSTOR, https://www.jstor.org/stable/48662023. Accessed 30 Nov. 2025.

Mullaney, Thomas S. Your Computer Is on Fire. Edited by Kavita Philip et al., 1st ed., MIT Press, 2021, https://doi.org/10.7551/mitpress/10993.001.0001. Accessed 30 Nov. 2025.

Rousmaniere, Tony, et al. “Original Research: ChatGPT May Be the Largest Provider of Mental Health Support in the United States.” Sentio University, 23 Sept. 2025, https://sentio.org/ai-research/ai-survey. Accessed 30 Nov. 2025.

Photograph of Connor Seaton standing in front of a UCF Student Research Week backdrop. He is wearing a navy blazer and patterned button-down shirt, with glasses and a conference lanyard.

Connor F. Seaton is a senior majoring in Creative Writing with a minor in Writing and Rhetoric. He plans to become a professional novelist, combining his creative passions for the arts with real-world research into culture, climate, and philosophy. He aims to bring together his ambition for rhetoric and the hearts of his readers, so that, through the power of storytelling, we can achieve a rich literacy that changes the world into a kinder, more considerate place.