Generative AI (genAI) is good at producing informational content. For example, if a user asks genAI to write a paragraph about habitats shared by white-tailed deer and cottontail rabbits, it will do fine. However, if a user asks genAI to draft a quarterly report or write a grant proposal, the results will likely require iteration, revision, and changes to tone and voice. This is because genAI lacks contextual awareness.
Generative AI’s Context Gap
Contextual awareness is understanding the implied or explicit circumstances of a situation. For instance, a data analyst who has worked at a company for years understands the company culture and internal politics. When creating a quarterly report, they know how to deliver good and bad news, frame information for the audience, and be selective about the information to include. If genAI were to create the report, it would likely present an objective and accurate summary of quarterly activities but would fall short in delivery. A data analyst would know how to “read the room” and deliver information appropriately, whereas genAI could not.
Because genAI lacks contextual awareness (or any awareness at all), users must either carefully oversee its output on tasks that demand contextual awareness or limit genAI to tasks that do not require it. GenAI could be useful for creating a list of topics or helping to organize those topics into a structure. These options leave the final rhetorical decisions to the author, who may opt to follow what genAI suggests, modify or add to the suggestions, or select some and not others. This is a variation of brainstorming, where an author makes a list and then hones it to a final set of topics organized to make a point or support an argument.
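To make this concrete, here is a minimal sketch of what a brainstorming-only workflow might look like in code. It assumes the OpenAI Python client (the `openai` package) and an API key in the environment; the model name and prompts are illustrative, not a prescribed setup.

```python
# A sketch of using genAI for brainstorming only, leaving rhetorical
# decisions to the author. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def brainstorm_topics(subject: str, count: int = 10) -> str:
    """Ask the model for candidate topics only, never finished prose."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a brainstorming aid. Return a plain list of "
                    "candidate topics. Do not draft paragraphs or argue "
                    "a position."
                ),
            },
            {
                "role": "user",
                "content": f"List {count} possible topics for a piece about: {subject}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The author prunes, reorders, and supplements this list; the final
    # selection and framing remain human decisions.
    print(brainstorm_topics("our team's quarterly results"))
```

The design choice worth noting is that the prompt constrains the model to raw material (a list) rather than a draft, which keeps the rhetorical work, selection, order, and emphasis, in the author’s hands.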
Unconscious Mimicry
Generative AI lacks consciousness and intent, which can lead to issues such as plagiarism, hallucinations (inaccurate or misleading information), and bias. Since the language models behind genAI are trained on sources such as Reddit, Wikipedia, and Twitter, genAI output reflects the inherent biases, attitudes, and perspectives of the human authors who contributed to the source material. GenAI itself has no awareness of these subjectivities; it simply generates responses based on the input it receives, producing text that could be helpful or harmful. Without an author or intent, AI text holds no inherent meaning. Instead, readers instill meaning and intent, projecting their interpretations onto it. Essentially, AI does not have feelings and is unaware of what is right, wrong, offensive, complimentary, socially acceptable, or socially deviant. This highlights the importance of considering AI’s role in writing and understanding its limitations.
Users who take genAI writing at face value, such as a lawyer who used ChatGPT to write an error-ridden legal brief submitted in court, police officers who use LLMs to write crime reports, or students who use AI for tests or coursework, could face consequences for their actions. The lawyer looked incompetent and faced sanctions; the officers risk exacerbating systemic racism by allowing inherent biases in LLM output to go unchecked; and the students might be depriving themselves of critical learning and intellectual growth, essentially paying for an education they are not receiving.
Writing in Collaboration with AI
Since genAI cannot replicate human judgment, it should be viewed as a tool to enhance, not replace, human writing. Rather than a drafting tool, AI can serve as a planning and organizing tool. There are other reasonable uses for AI, such as summarizing long documents or rewriting complex ideas in simpler language, but these should be treated as steps toward a larger goal, not the final product. AI can omit key points, misinterpret information, or, by simplifying language, alter the original meaning.
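As one example, a summarization step might be scripted like the sketch below, again assuming the `openai` package with an illustrative model name. The point of the structure is that the model’s output is explicitly labeled a draft and routed to a human reviewer rather than treated as the final product.

```python
# A sketch of using genAI to summarize a long document as an intermediate
# step. Assumes the `openai` package and OPENAI_API_KEY; the model name
# is illustrative. The returned text is a draft a human must check against
# the source, since the model can omit key points or shift meaning.
from openai import OpenAI

client = OpenAI()

def draft_summary(document: str, max_words: int = 200) -> str:
    """Return a DRAFT summary for human review, not a finished summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    f"Summarize the user's document in at most {max_words} "
                    "words. Preserve caveats and qualifiers; do not add "
                    "information."
                ),
            },
            {"role": "user", "content": document},
        ],
    )
    # Mark the output so downstream readers know it has not been verified.
    return "[DRAFT - verify against source]\n" + response.choices[0].message.content
```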
While generative AI offers the potential to enhance productivity, its limitations are real. Its lack of contextual awareness, consciousness, and intent means it cannot replace the nuanced judgment that humans bring to complex writing tasks. Instead, AI should be seen as a tool that supports writing by generating ideas and organizing documents rather than as an end-to-end solution.