Jeanne Beatrix Law believes that “prompting is writing” when using generative AI tools. Photo: COURTESY OF JEANNE BEATRIX LAW
“Yet, whether my colleagues like it or not, most college students are using it.” So wrote Jeanne Beatrix Law, a professor of English at Kennesaw State University (KSU) in the United States, in an article on artificial intelligence published by “The Conversation.” In contrast to many scholars focused on the disruptive potential of generative AI output, Law approaches the issue from an “input” perspective, grounded in the principle that “prompting is writing.” Through this lens, she envisions a future of coexistence and mutual adaptation between humans and AI, both in her teaching and research. In a recent interview with CSST, Law shared her views on the future trajectory of AI development and its deep entanglement with the fabric of human society.
Empowering people, not replacing them
CSST: The prompts entered by human users form the basis of AI outputs, while the content generated by AI, in turn, serves as a resource for learning, work, and daily life. How do you and your research team understand the relationship between input and output? And how has this understanding shaped the way you use AI in your teaching and research?
Law: At KSU, we have launched a Graduate Certificate in AI and Writing Technologies, where we train educators and writers to thoughtfully integrate AI into research, teaching, and authorship. All courses in this program are available fully online, making it accessible to working professionals and distance learners alike. We emphasize human-at-the-helm prompting, a model that centers ethical decision-making and rhetorical intentionality in all stages of human-AI collaboration. I believe the future of AI in education lies in empowering individuals—not in replacing them—and that starts with mindful, transparent prompting practices. At KSU, we have tested and deployed these types of models.
Through my Rhetorical Prompting Method (RPM), we teach writers how to treat prompt engineering as an extension of critical thinking and genre awareness. This method is now embedded in university-sponsored Coursera courses and widely adopted in faculty development settings. RPM is a recursive, audience-aware process modeled after best practices in writing pedagogy. It guides students to revise not just their writing but their prompts, helping them generate clearer, more rhetorically effective outputs. Instead of relying on first outputs, students learn to craft purpose-driven prompts, analyze AI responses, and iterate with intention. Thus, they are developing both their writing and critical thinking along the way.
Complementing this, the Ethical Wheel of Prompting provides a visual decision-making tool to assess whether an AI-generated output meets four essential criteria: Usefulness, Relevance, Accuracy, and Harmlessness. This framework ensures that the human remains at the center of authorship and takes full responsibility for the outcome. Whether revising an email, drafting an essay, or building a research outline, users are trained to be not just efficient but also ethically mindful writers in AI-supported contexts.
Generative AI may offer drafts or inspiration, but it cannot be considered a co-author. The human must remain in control of the composition process, from setting the intention to reviewing the impact. This is why I developed the Ethical Wheel of Prompting, a metacognitive tool that prompts users to ask: “Is this output useful, relevant, accurate, and harmless?” In doing so, it anchors human responsibility in every stage of generative work and ensures that accountability rests with the user, not the tool.
To date, more than 10,000 students have enrolled in my eight Coursera courses on AI writing, prompt engineering, and professional communication. These courses have become a resource not only for individual learners but also for institutions seeking scalable AI training with a rhetorical and ethical foundation.
A process, not a shortcut
CSST: As an academic working in higher education, how do you foresee the development of AI in the years ahead? What kind of future would you hope to witness and help shape?
Law: As AI becomes more deeply embedded in educational practice, I expect to see a shift from generalized debates to more nuanced discussions and case-specific applications of generative AI in classrooms. Productive conversations are moving beyond “should we or shouldn’t we” toward “how, when, and for whom.” This transition opens a path for AI to serve as a rhetorical partner and a writing partner, helping writers clarify ideas, test interpretations, and engage audiences, rather than simply speeding up task completion.
Data from OpenAI (2025) showed that over one-third of US college-aged adults (18–24) use ChatGPT, and over one-quarter of their messages are related to learning, tutoring, and academic support. A survey of over 1,200 students found that 49% used AI to start papers or projects, 48% to summarize long texts, and 45% to brainstorm creative projects. These statistics underscore a critical imperative: We must meet students where they are and equip them with frameworks to use these tools ethically and effectively.
I’ve written extensively about these approaches in public and academic forums. In my article for “The Conversation,” I write specifically about how I teach college students to write with AI and to think critically about it. This article was republished by Phys.org, Scroll India, and nearly two dozen other outlets. That level of syndication shows the worldwide appetite for principled, practical guidance on integrating AI into education.
So, what do I expect? A future in which writing with AI is taught as a process, not a shortcut—and where educators develop fluency in choosing the right tools, for the right writers, at the right moments.
To support this future, my research team at KSU is contributing empirical data. We have an upcoming publication with Computers and Composition Digital Press in which we discuss our large-scale student survey results. Our findings confirm that students at typical public universities in the US are turning to generative AI for writing support, topic exploration, and revision strategies—often before institutions provide formal training. This reinforces the urgency for higher education to offer contextual, use-case-driven AI instruction grounded in writing pedagogy and ethical reflection.
Humanistic frameworks are indispensable
CSST: In your view, how are the development of AI and the digital transformation of education connected to the wider set of global challenges and opportunities confronting humanity?
Law: The digital transformation of education is not happening in a vacuum. It’s interwoven with climate concerns, shifting labor demands, political polarization, and the need for global ethical frameworks. As educators, we are not only deciding whether to use AI; we are modeling how to live with it. On sustainability and environmental impact, I am certainly not an expert, though I have studied MIT’s 2025 report on calculating that impact. I do teach how we can offset these impacts, both through rhetorical prompting and through lifestyle changes that may decrease our own individual carbon footprints.
This is why I advocate for teaching AI literacy through rhetorical awareness and epistemic humility. My approach centers on understanding how language is generated, why certain outputs appear, and what responsibilities we assume when we use AI tools in classrooms or publications. By focusing on local context—community colleges, research universities, multilingual classrooms—we can tailor AI instruction to diverse student needs without imposing one-size-fits-all approaches.
Moreover, the integration of AI into writing instruction reveals the hidden labor and human authorship behind machine learning systems. We must be vigilant about reinforcing ethical transparency, including acknowledgment of underpaid human labor in data labeling and model training. To navigate this complex terrain, we need not just digital skills but humanistic frameworks.
My academic commentaries explore these complexities, particularly in essays like “Bits on Bots: Continuing the Conversation on Generative AI” and “What Makes a ‘Good Prompt?’ Teaching Writing as Prompt Engineering.” These have been used in faculty workshops across the US, alongside my keynote presentations at various conferences, most recently the Opening Plenary at the 2025 AAC&U Forum on Digital Innovations. That plenary was a call to action for K-12 and higher education professionals to engage with generative AI literacy not as a generic tool but as a contextual practice that must be scaled mindfully for different student populations.
Through my Substack, particularly in posts like “Distant Writing,” I continue these conversations with educators around the world. That piece reflects on the shifting experience of authorship and attentiveness in the age of AI, and how generative tools like ChatGPT can become meaningful companions in the writing process when guided by clear rhetorical and ethical frameworks. I advocate for flexible, ethical, and inclusive strategies that acknowledge the reality of AI in students’ lives and equip them to use it critically and responsibly.
Ensuring ‘human at the helm’
CSST: In your opinion, how could fellow scholars in the international academic community be inspired by the practical results you generated and the reflections you voiced?
Law: First, I want to stress that prompt engineering is not a trick but a teachable literacy. In my own courses and teacher training workshops, I often start with this line: “Prompting is writing.” When we treat it as such, we may stop fearing AI as a threat to literacy and start seeing it as a space to practice rhetorical thinking. This is especially important for students with disabilities, multilingual learners, and first-generation college students who often benefit from cognitive offloading, multimodal feedback, and real-time clarification.
Second, I would encourage global institutions to adopt the principle of “Human at the Helm” rhetorical prompting. This is not just a slogan—it’s a framework that foregrounds intentionality, revision, and accountability. As AI tools become more fluent and anthropomorphized, educators must teach students not just to use these tools, but to critically evaluate and refine them. AI can assist, but it should never replace the reflective processes that make writing and thinking human.
This is why the Ethical Wheel of Prompting plays such a vital role in my pedagogy. It ensures that students and educators remain accountable authors of their work. Whether composing a research abstract or generating ideas for a personal narrative, the user is responsible for evaluating the output’s relevance, facticity, and social impact. This model doesn’t just teach students how to write better; it teaches them how to think ethically about the tools they use.
We are not training students to outsource their ideas to machines. We are preparing them to collaborate with powerful tools and to take responsibility for what they produce, publish, and perpetuate. AI literacy, in this light, is not just about text generation; it’s about civic authorship in a digitally mediated world.
Edited by YANG LANLAN