I don’t know if you’ve heard about it, but there’s this new technology
on the block. It’s making life very difficult for writing teachers.
Worse, it’s raising a wave of ethical questions, especially for
folks like me who tend to embrace new technologies (even on this
blog) and encourage students to do the same. Obviously, it’s AI.
Let me start with some of the bad takes I’ve seen. By far the worst one
is that AI is similar to other technological developments, and that we
should simply respond to AI in the same ways we responded to them.
Here are two things I have not just heard but heard consistently from
various members of the field of Rhetoric and Writing Studies:
AI is just like the development of word processors, which radically
transformed writing. We will just need to use our solid rhetorical
awareness to teach students how to write in this new context, like
we have before.
AI is just like the Internet, which radically transformed writing.
We will just need to use our solid rhetorical awareness to teach
students how to write in this new context, like we have before.
If you are reading this and noting that word processors and the
Internet have almost nothing to do with each other, reach out, because I
will personally give you a high five if we are ever in the same
vicinity. If you also notice that using them as rationalizations for
embracing AI in the classroom is weird, because AI actually has nothing
to do with word processors and, in the context of writing classrooms,
functionally nothing to do with the Internet, you get two high fives. If
you notice that they are literally the exact same formulaic statement
and, based on that, probably not too useful, I can, I don’t know, give
you a dap or something.
Here are a few other takes:
Thinking about students using AI unethically frames students as
cheaters. Ugh. No, it doesn’t. Students, when you ask them, will
tell you that other professors are telling them to use AI for
writing. The difference is that the purpose of those classes isn’t
for students to develop as writers. In other words, the learning outcomes of
their courses don’t depend on students NOT using AI tools. Students
are not cheaters; they’re tragically getting acontextual, mixed messages from
across their university.
AI is absolutely terrible for the environment and should be
banned. Okay, this is a great take. But how far are we willing to
go with that standard when it comes to the environmental impact of the
technologies we already teach with? Take a look at the environmental
damage and geopolitical violence behind the Internet, then come back
to this post. And also, what
are you going to do when a student uses it anyway after their
professor in another department teaches it as a tool that everyone
needs to incorporate into their workflow?
The good thing is that AI is not good at [insert genre that is
specifically related to the course I teach]. Come on. Just, come
on.
Let’s be honest with ourselves. There is only one justification for
embracing AI in the writing classroom, and it’s the one that writing
teachers don’t want to admit. There is nothing, presently, a teacher
can do about it. Furthermore, the ethical problems and/or inconsistencies of
any possible response are patently obvious.
Don’t believe me? Well, maybe you’re one of the following people.
I know what student writing looks like and have a perfectly
calibrated sense of when I’m reading AI-generated text. I have
decided to ban AI in my course. Okay, let’s assume that first
sentence is true. Consider this: You have an international student
with English as a second language. They use AI to translate your
assignments to understand them better, then write their essays in
their first language. Then, they use AI to translate their ideas
back into English. By the end of the process, their writing is going
to look markedly similar to AI-generated text and, in a way, it is.
But all the ideas are their own; they were just originally in a
language that you don’t speak or read. Are you comfortable
removing a tool that helps to level the field for speakers from
diverse language backgrounds? Are you comfortable failing this
student based on your felt sense of AI?
I will simply discuss issues related to AI-generated text with
individual students as they arise. Based on this comment, I know
that you are not an underpaid non-tenure-track or adjunct faculty
member teaching 100+ students a semester. In other words, one of the vast
majority of people actually teaching first-year writing. So, take a
moment to reflect. How much time is that going to take, given the
ubiquity of AI tools, and is your position in the university system
affecting your belief in the possibility of such a strategy? Hey! It
is.
I will create lesson plans that directly address the ethics of AI
and include clearly articulated policies regarding its use in my
class. First, that is sincerely excellent. You should absolutely
do that. But what happens when someone violates your clearly
articulated policy? What do you do? Walk back to where you started
and think about the question more.
I have simply decided not to care about students using AI in my
classes. Uh… Do you, I guess? You might consider whether the
strategy of just ignoring paradigm shifts in how people think about
writing is a solid look for the field of Rhetoric and Writing Studies.
I’m going to use one of those AI checker things. Straight to jail.
Those technologies don’t work.
I really don’t have an answer here, except that I truly feel that there
is no way to respond ethically to AI in the writing classroom at present.
What’s worse is that this generally seems to be an issue that
universities are leaving up to individual teachers rather than addressing
with institutional policies and guidelines. My sense is that the field of
Writing Studies is just about to get over the hump of the celebratory
impulse (new writing technologies!) and get down to the real work of
figuring out what to do. Because, currently, I don’t know what to do
about AI, and neither do you.