Featured

Dear Student: Make the right choice, and keep away from ChatGPT*

* and all automated text generators or “help me write”/“magic” pop-ups

I live in Delaware, not under a rock, so I’ve been trying to find a way to explain to my students that yes, I know what these so-called “AI” text generators are; yes, I understand that they are tempting; and no, I don’t want my students to use them. We can’t just pretend these things don’t exist, and it will be up to others, not us, to regulate and restrict them. I also don’t believe in surveilling my students or threatening them with grades (in fact, I’m ungrading). I’m an educator, so I want to educate.

By way of context, because all these decisions are situated, I teach at the university level, mostly courses that lead to a Master’s degree or teaching certification. However, I am also an ESL teacher and the chair of the instructional technology committee for our intensive English program. I don’t speak for that program, but I will be sharing these ideas with my colleagues.

I threw out an early draft of these guidelines on Twitter, and they spread far and wide (25K views and counting!). So here’s a more considered and better-cited version.

Continue reading “Dear Student: Make the right choice, and keep away from ChatGPT*”

Today in “AI” exploitation …

The goals of so-called “AI” products and the massive tech corps that power them aren’t exactly subtle. I just can’t imagine the act of willful self-delusion it takes to believe that it’s OK to experiment with or encourage the use of ChatGPT and its ilk in education. This stuff is right there:

Continue reading “Today in “AI” exploitation …”

The Big AI Lie: “Never mind the quality, feel the width”

Last week, I published an editorial in TESOL Connections where I tried to inject some realism into the hype over so-called “generative AI” in language education. Many professional associations, including TESOL, have been promoting blog posts and workshops that encourage members to experiment with and incorporate these “AI” products in their teaching, and while I acknowledge the good intent behind these events, in my article I wanted to clearly set out the risks of these choices. And let’s be clear: the use of “AI” is a choice, not an inevitability or a requirement.

One type of response to my argument boils down to this: OK, I see what you mean about copyright infringement, invasion of privacy, bias, environmental impact, deskilling of the teaching profession, plagiarism, and misinformation … but look at all the social benefits AI will bring! As the saying goes, never mind the quality, feel the width.

Continue reading “The Big AI Lie: “Never mind the quality, feel the width””

“Certified Human”: the new organic

One of the mitigations against the infiltration of synthetically generated (“AI”) content is watermarking: the idea that text, images, video, and other content created with/by large language models and other technology can be indelibly labeled as such. It’s not clear whether this is technically possible at the moment, and I can’t see how it’s desirable for the technology corporations promoting “AI”: after all, part of their hype is that such content is indistinguishable from the real thing, so why would they voluntarily slap a great big warning label on it?

In fact, I suspect the opposite will happen: creators will start putting a “certified-human” label on their work. I certainly plan to make it clear that my books, websites, lessons, syllabi, articles, workshops, and other content were not machine generated. My students, readers, editors, reviewers, and clients have a right to know that I do my own work, and to decide whether that’s something they value.* Think of it as the new organic.

Continue reading ““Certified Human”: the new organic”

Words matter: An “AI” primer

Technology corporations and their boosters are trying to shape perceptions of their products using obfuscating language. Unfortunately, this is getting picked up in casual use and especially in the media (et tu, BBC Newsround?).

So based on my reading and the tweets (or whatever) of people who know much more than me, here’s a quick list of DOs and DON’Ts:

Continue reading “Words matter: An “AI” primer”

Questions to Ask before Using AI in Education

These are the questions I’m asking myself, and which I’d like other educators, instructional designers, AI enthusiasts, and ed-tech promoters to ask.

I can’t open my social media feeds or email these days without seeing another article about teachers using generative AI products (ChatGPT et al.) or an ad for a book promising 100 AI prompts for educators. I’m skeptical and concerned about the implications of using these products as teachers or with students. These are the questions I’m asking myself, and which I’d like other educators, instructional designers, AI enthusiasts, and ed-tech promoters to ask, too:

Continue reading “Questions to Ask before Using AI in Education”

Previously on “Resisting Generative AI”….

Three pieces this week to counter the dangerous, headlong, blindfolded rush into accepting commercial generative AI products as if they were inevitable (they’re not):

  • Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers – no, obviously, ChatGPT doesn’t know whether it wrote your students’ papers. It’s not intelligent. It’s a parlor trick that predicts text on the fly. Think chattering monkeys with a short working memory. It’s hard to imagine this won’t end up in a very interesting court case, but hats off to Rolling Stone for the best attributive phrase of the year: “a campus rodeo instructor who also teaches agricultural classes.”
  • Moving slowly and fixing things – We should not rush headlong into using generative AI in classrooms – from the always excellent LSE Impact blog. Debunks the “but what about calculators?” false analogy (“It’s like comparing an abacus and a quantum computer”) and raises ethical questions about bias, plagiarism (OpenAI’s, not students’), and equity.
  • Different field, but: TV writer David Simon weighs in on the Writers Guild of America strike – Simon wrote “The Wire”, so maybe he knows a thing or two about his craft, and certainly more than Ari Shapiro, who for some reason decides to play the role of a naive AI-bro: SHAPIRO: So would you ever agree to a contract that saw any role for AI at all? SIMON: No. I would not. SHAPIRO: Huh. SIMON: If that’s where this industry is going, it’s going to infantilize itself. We’re all going to be watching stuff we’ve watched before, only worse. (Substitute classroom materials for TV scripts in this exchange and think about whether we want our labor to be devalued and our teaching to stagnate like this.)

Just because a technology exists does not mean (a) we have to use it; (b) we have to use it in education; or (c) it will inevitably become part of daily life. Instead, we need to educate ourselves and our students about the risks of this technology and the benefits of doing the actual work.

Genre Explained! The Movie!

OK, it’s not a movie, but it is a webinar introducing Genre Explained: Frequently Asked Questions and Answers About Genre-Based Instruction, by Chris Tardy, Ann Johns, and me, published this year by the University of Michigan Press. We introduce genre and genre-based instruction and describe how you might use the book.

Genre Explained is available in print or as an ebook at the Press website, or on Amazon. For international orders, try your local bookstore or Amazon site, or email esladmin@umich.edu.

Why I’m not excited by (or even using) generative AI

Between all the hype and doom-mongering over AI text generators (ChatGPT and its ilk), there is a blunt reality: these products and the profit-seeking corporations that market them are not our friends, and they have no place in education at this time.

Continue reading “Why I’m not excited by (or even using) generative AI”