by Lucy Littler
Hi, my name is Lucy, and I am not freaking out about generative AI.
I’m a Senior Lecturer in the English Department and the Director of General Education here at Rollins. I teach classes where I ask students to read and write. A LOT. On paper, I should be freaking out, right?!
Trust me, in January 2023, I was. I’m pretty sure I went full-on Ancient Mariner a few times, frantically pulling colleagues off the sidewalk to stammer and spit about the death of humanity. If this was you and you’re out there reading this now, truly my bad.
But somewhere around March of last year, I just kinda said…Nope. After the cumulative tolls of COVID, and political polarization, and years navigating academia on the non-tenure track, I just sorta refused to freak out about this, too. Because if I did, I was going to quit.
Instead, I decided to get busy trying to better understand generative AI. I started reading about it, attending conferences and workshops on it, and talking with colleagues here at Rollins and across institution types about it. I listened to the perspectives of fellow faculty across disciplines. I listened to staff and administrators. I listened to friends in professional realms beyond academia. Maybe most importantly, I started playing with generative AI myself to see what it could and couldn’t do.
After all this, I don’t pretend to be an expert. In fact, actual experts like Ethan Mollick, author of Co-Intelligence: Living and Working with AI, argue that even the creators of Large Language Models (what most people are talking about when they say “generative AI”) don’t know the extent of what they can do or exactly how they work, let alone the implications of what they mean across various sectors of society. And yet, as I look back across my last year and a half of learning, I do see a few foundational (and dare I say comforting?) trends in this admittedly dizzying (and dare I say exciting?) new landscape:
—Plagiarism is the least interesting aspect of this conversation. Sure, academic integrity (and integrity in general) is part of what we should be discussing, but what I find infinitely more inspiring about generative AI is the opportunity it presents to rethink what, how, and why we teach.
—Moreover, policing is an unwinnable, exhausting project. Faculty burnout is REAL. We’re still dealing with the soul-crushing and rippling impacts of trying to do our jobs (let alone live our lives) in the context of COVID. We’re increasingly tasked with justifying why what we do is important to an often skeptical culture-at-large. We’re expected to do more work, much of it invisible, and for less money. The last thing we need is to take on generative AI as something we have to beat. First of all, we can’t. And many of the people I’ve been learning from over the past year argue that we shouldn’t try. Rather, a valuable and reinvigorating paradigm shift away from policing and toward collaboration, toward AI as thought-partner, is a hopeful, empowering trend I’ve learned from colleagues across campuses and begun to employ in my own work (yes, ChatGPT-4 gave me some helpful feedback on what you’re reading right now, and it helped me create the image embedded in this post).
—Students and faculty aren’t so different. Make no mistake, our students are using generative AI. A LOT. But, like us, they’re a heterogeneous group of people. There are students who, like some of us, are excited about AI as a tool that will help human beings do more, better. But there are also students who are worried. Worries I’ve heard include: navigating classes with vastly different approaches to and policies on AI and Academic Integrity; decreasing skills and content knowledge due to an over-reliance on AI; AI literacy as a requirement for employability; ethics, equity, privacy, environmental costs, and the unknown. Trust me, whatever you’re feeling on this, you are not alone. Your students are feeling it, too.
—Wherever you are in this is OK. At the July 2024 UCF conference on Teaching and Learning with AI, a presenter shared her “Journey to AI Acceptance,” starting with first hearing the word “ChatGPT” in December of 2022, and then moving through phases of existential crisis, rooted in fear and frustration, to finding her footing, and even getting excited and inspired about teaching and learning again. I recognized in her story a lot of my own. Others at the conference talked about a less linear experience, often without clear answers regarding what we as educators should do. To whoever needs to hear this right now: Whatever your stages might be, and wherever you are in your process, is OK.
And now it’s a new semester. Where does my generative AI story go from here?
For my students, I will have tailored, transparent generative AI policies on my syllabus and in my assignments. Throughout the semester, I’m hoping to have candid conversations with my students about when and how to use (or not) generative AI in their work. I’m also eager to learn a lot from them about how they’re using generative AI (or not) and why.
For my colleagues teaching in the general education program (rFLA), I’m working with people across campus to co-design and co-facilitate three hands-on pedagogy workshops that will address perennial rFLA concerns and goals in the context of generative AI (you can read about them HERE, and please know that all faculty are always welcome to attend, whether or not you’re teaching an rFLA course).
For myself, I’ll probably freak out again at some point, but right now I’m leaning into the curiosity, and the hope, that define the Fall 2024 chapter of my generative AI teaching and learning story.