By MacKenzie Moon Ryan, Professor of Art History

In 2020, I redesigned my courses and assignments to fit the context: COVID and socially distanced learning. I turned away from rote memorization and welcomed higher-level thinking, merging visual analysis with cultural context. I shifted to take-home, open-resource exams with much higher expectations for student engagement.
But the improvement I thought I had made was short-lived. Since November 2022 and the release of ChatGPT, my grading experiences have become progressively more frustrating. My students’ work has become (1) better organized but (2) unnecessarily verbose, with (3) only surface-level engagement and (4) a distinct inability to parse fact from fiction. In short, too many students started outsourcing their brains to AI, trusting in the predictive text (and speedy output) to pass my classes.
As Rollins’s globalist art historian, an Africanist by training, I can tell you the open internet is not a well-informed place for knowledge in my area of expertise. I knew this, but I had to re-learn it by feel while reading my students’ (co?-)authored work. At some point in this increasingly draining experience (it felt like wading through mud, at times), I decided I needed to stop avoiding AI and educate myself on what it can and can’t do. I realized it was time for another redesign, this time geared toward guiding students to use AI responsibly.
I um-ed and ah-ed about the Rollins AI assignment redesign initiative offered in 2024-25. I could never quite bring myself to apply or focus on just one part of my courses—especially with syllabi, Canvas sites, and assignments already in play. But when the Endeavor Center teamed up with Instructional Design and Olin Library to offer a two-day AI Course Redesign Institute (CRI) over the summer, I made the time and space.
Concrete CRI Content—Immediate and Impactful Course Changes
The first thing of immediate use was the assigned pre-reading: a short, to-the-point, thoroughly of-the-moment (published in 2025!) selection by professional writer and writing teacher John Warner. More Than Words: How to Think about Writing in the Age of AI guides readers through how large language models (LLMs) are built and how they function. They merely reassemble statistically likely word chains from the open-text fodder they were “trained on” (I prefer the bodily “fed” instead).
When I learned this, I realized I could share with my students that LLMs will largely repeat the most common words from sources like blogs and websites. AI isn’t trained on peer-reviewed sources, nor do these LLMs have access to materials locked behind paywalls. So initially AI can sound plausible (it has access to many thousands of abstracts), but when asked for any great depth, it fails to deliver in content or length (I asked for 1,500 words; it routinely stopped under 1,000). It functions very much like an overexcited spaniel that desperately wants its owner’s approval, assuring me time and time again that it had delivered on what I asked (even though 900 words does not 1,500 words make).
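For readers who want to see the "statistically likely word chains" idea concretely, here is a minimal sketch (my own toy example, not from Warner's book): a bigram model that picks each next word purely by how often it followed the previous word in its training text. Real LLMs are vastly more sophisticated, but the predict-the-next-token principle is the same—and, crucially, the model has no notion of truth, only of frequency.

```python
import random
from collections import defaultdict

# Tiny "training" text standing in for the open-web fodder an LLM is fed.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# Count which words follow each word in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Chain statistically likely next words, one at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no known continuation; stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits is plausible in isolation, because it genuinely did follow the previous word somewhere in the training text; whether the whole chain is true or meaningful is never checked.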

The first thing I implemented this semester was asking my students to read this one short chapter from Warner’s book. By sharing with them how AI functions and where its limitations lie, I asked students to recognize for themselves how shortsighted it is to entrust their submitted work to AI predictive text. As Brooklynn Lehner, Director of IT Experience at Rollins and one of the AI CRI facilitators, pointed out: you wouldn’t bring a forklift to the gym, so why would you outsource your brain when training it is the whole point? With this new early-semester reading, I asked my students to reconnect with the point of college: to actually engage in the challenge of academic work, to form habits of mind and strengthen one’s intellectual and reasoning ability.
This CRI also showed us a few ways to engage AI tools—from coaching it to respond in different ways (role-playing: pretend you are an undergraduate student or pretend you are a liberal-arts professor) to tasking it with becoming our very own AI agent. The helpful show-then-try format let us play with the tools with friendly assistance. This supportive play model helped me redesign my semester-long assignments not to become “AI-proof,” but to ask students, if desired, to use AI in ways congruent with AI’s abilities.
Remember the old animated paperclip from early versions of Word? Help was there in small ways, but it didn’t offer to complete the entire task for you. That’s my ideal interaction with AI for students: know what AI is capable of, what it’s good at, and how to call the tool into action in appropriate ways.
Re-centering Embodied, Local, Public-Facing Work

Finally, the CRI reminded me of the power of embodied experiences. This is work that can’t be replicated by predictive text and requires investing personal time, effort, and (dare I say) ideas into each assignment. Such assignments are more exciting to read (and grade) when they are unique and grow directly from our class experiences.
So even more than in past semesters, projects for my courses will be based on art here at Rollins and, where possible, will culminate in practical, public-facing text: artwork labels, audio-guide recordings, gallery tour outlines, and extemporaneous oral presentations. When students must put their ideas into words and share them publicly, perhaps a sense of personal stake will give pause even to those who might otherwise lean too heavily on AI to outsource their work. Through these intentional design choices, I hope to instill the sense that we are in this business together: crafting ideas, building critical thinking skills, and strengthening analytical capacity.
Ultimately, the AI Course Redesign Institute didn’t hand me a tool list; it gave me language, structure, and the courage to align my course with what I value most: genuine thinking, visible process, and work grounded in our shared experience.