By Laura Rees, Associate Professor, College of Business
Editor’s Note: This faculty guest post marks the first anniversary of the public release of ChatGPT. The Center for Teaching and Learning invites first-person posts from OSU faculty about applications of generative AI to teaching and learning.
What do source code for showing videos in class in a particular way and marketing slogans for pickles have in common? This is the story of my growing fascination with, and use of, generative AI (specifically, ChatGPT) in my work and life.
I study ambivalence as one of my main research areas, so I’m quite good at coming up with pros and cons, the good and the bad, the two (or more) sides to almost anything. Before I begin, the usual caveats with AI and ChatGPT apply here, and others have said it better (e.g., Mollick & Mollick, 2023, and Pasquini, 2023, the latter a compelling new summary of some particularly striking issues). But I was unexpectedly delighted by my recent experience with several Coursera courses, most of them on prompt engineering and one on teaching with AI (much shorter and simpler than the prompt engineering and related courses I took). I should also note that because I’m cheap, and didn’t want to feel guilty if I fell behind schedule (which I inevitably did), I took the free versions of the handful of courses I’ve done. However, I could easily see how the paid versions would be valuable, as they give you access to more detailed assignments and opportunities to practice and get feedback. I also stuck with Vanderbilt’s courses, taught by Jules White (I’m biased; it’s where I did my post-doc), and found them excellent.
All that said, here’s what I have taken away from my experience, and why I’ve been telling everyone who will listen that they need to jump on this LLM train:
- ChatGPT (and, by extension, all generative AI) requires learning how to communicate in a very specific way. The best way I can think to describe it is like talking with a really precocious toddler. It simply responds based on what it understands (so if you feed it gibberish, it will give you gibberish output, though recognizing that you’ve said gibberish also takes practice). And it only responds logically based on what it has observed (including all of our human foibles and biases), without any filter. There is some creativity involved, and trial and error is absolutely vital to communicating well with ChatGPT. My initial clumsy attempts at using ChatGPT for more than simple whimsy now seem like child’s play. After learning and practicing specific prompt engineering concepts and structures through the Coursera class, I now feel much more confident that I am interacting at least reasonably effectively with ChatGPT. My improved prompts more often than not return seriously useful output, and they have helped me enough that I’ll certainly continue working on my prompt engineering skills. Just think: the entire world (or seemingly so) at your fingertips; you just need to learn some specific ways to communicate with it!
- You can use ChatGPT to help you with ChatGPT. You can teach it how you want to communicate and it can in turn both learn these preferences and even correct you on unintentional errors YOU make (thus, sometimes it can seem more like an annoyingly precocious toddler…but one that will correct you politely so you can’t even be angry).
- You can ask ChatGPT to help you figure out how to ask it for help. That is, it can help you figure out how to ask it what you really want to ask, even if you don’t quite know how to ask it. It sounds super meta, and it is. You can even ask ChatGPT to write prompts for itself to solve a problem you give it, then compare and contrast the outputs those different prompts produce, based on your criteria (or lack thereof). (For the script-minded, there’s a small code sketch of this meta-prompting idea after this list.)
- The Vanderbilt courses are very well done: short lectures with examples guide you through the various concepts, and you can pause and/or rewatch them as much as you like if you prefer to work along with the professor (as I do). The practice exercises are varied and interesting, and they cover a wide array of contexts, hence the request to ask ChatGPT to come up with a list of ten marketing slogans for pickles, per my opening example.
- But how have I used it in real life, both within and beyond class activities? Even when I didn’t want to use the class ideas I brainstormed with ChatGPT, the exercise sparked my own creativity and new ideas built off what it suggested (asking it for Halloween costume ideas for a nerdy negotiations professor did not go so smoothly…oh well). Building off an earlier point, you can ask ChatGPT to suggest how it can help you with specific requests such as brainstorming, given certain course (or other) constraints, and it can critique ideas you feed it. I have also asked it research-related questions and found that, as you’ve almost certainly heard, its responses were best when I already knew a lot about the topic. But the responses were still surprisingly thought-provoking and at least as useful as asking a colleague who isn’t familiar with my area (and they didn’t require bothering a colleague; ChatGPT doesn’t care how much you pester it).
- Next term I plan to bring AI-generated responses to general questions about that week’s topic into class and ask students to consider and critique those responses. This has the added benefit of disincentivizing AI “cheating” while teaching students how to engage with AI more effectively, something they’ll likely have to do more and more in their professional and personal lives outside OSU. For research, I’m still thinking through how best to use ChatGPT while being careful of my own and others’ IP rights (at least until more guardrails are developed). But I can say that AI will definitely be part of my research world going forward, likely in multiple ways, and probably in some ways I haven’t even discovered yet.
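For anyone who would rather script these interactions than use the chat window, here is a minimal sketch of the meta-prompting idea mentioned in the list above, written with the OpenAI Python client. To be clear, this is just an illustration of the concept, not something from my classes or the Coursera courses: the model name, task, and prompts are all placeholders you would swap out for your own.

```python
# A minimal meta-prompting sketch using the OpenAI Python client (openai >= 1.0).
# Everything described above was done in the regular ChatGPT window; this just
# shows the same two-step idea in code. Model name, task, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

task = "Brainstorm discussion questions on anchoring for an MBA negotiations class."

# Step 1: ask the model to write a better prompt for the task.
meta = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are an expert prompt engineer."},
        {"role": "user",
         "content": f"Write an effective prompt I could give you for this task: {task}"},
    ],
)
improved_prompt = meta.choices[0].message.content

# Step 2: run the model's own suggested prompt, then compare its output
# with what you get from the plain task description.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": improved_prompt}],
)
print(answer.choices[0].message.content)
```

The same two-step pattern works fine in the ChatGPT window itself, which is how I actually do it: first ask it to write the prompt, then paste that prompt back in and compare the results.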
So, will I keep using generative AI tools and taking active steps to learn more about them? Absolutely. For one thing, burying my head in the sand seems pointless. AI is here, and it is actually very cool (and dangerous…again, it’s always ambivalent…). And rather than bury one’s head in the sand, isn’t it more fun to think about the sand castles we could build together?
Oh, and my favorite pickle marketing slogan? “Zesty zing, pickle fling!” It’s classic (or should I say, Vlasic?).
Laura Rees is an associate professor in the College of Business at Oregon State University. She received her bachelor’s degree in Economics from Harvard and her Ph.D. in Management and Organizations from the University of Michigan. Laura’s research focuses on emotions, attitudes, and automatic behavior (habits) in the contexts of negotiation, decision accuracy and performance, persuasion and cooperation, and interpersonal perceptions and interactions in the workplace. Her work has appeared in Academy of Management Review, Academy of Management Annals, Journal of Applied Psychology, Journal of Business Ethics, and others. Before academia, Laura was a consultant with The Boston Consulting Group.
Sand castle image by DALL-E 2, Nov. 30, 2023
CTL is offering a Winter ’24 Resilient Teaching Faculty Learning Community. See the call for participation. Apply by Dec. 11, 2023.
Editor’s Note: The opinions expressed in guest posts are solely those of the authors.