Limed: Teaching with a Twist

Preparing Students to Be Literate and Critical AI Users

Episode Summary

Many educators are currently wrestling with how AI is changing their pedagogy and how it may affect student learning. In this episode, we discuss how one professor is preparing for AI in her media law and general education courses, and the panel provides tips and insights on how AI can be used intentionally and ethically to improve student learning outcomes.

Episode Notes

See our extended episode notes at https://www.centerforengagedlearning.org/preparing-students-to-be-literate-and-critical-ai-users/

Jessica Gisclair, an Associate Professor of Strategic Communications, is wrestling with how artificial intelligence might change her teaching practices. She is optimistic that educators can leverage AI as a tool to enhance learning outcomes and shares some of her ideas and concerns with our podcast. Panelists Derek Bruff, Jill McSweeney, and Gianna Smurro offer some practical and philosophical insight that helps us think about positive use cases for AI in the classroom, the importance of having a plan for AI use, and some of the ethical concerns raised by both instructors and students about how AI and the information we share with it may be used.

This episode was hosted by Matt Wittstein and edited by Jeremiah Timberlake and Matt Wittstein. Limed: Teaching with a Twist is produced in collaboration with Elon University's Center for Engaged Learning.

Episode Transcription

Limed: Teaching with a Twist

Season 2, Episode 1 – Preparing Students to be Literate and Critical AI Users

Matt Wittstein (00:00:13):

You are listening to Limed: Teaching with a Twist, a podcast that plays with pedagogy.

(00:00:26):

Welcome to our second season of Limed: Teaching with a Twist. This episode introduction was drafted by Parrot.AI, and edited for clarity by me, Matt Wittstein. Episode Art was created using Bing Image Creator. In case it wasn't clear, this episode is about artificial intelligence in the classroom.

(00:00:47):

Jessica Gisclair, an associate professor of strategic communication at Elon, believes that artificial intelligence, AI, can be a valuable tool for students in navigating dense material and leveling the playing field. She plans to set a positive tone regarding AI usage, workshop student writing, and emphasize the importance of avoiding plagiarism and cheating.

(00:01:10):

The panel discussion on AI and education with Derek Bruff from the Ole Miss Center for Excellence in Teaching and Learning, Jill McSweeney from Elon University's Center for the Advancement of Teaching and Learning, and Gianna Smurro, a junior communications major and Center for Engaged Learning Student Scholar at Elon, focused on successful implementations in the classroom, understanding the future of AI, collaborating with students, and the ethical implications of AI use. Participants shared their experiences with AI and the limitations of AI tools, while also discussing the need to teach students to be critical and literate users of AI.

(00:01:48):

I'm Matt Wittstein. I hope you enjoy this episode.

(00:01:56):

Hi Jessica, welcome to the show. I am so excited to have you here. We're going to be talking about a very popular topic right now, and that is artificial intelligence. But before we get there, I would love for you to introduce yourself to our audience.

Jessica Gisclair (00:02:10):

Thank you, Matt. I appreciate you inviting me to have this conversation with you. I am Jessica Gisclair. I'm an associate professor in strategic communications at Elon University.

Matt Wittstein (00:02:20):

And so tell me a little bit about your journey with artificial intelligence. We're going to talk about how you want to implement it into your class, but how did you get there? What made you realize, not just that it was happening all around us, but that you need to do something in your classes?

Jessica Gisclair (00:02:35):

I wanted to be proactive. Many people around me were talking about AI and they were very worried and very anxious, and I think that anxiety was coming from professors who were concerned that AI was going to assist students in committing plagiarism or other kinds of honor code violations. And certainly that's always on our radar, but I didn't think that AI was the first time this was ever going to be a concern for faculty. So I thought, let me look into what this really is all about. I know students are using AI and I know it can be a positive tool, so how do I approach AI in the classroom and use it for a positive experience with students?

Matt Wittstein (00:03:14):

I love that ethos of let's be proactive and let's make this part of a positive learning experience. So let's get some of that context a little bit. What courses do you teach? Is there a specific course that you're thinking about implementing this in?

Jessica Gisclair (00:03:29):

The courses that I teach are usually for upper-level students, third- and fourth-year students, and these are required courses. One is a media law and ethics course that's required for our journalism students, and the other is a capstone course for the general curriculum requirement. In both of those classes, students produce a lot of short-form as well as long-form writing projects, and so I knew that there were tools available to assist them in brainstorming and getting started with research, but I wanted to make sure they were using those tools appropriately and not overstepping boundaries with some of those tools.

Matt Wittstein (00:04:06):

When you say that these are upper level courses, are they sophomores, juniors, seniors? Are they serving as prerequisites in any way? How does this fit into where they're at in their academic journey?

Jessica Gisclair (00:04:20):

So usually these are juniors and seniors, quite a few seniors actually, so these are probably their capstone experiences in both of those courses. Media law and ethics is one of the more difficult courses in our curriculum. It's a 400-level course, which is a high-level course for us, and the capstone course that they take is also a 400-level course. Both are reading- and writing-intensive courses, so there's a lot happening in those courses at a rapid pace. So yeah, I think for our students here, for my students I should say, I could see where having extra tools to help them get through very dense material would be of great assistance to them.

Matt Wittstein (00:05:00):

It's pretty obvious that you're already thinking about how to use it in that positive way. You just said tools to get through dense readings, for example. And I think when we initially think of AI as educators, we're really worried about the artificial intelligence doing the idea generation for us. So how does this piece fit in, or not fit in, with your class?

Jessica Gisclair (00:05:25):

I've thought a lot about that. So I've done a little bit of research on AI and looked at what other universities are actually saying in terms of statements around AI, what some academic journals are saying about statements around the use of AI, and basically what it comes down to is it's a great space for you to be in a sandbox and play and see what's out there, but it shouldn't be the end result of your work and what becomes your identity in a classroom. So I think, when I look at AI and I think about it, I think about the worrisome piece would be, I don't want students using AI to do the substantive part of their work, but I do want them to use it to get them started, to get them thinking, to get them focused. But eventually my job in the classroom, I think, is to help them then take that information and make it their own.

(00:06:19):

So if I understand how they're using AI, I think I could help them use it to better benefit, to get to the point where I need them to be, which is thinking on their own, but using AI perhaps as that one step up to get them started. And admittedly, Matt, in the courses that I teach, and I think in a lot of courses like these, it's the first time students are being introduced to some very difficult topic areas. So having tools to help them understand, in addition to me in the classroom, is something that I should embrace. I should give them every opportunity I can to help them get through that material.

Matt Wittstein (00:06:58):

Can you clarify what you mean by difficult topic areas?

Jessica Gisclair (00:07:01):

So for example, in media law and ethics, it's often the first time that students really get exposed to law. Students need to learn how to understand why Supreme Court justices make certain decisions, how those decisions impact society, how they might impact free speech, how they might impact media professionals, et cetera. And in order for them to understand that, I have to walk them through a lot of dense material, and it's often the first time they've been asked to really think about this and realize that this material, what they're learning, will impact their professional careers. Knowing how to avoid certain situations around the law is helpful to a journalist. Knowing what their First Amendment rights are is helpful for everyone. So therefore, I spend a lot of time giving them lots of materials to work with, from simple readings to hypotheticals to debate assignments and what have you, to get them to analyze situations. AI can help them get there a little bit quicker, and I think we could go a little deeper if we use AI appropriately.

Matt Wittstein (00:08:07):

So your examples really zero in on that media law class, but you also mentioned having a capstone in the university's general education. What's that course and what are those experiences like? Because typically those are not necessarily communication students; they come from across the broader university.

Jessica Gisclair (00:08:24):

So this course is about crime fiction, and people really come into that course thinking we're going to just solve murder mysteries and be all excited about podcasting and what have you, which is part of what we do. But what the course really focuses on is, what's the societal impact? How does crime fiction either reflect society or impose things upon society, our impressions about people, place, and that sort of thing? Part of what I try to do when I work with students is give them the perspective to ask, what are the socioeconomic, political, and legal influences in what we're studying? That happens in law, and that happens in crime as well.

Matt Wittstein (00:09:05):

I can genuinely say that your classes sound fascinating and I would love to be a fly on the wall for some of those conversations, but I want to get back to the AI and really thinking about, how are you planning on starting to implement it into those classes? Do you have specific assignments laid out? Are you thinking about syllabus language?

Jessica Gisclair (00:09:25):

So I made an attempt to try to write an AI statement for my class, and I'm not quite sure if I'm comfortable with it yet, but basically what the statement says is, what is AI? Where should students use it within a particular course? How could they use it? Should they be citing it as part of their work? And when should they remove themselves from the use of AI? So I'm trying to figure that out for each of my assignments. But one of the things that I guess I came down to with this is the assignments that my students are producing that I believe AI can help them with is their final projects, their really deep assignments, where they are actually doing a pretty big analysis of a particular theme and they have to present that information to the classroom.

(00:10:10):

So we spend a lot of weeks, like every professor does, with their students, doing research and analyzing and throwing things out and using a new approach and blah, blah, blah, and it takes a long time. I'm hopeful, and what I hope to do, is give them particular points within the assignments where AI should be used, because it's going to get them to the next point a little bit quicker. It'll help them filter information perhaps a little bit quicker. And that's particularly true in courses where I think I have students that come from every major, because when they're coming from every major, I'm often uncertain as to how many research skills they have already acquired, so I'm starting from square one.

Matt Wittstein (00:10:54):

I definitely agree that AI can help students get from point A to point B a little bit quicker, and then point B to point C a little bit quicker. Do you think it'll actually improve the overall final projects?

Jessica Gisclair (00:11:06):

I think it will because I think it will give students confidence. I know that students want to use AI. I've certainly heard them speak about it in class and outside of class. And if I open up the door and say, "Here's the good space in which we can use it in a positive way," I think it'll give them confidence to realize, "Hey, this is a tool that I can use and the professor is okay with me doing that." So in the end, I think their final product is going to be better, if for no other reason than that they have confidence in what they've done and the process that they've been through in order to produce that final product. Additionally, I think that AI, again, will help some of those students get from point A to point B a little bit quicker, and I'm hopeful that means that they'll get into deeper, more complicated material and perhaps learn even more because they've been able to move at a more rapid pace.

Matt Wittstein (00:12:01):

I can tell you've put a lot of thought into this. I'm curious, what are your biggest concerns as you're thinking about implementing AI throughout the scaffolding of a project?

Jessica Gisclair (00:12:11):

My biggest concern, to be honest with you, is I don't want to come across as the AI police in the classroom. I don't want to give them the impression that I'm hovering over them and constantly being concerned that they're using AI inappropriately. That hesitation concerns me a lot, but I think, in a sense as a professor, you've just got to go into it and take the risk and say, "I'm going to give this the best shot that I can, and let my students know that I'm by their side, I'm not behind their back hovering." That's not my role in the classroom.

Matt Wittstein (00:12:45):

What questions have you not quite answered for yourself that you're maybe wrestling with how to actually implement into your assignments?

Jessica Gisclair (00:12:52):

I guess the biggest thing that I need help with is to understand, how could I effectively use AI? And I know that's going to vary assignment to assignment, and perhaps class to class because of the student environment and student population in each class. What I really need to understand is how to motivate students to use AI as a tool and not just rely on it as the end result of a project, that the end result should be based on their own thinking and their own application of what they've learned through AI. So I think for me, again, it's the understanding of how to effectively use it.

(00:13:28):

The other thing I guess I am concerned about and really thinking a lot about is keeping the students motivated to use AI, but again, relying on themselves to do the thinking by the end of the project, not letting AI do the thinking for them. One of the things that I've read about is how sometimes students really do plug something into AI and it spits it out and they're done. They never go back to it. That's not what I want to have happen in my class, and I assume most professors don't want that to happen either. We want our students really spending time with that material and getting some key takeaways, things that are going to help them in their future careers, things that are going to help them succeed in the class.

Matt Wittstein (00:14:11):

Have you thought about how to prevent those instances from happening, so you don't have to be the AI police and you can just be the AI fairy godmother?

Jessica Gisclair (00:14:21):

That's a great question. I think what I've thought about is this. One, anything new that you're trying in the classroom, you've got to come in owning it and being positive about it. So one of the things I have to do is set the tone about AI and how we're going to use it in class. That it isn't this big bad thing that's going to get them into trouble, it's actually something that's going to help them in the end. So I think I've got to set that tone and I've got to really think about the language I'm going to use to present that to students. The second thing is something I already do, and I think I'll do it more extensively now, which is I workshop a lot of our writing in class. I put students in either teams of just two or teams of three and they work with each other, trying to help craft their writing. And then I of course float around the classroom and do a lot of engaged learning through that.

(00:15:11):

I know a lot of professors use that method. I think I'll probably do more of that, particularly if they're using AI. Yes, that sounds like I'm policing them, but I'm policing them in the way of saying, "Hey, you could think about it this way. AI gave you a step forward. Now what about this other piece that AI didn't tell you about? We've talked about this other piece in class. Let's layer that in." So I think it's going to be a lot more fun for me because it'll be a lot more engagement with the students to make sure they know what they're doing with AI.

(00:15:44):

And I guess the last thing I'd have to say, which is a little bit of policing, is make sure they understand what really is plagiarism and what really is cheating according to our honor code at the university. Making sure they know that those things are always there and will always be a bar or a parameter that I have to keep in the back of my mind as their professor. But I don't want to focus on the negative, I really want to focus on the positive as much as possible.

Matt Wittstein (00:16:11):

Jessica, I can't thank you enough for sharing your context and your questions with us for our panel. I think this is going to be a fantastic conversation and I can't wait to come back to you with what our panel talks about.

Jessica Gisclair (00:16:23):

Thank you, Matt. I really appreciate it. It's been a lot of fun.

Matt Wittstein (00:16:32):

Welcome to the show, panel. I am really excited. I had a cool conversation with Jessica Gisclair about artificial intelligence. But before we get into the conversation, I would love for you all to introduce yourselves, and as you're doing that, I want to know what is the coolest and/or weirdest and/or worst thing that you have ever seen or experienced with artificial intelligence?

Derek Bruff (00:16:56):

I am Derek Bruff. I am a visiting associate director at the Center for Excellence in Teaching and Learning at the University of Mississippi. I've been doing faculty development work for quite some time. I was at Vanderbilt for a long time. Coolest, weirdest, most disturbing. I don't know quite how to categorize it, but just a couple of days ago someone came up with a Johnny Cash cover of that really annoying Barbie song, "I'm a Barbie girl in a Barbie world," but they used AI to simulate his voice and the rhythms of his song, and it's all the Barbie song lyrics and it is awesome and scary all at the same time.

Gianna Smurro (00:17:32):

Hi, my name is Gianna Smurro, and I'm a third year student at Elon University studying journalism and cinema and television arts, with a minor in political science. I'm also a Center for Engaged Learning Student Scholar working with their work-integrated learning research seminar. And so the coolest thing that I have experienced with AI actually is practical. I was abroad in the spring semester of 2023 in Italy, and I actually used AI to help me plan vacations on the weekends to go to other cities with my friends. So it was a great way for me to save money while also finding great accommodations and activities in different cities.

Jill McSweeney (00:18:06):

My name is Jill McSweeney. I'm an assistant director here at the Center for the Advancement of Teaching and Learning at Elon University. And I think mine is similar to Derek's in that there was a song released that was written, produced and generated by AI that sounded like The Weeknd, and I believe Kanye West, and I really enjoyed the song and I thought it was a legitimate song until somebody told me it was AI and then my mind was blown and I was a little bit disturbed at how great it was at making music that sounded like these artists.

Matt Wittstein (00:18:40):

I really like how we have the gamut of entertainment purposes, but also functional purposes. Gianna, your example reminds me of a friend that used it to figure out which aisles of the grocery store would make the most efficient shopping route from their grocery list, which is a brilliant use of AI. So we're coming from more of an academic perspective, more of an educational setting, as we all know. I recently interviewed Jessica Gisclair, one of our colleagues at Elon, who's an associate professor in our school of communications, and she's coming into this conversation without a whole lot of direct experience or background with AI, and she wants to have a better sense of how to use AI in her classroom. She has a media law class that's required in her school of communications program for journalism majors and a few other students as well, and she's also teaching a crime fiction course, which is a general education capstone for juniors and seniors. Both of these are writing-intensive, reading-intensive courses, and she's trying to think about ways to use AI.

(00:19:47):

Where I want to start is where she's starting: she's working this summer on prepping the course, and she wants to write an artificial intelligence statement for her class, for her syllabus. What would you all put into an AI statement in a syllabus? And from your perspective, Gianna, what would be useful to know about a teacher's expectations with AI?

Gianna Smurro (00:20:09):

Knowing that different teachers have different expectations of what they would allow their students to use AI for, I think it's about illustrating which applications of AI are acceptable. As a journalism major myself, I have had some professors say that, especially when writing articles, sometimes it's a good idea to lean on ChatGPT or Google Bard to help you formulate captions and things for pictures, but not to entirely rely on those things and to take them with a grain of salt, looking at them and then reflecting upon them and thinking about how you can improve them. So I feel that specifically outlining the instances in which it is acceptable to use AI for certain projects and schoolwork would be best.

Derek Bruff (00:20:49):

I love that. I do think having a syllabus statement of some sort is really important. I do think faculty are taking very different approaches to this and students may not know, from class to class, what's okay and what's strictly forbidden under serious pain of penalty. And so transparency is really important here. I think also using it to try to open up a dialogue with your students because whatever syllabus policy you write three weeks before the class starts, the tools and the technologies may change by the time you get to the middle of the semester. And so I think writing something that's flexible enough to accommodate the changing tools and technologies, but also has some space for your students to have a voice in the process, I think is important. The faculty that I talked to in the spring semester, when these tools were really hitting, the ones who were having open dialogue with their students seemed to navigate the transition a lot more easily and more productively than the faculty who came in and said, "This is how it is," and so that's what I would be looking for. Maybe in the syllabus statement, reflect your philosophy, your goals for working with these tools, but work out some of the details with the students because I think that open dialogue is going to be really productive.

Jill McSweeney (00:21:52):

I completely agree with both of you. I think having that transparency and articulating it both for the overall course, but also how the use may vary between assessments, too. So making sure that you have conversations about, within this context, within this process of learning and how we're going to be assessing for learning, this is how we're going to be potentially partnering with AI. And I think one of the things that I'm doing for the fall is not only having a statement in my syllabus, but talking to students about the difference between user-generated and AI-generated. And that's a continuum, and you can partner with AI on different levels and really think about, why would you partner in this context? What do you do when you partner with it? So how might you cite it? Give examples of how you can cite it. But don't be so closed off that you don't allow students to be creative with it, because one of the important things is that students are going to need these skills, not only to be able to utilize AI beyond the course, but also to be really critical and literate users of AI. So how do we help them develop those skills and really know the parameters of what AI can and can't do? And then how do they articulate that in how they're using it in ways that are integrity-led versus potentially plagiaristic in nature?

Derek Bruff (00:23:03):

If I'm a student, I may not know I'm using an AI tool. I may open Google Docs and start writing something and there's a button I can click that's going to do something useful and it sounds useful to me. And so what I worry about are students who are panicky over touching any buttons now because they're worried that they might be using some unauthorized aid. And so, again, thinking about individual assignments, tools changing, but also having that open dialogue with students so they can say, "Hey, is this okay? I don't know. Can you help me figure this out?"

Gianna Smurro (00:23:32):

I definitely agree, because people are coming from all different backgrounds using AI, not just from classroom to classroom, but also with internship experiences. Some of the companies that students are interning at are already using these kinds of AI technologies. So I think just having that open conversation, like both of you were saying, about what people's experiences are with AI and also how that can be reflected in classroom assignments as well, is a really great way to get everyone on the same playing field so that you can have those conversations on what may and may not be allowed in AI usage.

Jill McSweeney (00:24:02):

I think that's a really great point, Gianna. I think, too, asking students about how they're using AI in their everyday life can really open up how we might see it be integrated into future projects or assessments for learning, but also allows conversations around, "Oh, hey, this is actually AI and you're using it every day." So what is AI, and how might it be infiltrating daily practices in unknowing ways? I think we would also be surprised at how creatively students are actually utilizing AI in some of their work. So I think we can learn a lot by holding space for these conversations and allowing students to be really creative about integrating it into their daily practices, their skill building, and their assessments.

Derek Bruff (00:24:44):

I was also going to say that's a great point, Gianna, because I think a lot of us in academia are imagining what will be done with these tools in the workforce as they become available, and some of us are reading and talking to colleagues and such, but our own students may have the most up-to-date information about that from their summer internship experience, so I hadn't thought about that. That's really smart.

Jill McSweeney (00:25:05):

I believe it was a Chronicle article posted a couple of weeks ago that was saying something along the lines of, students might be getting sick of AI because we, the faculty, think that we need to integrate it into our teaching so much. So I think that there's also a balance: how are we integrating it, while not putting all of our eggs in the AI basket? So going back to thinking about how we're building, like you said, Derek, transformative learning and authentic learning. Does that include AI? Is that the best use of how we might integrate AI into our classrooms, and is it needed? But the approach of not letting it in at all is likely not the approach that I would take in my own course.

Matt Wittstein (00:25:43):

I want to pull back to something Jill said a little bit earlier about making students critical and literate users of artificial intelligence, and then you had us think about what AI can and cannot do. So what are some of the limitations of artificial intelligence, and what are some of its strengths?

Derek Bruff (00:26:04):

That's a great question, and I think the answer to that will be different next week. But I've been thinking through what's the right metaphor for particularly the generative text tools that are based on large language models? I think there's a temptation to think of them as answer machines, that you ask it a question and it gives you an answer, but some of them are not good at that actually. I tend to think of them as wordsmiths. They are good at putting words together in sensible ways, in ways that other humans might put those words together. And so asking it to take a rough draft and polish it up, it's going to be pretty good at that because it's good with words. Asking it to tell you the weather in Nashville today or tomorrow, some of them can do that. But what they're doing actually is the tools are going out on the internet, they're finding other data sources that have that weather information, and then they are conveying it to you in words that are easy to read. So it's still doing that wordsmith piece, but it's acting as an interface to something else online. And so I think that's important to know that what the tools are at their core is they're stringing words together in sensible ways. We will see them more and more as interfaces to other systems because we like to use words to figure stuff out.

Gianna Smurro (00:27:12):

So I know that a lot of students are trying to use those word generators or sites such as ChatGPT to find sources for articles and projects, but I honestly feel that it isn't really the best for doing things like that, because one of the limitations of ChatGPT is that it's not able to actually pull articles and sources and book chapters. So I think a lot of people have this misconception that AI can be used to help find those sources, but some of those generators are very limited in that sense: they can't really give you that information, but they can summarize information. So I do feel that if, let's say, I find an article online that I want to use in my sources, and I'm not entirely sure of what it's trying to explain, ChatGPT's strength in that case is that it could give me a summarization and overview so I can contextualize the article within my schoolwork and the things that I know, and put it into my own thoughts and my own opinions. But I do not think that it is very good at finding those kinds of sources for students to pull from.

Jill McSweeney (00:28:04):

So those are really great overviews of generative AI. My sense of understanding is that, at the moment, it largely is based on probability, which really means that it's not about right or wrong, it's what is the most likely answer or word next? And I think letting students know that it is not something that typically will give you the right answer, it might just give you the most probable answer. And I think a lot of students rely on this as, "Oh, I'm going to put a prompt in and it's going to give me the answer that I need to pass this course or pass this assessment." Gianna, you brought up the aspect around references, and I believe it is getting better, but at the moment it's creating fake references, references that aren't even there. So even getting students to put something in and do up a ChatGPT essay and really go through and fact check, I think that's part of building that literacy that even extends far beyond this topic and just building that really critical user skills that we want.

(00:29:02):

I would also say that tools like ChatGPT can't do things around self-reflection. It can't ask students to relate to their lived experiences. And I think when we're thinking about really transformative assessments, we're getting students to contextualize the learning within the lived experiences that they're having within the course, but also beyond the course, who they are, their identities. Thinking about how we're building assessments for learning that feed into these really critical components of meaningful assessment means that AI might not necessarily be relevant or really support the learning that's happening in the assessment. So I think that AI really misses that critical piece when we're thinking about it within the context of the classroom and learning.

Derek Bruff (00:29:45):

I want to add that I think one piece of advice I would give for any faculty member trying to figure out how to use these tools is to use them a little bit and play with them and see what they can do, because not all the tools are made the same. So the free version of ChatGPT will totally make up sources on a regular basis. The paid version of ChatGPT can search the internet and does a much better job of locating actual sources.

(00:30:06):

But there's a tool called Elicit, which is actually not reading the internet, it's reading a set of data from Semantic Scholar, and it's looking at scholarly publications. So you can ask it a question and it will generate a list of scholarly publications that are real that might actually answer that question, and it'll do some wordsmithing, it'll try to pull out the study variables or the effect size or things like that, it'll do some summary work for you. But if you don't know what these tools are based on or what they're searching, you could ask the same question in three different tools and get three completely different answers and not know which of those to rely on. So I think fiddling with the tools, learning about a few that you plan to use in your course are really important, because your students will need your help in figuring out how to use them well too.

Gianna Smurro (00:30:47):

And just going off of staff and faculty playing with these different kinds of AI tools, I actually had a conversation with a teacher who was teaching a class online during the summer and had assigned students to write him a press release. What he would do with his prompts is put them into ChatGPT to see what kind of output would come back, and how some of the students would have almost identical outputs that they would be turning in to his class. So I think that's also a good thing for staff and faculty to be thinking about: what is AI generating for students? I feel like that's a great way to ask, okay, how are they using this? What kinds of things is ChatGPT giving to the students? So that they can recognize when students are using these sources in unjust ways.

Derek Bruff (00:31:31):

I think it makes sense at this stage, for pretty much any assignment you have that involves writing, to just give it to ChatGPT or a couple of these tools as an instructor and see what it does with it. Your students will try that, so you should probably try that too. I've been working with some faculty around assignment makeovers, and I realized that in one of my courses, the questions I assign my students to answer in response to the weekly reading, ChatGPT does a great job of answering every single one of those questions. And so I could still ask the questions, maybe there's some value in having some students go through that process of reading and responding to these questions, but I could also think about other questions I might ask that would accomplish my goals, which are to get students to do the reading and make something of it and bring something interesting to class. Questions where the temptation to have the robot do it for you is maybe a little harder to pursue. But yeah, again, play with it, and take your own assignments and plug them in and see what happens.

Gianna Smurro (00:32:22):

I would say that one of my professors had personally implemented that. I had also taken an online course over the summer, which was a core capstone, it was [inaudible 00:32:32] and cinema, and one of the ways in which the questions were worded for our forum posts is that they would ask you to personally relate to things within your own life. So I feel like it establishes that self-reflection, because that is something that AI can't do; it can't reflect on your own personal lived experiences and identities. So I really think that's a great way to take certain assignments and have students go back and actually apply those principles... You might be putting part of a question into ChatGPT, and it might be spitting out information, but ChatGPT can't do that reflective component of certain questions.

Jill McSweeney (00:33:02):

I really like that example, Gianna. And I think, too, the other limitation at the moment, and this might have changed because things have changed so quickly, is thinking about what information you're using in terms of current events. So there's a limitation of time, and if you're really thinking about getting students to pull from examples that are happening right now, that does pose a limitation for ChatGPT. So you can think about, as Gianna said, using that contextualized personal experience, but also thinking about the current events that you could pull into your course to get students to think and apply and use the content.

Derek Bruff (00:33:36):

There's another case where there's actually an equity issue here. So the free version of ChatGPT was trained on data through 2021 and can't tell you anything about things that have happened since then. It will, to its credit, tell you it doesn't know anything about things since 2021. I asked it about a recent Supreme Court decision from May of '22, and ChatGPT told me, "I can't tell you anything about that." The paid version of ChatGPT, though, can search the internet. And so when I asked it the same question about a recent Supreme Court case, it went out and found some sources and summarized them for me. So that is something that's changing pretty rapidly. But also, a lot of students can't spare 20 bucks a month to pay for the paid version of ChatGPT, so they might not have access to that.

Matt Wittstein (00:34:20):

I feel like I'm learning so much about how quickly this is changing, that I almost feel like I need to let everyone know the exact date and time that we are recording this, so when this is outdated, and not too long from now, they'll completely understand why. But it's really fascinating. As y'all are generating some ideas for some use cases of AI, you've had some really good examples in there, I want to think through that typical scaffolded project where you have some generate the idea at the intro part of the project, maybe some revise and resubmit in the middle, and then some sort of big output at the end. Are there ways that you see AI being integrated at those maybe three stages?

Jill McSweeney (00:35:06):

I'd like to go back first to talk about really great assessment design and planning: thinking about scaffolding, providing opportunities for feedback, really thinking about alignment with the course objectives. So I think thinking about how AI impacts assessment for learning really goes back to thinking about how you're constructing your assessments and really thinking about that course design. I also think that, if we're looking at AI in the context of product, then we're missing the point of thinking about learning as a process, and getting students to really identify how they're partnering with AI in that process, not just for the final product. So that's my educational developer spiel that I thought I had to give to contextualize everything I say after this.

(00:35:49):

The other thing I would say is, when we're thinking about the very beginning of idea generation and having students perhaps use an outline or get AI to help generate questions, we are not necessarily asking students to enter their own work, their intellectual property, or their writing into the programs themselves. So I think the other caveat is, what are we asking students to do, and how are they interacting with AI throughout the process in a way that, as Derek mentioned, might not be equitable or ethical in and of itself? So I'm going to withhold some of that conversation to ask, okay, what are some really creative ways that we can pull in AI? I think having AI give feedback is a really great opportunity: having students put in a draft of their paper and asking for feedback, whether it's copy editing, flow, syntax, or just idea generation. I think that's a really great use of the tool, particularly for students who might speak English as an additional language, as it might offer opportunities to really learn about the writing process and their skill level.

Derek Bruff (00:36:55):

I think thinking through the different phases of the assignment makes a lot of sense: the early phase, mid-phase, and late phase. I think for the early phase, a tool like Elicit that can find papers for you is something that could be very useful there. ChatGPT and related tools can do a form of brainstorming. Again, it's not going to give you anything particularly interesting because it gives the most probable answers, but often you can ask it for ideas, for topics, for questions you might pursue. You might prompt it a little more, ask it for the top five, and then ask it for five more. Or sometimes you can ask it to take on a certain persona; it's really fun to ask it to be Mark Twain because then it has this silly folksy dialect. But sometimes when you ask it to come at it from a particular angle, it'll pick different random things, and you'll get slightly different responses that you might not have thought of.

(00:37:42):

But the other thing I would keep in mind is that some students are going to come to that blank paper at the start of a writing project or a big project and get stuck and get frozen, and those are students who might really benefit from some of these tools to get the brainstorming process started. But other students really do a lot of their work during the brainstorming process, and I wouldn't want to short-circuit that. And so I don't know that there's a one-size-fits-all answer here, but this is, again, something I would talk to my students about. Where is this going to be useful for you and your process? Where do you get stuck? For me, I'm going to come up with 10 ideas first and then I'm going to go to the tool and see if it can come up with an 11th or 12th idea that I hadn't already thought of. And so that's the kind of thing I would keep in mind. Gianna, do you have ideas?

Gianna Smurro (00:38:23):

So I'm thinking slightly further along in a project's many steps. I think taking data or other collected information and using AI as a tool to find different ways to model or display it is a great way to use it further along in the process. Because if I were not using AI generation for brainstorming, and I went through that on my own, and I got to the step where I have all this information, I've synthesized it on my own, but I don't know how to actually put it in a way in which it can be dispersed to other people most accessibly, I think that using AI is a great way to take that information and display it through different graphics or other modes of modeling.

Jill McSweeney (00:39:02):

I'll just add on. I think the creative aspect of AI is really interesting, so thinking about remixing things, like writing a gothic sonnet in the tone of Taylor Swift, or thinking about the generative art modeling that AI can do. I think there's a lot of ways for creative outputs to be engaged in with AI, and then pairing that with a really nice reflective assignment, asking them, how did AI impede the process? How did it support it? What did you learn from the process? So I think that utilizing AI throughout a scaffolded assessment can also mean, at the end, reflecting on its integration and having students say, okay, what were the benefits? Maybe it really did impede the brainstorming, maybe it made me less creative. It's really thinking about how you're getting students, again, to develop those critical literacy skills of an AI user and to see how it's actually impacting their work.

(00:39:59):

But I think that there's a lot of really creative things that people are doing out there that model other ways of active and engaged learning, just using AI as another ed technology. So I'm not sure AI is presenting something new; it's just presenting something different with a novel tool. But I think this is something that we have been doing for a long time in our teaching, thinking about creative ways to engage students in meaningful learning throughout the process, and AI is just a new tool for us to do that with.

Derek Bruff (00:40:28):

Jill, what you said about where you started with learning objectives and being really clear about that, I talked with James Lang recently on my podcast about all this, and he used the term unbundling. So there's a whole cluster of skills that might be associated with a particular big assignment in a course. We're often thinking of those as a big thing together, but we might need to unbundle those and look at each one and say, is this something where AI could be really helpful or is this something where I really want students to have other options, or to have the skill themselves, or to know that this is a place where the AI is going to lead them astray so they've got to develop some critical thinking skills right there?

(00:41:02):

Because I think about... it sounds like Jessica likes to have her students do presentations to class, and so I think a lot about the visual design of slides and presentations. I'm a big fan of using really interesting, almost metaphorical images to communicate your ideas in a slide. You don't want to put a bunch of words on the slide; you want to have an image there to complement the words that you're sharing. And so for years, I've either gone to find Creative Commons-licensed photographs that I know I can use in my slides, or, having done enough of that, I eventually wanted to become a photographer myself. So I often use some of my own photos that I've taken in my presentations. But now lately, if I want to represent a certain idea and I can't find or make a photo of that idea, I can go to an AI image generator and ask it to create something, and seven times out of 10 it comes up with something pretty useful if I prompt it enough. But I think of that as a toolkit that I have now for that particular skill of putting visuals in my slides.

(00:41:58):

When I'm teaching, what do I want my students to develop? Do I want them to develop the whole toolkit? Well, then maybe we'll have some activities where they get to practice each of those different approaches. Or maybe that's not the point of the assignment, and it doesn't really matter how they get the image, as long as it's something that they can legally use; then you're like, fine, just go to AI and get some stuff. But you should know where that skill sits in the landscape of your course and which skills are going to benefit your students. Think about their professions and their careers: are they going to need to know different ways to find good images, or is that something that they don't need to know as much about?

Gianna Smurro (00:42:31):

So she's teaching a media law course, so I think it would actually be beneficial to have a section on AI and how it's used in relation to the [inaudible 00:42:40] doctrine and copyright and plagiarism and things of that sort, because I feel like it could also be its own lesson, learning about how AI can be used and the ways in which they can use it. The media law course is a way for students... So all communications majors in the Elon School of Communications have to take this media law course regardless of what their major is. So I think it's a great way to show how AI can apply to each of the different majors and fields that all of these students will be going into.

Jill McSweeney (00:43:06):

That's an awesome recommendation. And Derek, you mentioned employable skills, and it reminded me of something that I saw going through Twitter, where all great things are seen: somebody had mentioned that AI is not going to put us out of work, it's the person who knows how to use AI that will. And I think the really important thing is, how are we positioning AI as a skill and a tool in that toolbox, so that students can be effective and critical users of it, and what does that look like in our class? And is that what we want to do in our teaching? If it's not aligned to your objectives, then AI might be more of a caveat statement alongside your institutional honor code. It's not necessarily something that you have to integrate into your courses, but is it there as a way to really talk about the importance of developing and utilizing and interacting with it from a student perspective? Otherwise, it's just another bells-and-whistles thing that we're integrating into our courses that has no meaning and is really not aligned to what we want to create in our classes. So I think asking yourself why you want students to use AI and how it's supporting their learning and aligning with your outcomes is where you really need to start.

Derek Bruff (00:44:25):

Something I've been struggling with is that in my job I tend to be talking to people who are already experts in their field: faculty, academics, working professionals, who will use an AI tool to maybe save a little time, but who are in a position to critically evaluate the output of that tool and know, oh, I need to change this, or this is made up, or whatever. But when it comes to teaching students who are still developing that expertise, they're more on the novice end of that continuum, and I think sometimes it may be too much.

(00:44:54):

I remember Marc Watkins, who teaches writing and rhetoric here at Mississippi. I think this was a tweet, or maybe I heard him present this, but he was working with students who were using AI, and some student said, "I don't want to have to fact-check a supercomputer that BSs things. That's too much work for me." The AI cannot take responsibility for its bad choices. You, the user of the AI, are going to be the one held accountable for whatever gets out there in the world. And so you've got to know that. But more to the point, as students are developing the critical conceptual skills in the discipline, I think there are going to be times where fiddling with the AI is helpful for that process, and there may be times where they're actually not ready to do that yet. They need some more basic skill development first.

Matt Wittstein (00:45:36):

I want to shift gears just a little bit and maybe get to more of the philosophical pieces of artificial intelligence and student learning and some of those concepts. Jill, I sense that you want to talk a little bit more about the ethical practice of using AI and how students are actually interacting with it. So I want to give you the space to just share some of your thoughts on how students are sharing information and what that actually means and maybe some cautions you might provide for some instructors as they think about their students using it.

Jill McSweeney (00:46:05):

Yeah, thank you, Matt. So I would just say, in the work I'm doing to prepare for the fall, both in supporting my own teaching and that of others, it's really the question of, how do we ensure that we're using AI equitably in our teaching, and what does this mean for students' use? Things like ChatGPT involve you putting content in, and they use that content to function. And so I think when we're asking students to utilize these tools, it's important to really think about: when we're getting students to put in their information, their intellectual property, how is it being used? Where is it being stored? And does that go against any of our own personal ethics, but also the privacy concerns and regulations of our institutions? So I think it's much more complex than just how we're going to use it. It's also educating ourselves, just as we ask our students, to be critical users of these tools: when we put in our work, our thinking, our data, what is it being used for afterwards? And what can we reasonably and ethically ask our students to put into things like ChatGPT?

(00:47:13):

I'm thinking about how ChatGPT or other generative AI can be useful in our own teaching, for things like building rubrics or course syllabi, but also being mindful that if you're using it to, say, grade or mark or provide feedback, you're not going to want to put students' work into it. So there are issues around privacy that we might not be fully thinking about and reflecting on, and they are really important when we ask students to utilize these tools.

Matt Wittstein (00:47:40):

Gianna, I want to just ask, have you ever thought about what you have put into AI as being your own piece of work, and that you may be losing some ownership of it in any way? Has that ever crossed your mind before?

Gianna Smurro (00:47:53):

It has, because I've been curious about what information ChatGPT is storing of what I put in there, and just being cautious about that. One of my first initial concerns was, what if I'm putting something in here and it's giving someone else that information? Take, let's say, original work: I had a friend who was writing poetry, and she would put it into ChatGPT to check that it was in iambic pentameter correctly, and I'm like, "What happens if ChatGPT is taking your poem now and giving it to someone else?" But I do think the student perspective is very similar, I think, to the way that staff and faculty are looking at the way students are using it, because a lot of students' concern is that their work is getting put into ChatGPT by staff and professors to grade their work, and whether that is giving them the grades that they may deserve in certain courses on certain assignments. So I think it just goes both ways. And I think this goes back even further, to when we were talking about the statement at the beginning of the class: just having that mutual understanding, not only of what the students will be using different generative AI sources for, but of what the professor will be using them for as well.

Jill McSweeney (00:49:02):

Gianna, you have blown my mind. I don't think I have ever been explicit about how I will be using this in their course. And I think you hit the nail on the head: if we talk about being explicit about how students can use it in their learning, we also have to think about that. I think that is great. I would also say I have been working with faculty from the health sciences, and I think your discipline also provides important context to think about its use. So I have a friend who teaches in a medical school back home, and he's concerned that students are putting in patient information to get a diagnosis or treatment plan. So also having conversations about privacy with the data, in relation to whether you're in a clinical practice or any other practice that might have really personal and identifiable data: how are you using that data with ChatGPT or other AI tools, and how might that go against some of your professional honor codes and codes of conduct? So I think that this is a really nebulous topic, but it offers a lot of really great opportunities for discussion that aren't just about generative AI, but about practice and ethics and integrity.

Matt Wittstein (00:50:11):

I think one of the spots that I always return to in conversations about AI with colleagues is this philosophical question of, where does the student or the user end and the AI begin? And where is our tolerance for what is okay within that sort of relationship? Because in some cases, it might be completely okay to let AI write a difficult-to-write letter of regret to somebody. But in another case, it's totally not okay to represent the AI's creative work as your own.

Derek Bruff (00:50:48):

So it was just a few months ago that I finally watched the movie Her, which came out almost a decade ago now. I think the tagline is basically that he falls in love with his digital assistant, his Siri. I didn't know much about the movie other than that, and I won't get into all of it, but in the first five minutes you learn that this guy's day job is to ghost-write love letters for other human beings, and I did not see that coming. I feel like that's something we would not be comfortable with, the idea that all of the letters I've written to my wife over the last 20 years were written by some guy in a cubicle somewhere. But in that science fiction world, that was cool. So I do think there's a lot to figure out here.

(00:51:23):

I don't know that the science fiction speculators will always get it right, but this is the kind of thing that we're having to figure out and negotiate. I saw a headline about some AI tool that was going to help you write your wedding vows, and everyone reacted with horror. I'm like, but who writes their own wedding vows entirely alone? You look at ideas, you get examples, you ask your brother who's good with words to wordsmith yours a little bit. So yeah, this notion of sole authorship is a little dodgy to begin with, and I'm glad that we're having conversations to think that through a little more deeply.

Jill McSweeney (00:51:52):

This is a tough question, and I feel like I'm going to give a super unsatisfying answer: it depends. What is the difference between using ChatGPT to write an essay and using the autocorrect function that helps generate the next word when you're writing a text? Everything is on a continuum, and I think it depends. Again, I feel like maybe I'm a broken record, but within the context of a class, go back to your course outcomes and what you want students to be able to learn and do by the end of the assessment. Is it to work on writing skills? Well then maybe writing is the really critical thing that you want to focus on, without a ChatGPT partnership. It depends on whether you are teaching art or teaching a class in cell biology.

(00:52:45):

All these things matter in terms of the context of that partnership. One of the important things we have to do at the very beginning is ask, "Where is that line for us, and what does it look like in our own teaching?" And I think more and more we're going to be pushed to reevaluate that line as AI-generated things integrate, really insidiously, into our daily lives. Then what does that mean? Why is it okay in one context but not another? And how do we justify that in terms of our own teaching philosophy when we approach our course design? So my answer is it depends, and I think it's always going to be it depends, because things change so quickly.

Gianna Smurro (00:53:24):

My answer to this question is that it really does depend on the situation and the context in which it's going to be used. And I really think that goes back to the point we all brought up earlier about having those open conversations about how it'll be used, so that everyone is on the same playing field and everyone knows how it's being used. Because unless those conversations are had, it leaves things as open game for whatever anybody wants to do.

Jill McSweeney (00:53:51):

And I'll just plug a recent article from the Chronicle of Higher Education by Flower Darby, "4 Steps to Help You Plan for ChatGPT in Your Classroom: Why you should understand how to teach with AI tools - even if you have no plans to actually use them." It offers four great, concrete things to think about as you plan your upcoming fall courses. And it doesn't make you feel like you have to integrate AI as an assessment or learning activity; it just asks you to be aware and to ask yourself some of the questions that you asked us today about our own thoughts on it.

Matt Wittstein (00:54:23):

I think we covered a lot of ground. I know I learned a lot in this conversation today. So I want to thank you all so much for having this conversation with us. I'm looking forward to sharing some of your ideas with Jessica.

Gianna Smurro (00:54:34):

Thank you so much.

Derek Bruff (00:54:34):

Thanks for having us on. This was fun.

Jill McSweeney (00:54:38):

It was fun. Thank you.

Matt Wittstein (00:54:45):

Jessica, it's great to see you again.

Jessica Gisclair (00:54:48):

Nice to be here, Matt. Thank you.

Matt Wittstein (00:54:51):

I learned a lot from your panel, which included Derek Bruff, who is a visiting associate director at the Ole Miss Center for Excellence in Teaching and Learning. Jill McSweeney, the assistant director of Elon University's Center for the Advancement of Teaching and Learning, and Gianna Smurro, a Center for Engaged Learning Student Scholar studying Journalism and Cinema and TV Arts at Elon University.

(00:55:11):

For me, the most exciting part was how many use cases already exist for working with different types of artificial intelligence in your classroom, so you have a lot of examples as you develop something that works for your courses. The panel agreed with your approach of creating an AI statement for your syllabus as an opportunity to set clear expectations for your students in a transparent way. Derek suggested starting with your philosophy and goals for using, or not using, artificial intelligence, but then partnering with students to figure out the rest of the details. Gianna made the important point that a lot of students may already be using AI in internships or in other ways and can bring those experiences to that discussion. And Jill hopes instructors like you will be working to make students critical and literate users of artificial intelligence. I really liked that framing as a learning outcome: my goal is to make you a critical and literate user of AI, so we have to use it.

(00:56:08):

We also talked a bit about acknowledging how you as an instructor might use AI. Could it be used for providing feedback or creating examples? That transparency will probably help your students build trust. And Gianna even expressed that she might have a little concern that her faculty are grading her work using artificial intelligence.

(00:56:27):

Thinking about how to scaffold AI throughout a course or a project, without referencing your course specifically, the panel shared ways that AI might fit into what I thought were three stages of a project. In the earlier phases, AI might be a great tool for exploring ideas. To pull back on students relying on AI, Derek gave the example of students coming up with several ideas on their own, then using AI to generate one or two more. In the middle phase of a project, AI is really a strong tool for improving a rough draft. Grammarly, for example, is really effective at improving grammar and spelling, or even altering the tone of writing. And if those communication skills are part of your learning outcomes, you might have students compare their draft to an AI-generated revision and then reflect on the process. So they're not just getting AI to do the work; they're thinking about what work the AI is actually doing, whether they could do that themselves, and whether it still sounds like them, the kinds of things we sometimes want our students to learn.

(00:57:23):

Gianna also said that she found it really helpful to use AI to come up with ways to visualize data or to make ideas simpler, and Derek pointed to AI image generators for creating compelling, possibly free-to-use images for a presentation. So instead of Google Image searching, have one of these AI image generators create the actual image that you're picturing in your head. I thought that was a cool use case. These might be some of those fine-tuning tasks closer to the end of a project, before a final presentation.

(00:57:52):

It also came up, especially for your media law class, that it would probably be worth developing a formal unit on the topic of artificial intelligence within that industry. One summative point here is that AI helps some students thrive, but other students actually thrive when they do the work themselves, when they have that blank page to be creative. This points us back to that earlier point about talking to your students about the details, how they want to use AI and how they don't want to use AI, and every class might be just a little bit different.

(00:58:21):

The ethics of AI came up a lot as well, and I really want to stress to you and our audience that there are some deep ethical questions that we, as a society, haven't really wrestled with yet about using artificial intelligence. There are still questions about the ethics of sharing intellectual property with AI programs, whether our own or our students'. There are concerns about equity when you consider that the paid and unpaid versions of programs like ChatGPT have different capabilities and features. Is using client information to identify a treatment or solution to a problem a violation of some sort of privacy law? These are questions that we have to think about really intentionally before we just dive in and use AI for everything.

(00:59:04):

And as you can see, there are a lot of questions still out there. As a reminder though, it is completely okay to reevaluate our lines, our limits, and our expectations multiple times as we learn more and understand better how we and our students are using these technologies. Derek even suggested bureaucracy as a safeguard against some of these worries: if there are policies in place, then we have guidelines for what's okay to do and what's not, and bureaucracy can also slow down how quickly we adopt new technologies.

(00:59:35):

The big point here is that AI came at us really fast, and it's continuing to change very fast. By the time this episode actually releases, there could be a whole different AI technology that we don't even know about yet. So as you hear about some of what our panel discussed, what area do you think will be the most useful for you to focus your efforts on in the upcoming semester?

Jessica Gisclair (00:59:56):

Matt, this is really helpful information, and I think what resonated for me was the idea of building trust with my students around the use of AI. As I mentioned before, I don't want to be the AI police, and I keep calling it an AI policy, but I like the idea of actually calling it an AI guideline: here are some ways AI can be used in the classroom that will enhance my teaching and enhance students' learning; here are the areas where I would want you to actually use AI; and here are the areas where you don't want to use AI, where you want to own it, where you want it to be your voice and not some generated voice. So building that trust about how we might use it, when we might use it, and being transparent about its use on both sides, the student side and the faculty side. That was probably where I was more worried than I realized early on. I've done a little more research with AI, and I recognize that it is being used in every industry in many different capacities. So I have to look at what I'm doing in the classroom and try to help my students figure out the best way to use it in their personal lives and their professional lives. I have a different lens now for looking at the use of AI.

Matt Wittstein (01:01:09):

Often when we update our courses, whether prepping a new course or re-prepping a course we've taught many times, there's sometimes the big project that you can't get done before the semester starts, and there are the little things that you can get done. What are those little projects you think you can get done before next semester starts, and what's maybe a bigger project that you're curious about exploring? We won't commit to it just yet.

Jessica Gisclair (01:01:33):

So for me, one of the things I try to do is go back and reevaluate many of the different assignments and assessments that I use in my courses, and the different ways I'm rolling out content to students. At the moment I'm looking at particular teaching modules, assignments, and other things that I know work as learning tools, and trying to investigate whether AI can help me get to the point more efficiently so that I can maybe go further in the semester. So I'm really spending time with the more difficult content in some of my courses and trying to break it down myself. And then I'd like to take that to the students and say, "You break it down, and see what you can generate through AI that makes this content more digestible for you, something that you can retain, that simplifies it in a language that helps you learn." I think that's where I am at this point. I'm not going to do that with everything. It's impossible. I'm trying to pick particular parts and pieces within my teaching that I feel would be most effective for the immediate semester and then go deeper in future semesters. I almost feel like I'm beta testing AI in my classroom right now.

Matt Wittstein (01:02:45):

Very often on this podcast, we point back to what your learning goals, what your learning outcomes are. So I'm just curious personally, for you and your classes, what are some of the lines and limits that you think you're going to have with using artificial intelligence?

Jessica Gisclair (01:02:59):

One thing I have to be careful about in my class is not to encourage students to use AI for legal interpretation. Case law can be very complicated, and it would be very easy to plug it into AI and get something out of it. However, there are lots of legal theories, lots of strategies, and lots of levels that a court will go through before it renders a decision. AI isn't going to be able to really understand the social implications of those decisions, how they might impact people's personal lives, how they might change policy within the legal field, and I want to make sure students realize AI can't do that part of the work for them. It may help them understand the legal decision, but it will never help them understand the theory, the philosophy, the policies that went into making that decision. So that's an area where I need to spend a little more time myself so I can explain to my students the limitations of AI around that component.

Matt Wittstein (01:03:56):

Jessica, thank you so much for joining us. I think we had an awesome conversation, and I hope this will be super useful for all of our listeners and for you as you prepare for the semester.

Jessica Gisclair (01:04:06):

Thank you, Matt. I appreciate it.

Matt Wittstein (01:04:18):

Limed: Teaching with a Twist was created and developed by Matt Wittstein, associate professor of exercise science at Elon University. Dhvani Toprani is Elon University's assistant director of learning design and support and serves as a producer for the show. Jeremiah Timberlake is a class of 2024 computer science and music in the liberal arts double major at Elon University and Summer 2023 intern for Limed. Music for the show was composed and recorded by Kai Mitchell, a class of 2024 music production and recording arts student at Elon University. Limed: Teaching with a Twist is published by and produced in collaboration with the Center for Engaged Learning at Elon University. For more information, including show notes and additional engaged learning resources, visit www.centerforengagedlearning.org.

(01:05:05):

Thank you for listening, and please subscribe, rate, review, and share our show to help us keep it zesty.