Dr. Jessica Gisclair from Season 2, Episode 1 “Preparing Students to Be Literate and Critical AI Users” rejoins the podcast to share her and her students’ experiences with artificial intelligence in her Media Law course.
See our full episode notes at: https://www.centerforengagedlearning.org/refresh-preparing-students-to-be-literate-and-critical-ai-users/
This month, Matt Wittstein checks in with Dr. Jessica Gisclair from Season 2, Episode 1 “Preparing Students to Be Literate and Critical AI Users.” Jessica is an Associate Professor of Strategic Communication and was wrestling with strategies for preparing her students to use AI in her Media Law course at Elon University. In this episode, we learn about Jessica’s and her students’ experiences in her course, as well as a little about her research on artificial intelligence in legal and political spaces in China.
This episode was hosted by Matt Wittstein, edited by Olivia Taylor, and produced by Matt Wittstein in collaboration with Elon University’s Center for Engaged Learning.
Matt Wittstein (00:11):
You are listening to Limed: Teaching with a Twist, a podcast that plays with pedagogy.
(00:22):
Before we get into this month's refresh episode, I'm going to plead for two things. First, please rate, review, and, if you enjoy it, share our show. And second, if you have an interesting idea or challenge in your teaching, email me directly at mwittstein@elon.edu. That's M-W-I-T-T-S-T-E-I-N at E-L-O-N dot edu. We are always looking for guests for the show. Now that the business is out of the way, we're going to jump right into a refresh follow-up with Jessica Gisclair. Jessica joined us in season two, episode one as she was preparing for artificial intelligence in her classroom. As we check in, we learn about how her students used AI and some of her ongoing scholarship in media law. Check out the episode page for show notes or, if you need it, a link to the original episode. Enjoy the show. I'm Matt Wittstein.
(01:29):
Hi Jessica. Welcome back to the show. I am so excited to catch back up with you about how you used AI in your classroom.
Jessica Gisclair (01:36):
Hi Matt. Thank you for having me back. I'm interested in talking about this again.
Matt Wittstein (01:40):
I remember when we chatted, one of your big concerns was, do you have to be the AI police in your classroom? And I want to know: did you have to be the AI police in your classroom?
Jessica Gisclair (01:51):
I'm happy to say I did not, and I was very happy to see that my students actually policed themselves. They were actually quite critical of AI, and they were very careful, even in the moments I gave them permission to use it. So I was not the AI police.
Matt Wittstein (02:08):
Can you tell me a little bit more about how they policed themselves?
Jessica Gisclair (02:11):
It was interesting. I was surprised to hear that most of them did not want to use AI even as a starting point with any of the assignments. They felt that it was dishonest, and they didn't trust AI to give them accurate information. So we worked through a few simple exercises so I could make them a little more comfortable experimenting with it. Most of them, though, still said they would prefer not to use AI and just use the methods of research that they were used to.
Matt Wittstein (02:38):
That was actually one of Derek's concerns: that some students would be so nervous about doing something wrong that they would not learn how to use AI, when the goal was to create critical and ethical users of artificial intelligence. Were you able to nudge them into using it enough that maybe they're developing some of those critical skills?
Jessica Gisclair (02:59):
I was. I'm very happy to say that I had one particular academic assignment that was very structured, and that helped them understand how they could use it in the right places when they were doing research. We also took a second assignment, which was just for fun, and we played with AI. They could ask it anything they wanted and explore different options with AI, and they were fascinated by how quickly it could respond and how accurate it could be, which was good because they were really critical of it in the beginning. And then thirdly, I had them use it if they wanted to on a different assignment. Some did, some didn't. So I gave them a little agency and choice in that third assignment.
Matt Wittstein (03:45):
I like that piece of agency. I find that always to be a good strategy to encourage learning to happen. When you say that they used AI, are you talking explicitly about ChatGPT, or did they play around with any other technologies?
Jessica Gisclair (03:57):
We used ChatGPT because most of the students were familiar with it, and so I stuck with what they knew, trying to make it kind of a safe space for them since they had a little bit of familiarity.
Matt Wittstein (04:07):
And did any of them get the paid version, or were they all working off the free version? Or do you know?
Jessica Gisclair (04:12):
It was the free version for everyone.
Matt Wittstein (04:14):
Because we did have some of that conversation about how the paid version can do a little bit more than the free version. So what does equity look like? And the accuracy of the paid version might be a little bit different from the free version. I wonder if it leaves space to build on what you started this semester. I also wanted to ask a little bit: I know Gianna suggested it might be really appropriate to develop an academic unit about AI explicitly in your media law class. Did you do that, or have you thought about how that might fit into a future semester?
Jessica Gisclair (04:44):
I did. So what I did was spend a brief period of time, about 30 minutes, just exploring AI, particularly ChatGPT: what it means, what its capabilities are, and where it might be used, particularly in the discipline we were studying. So we had a nice big conversation around what it is, how you use it, and how it impacts the discipline we're in. And then we walked through an exercise called Deliberative Dialogue, a phrase I borrowed from colleagues on campus. These dialogues were around particular topics that are troublesome in the field of media law and ethics. And I walked them through the process of thinking about: How does this topic impact you? How does it impact your family? What are the ways in which we could tackle this problem? And it was very personal how we tackled these. Then, after we got our personal reflections out, I asked them to plug the problem we were working on into AI, just to read what AI would do with it. And that was very interesting, because what they felt personally around the problem was quite different in some ways from what AI was pulling for them. So it kind of showed them AI can give you a very different perspective. It's not necessarily wrong, but your individual perspective is most important in the assignment we're working on.
Matt Wittstein (06:11):
That's a really cool use case. We talk about how AI can't really do that reflection and relating to personal experience. So you sort of flip-flopped it: instead of starting with the AI, they started with their own response and then had that comparative basis. I'm curious, at this point: have you gotten any feedback from your students specifically about the AI piece in your course?
Jessica Gisclair (06:34):
I did. It was all anecdotal, but what was interesting is they wished we could have had more time to do more of those exercises in class. We were able to do three exercises that way, and we did that for about 40 minutes. And really I didn't put any more into the course because I didn't know how successful it would be. So I thought that was interesting that they really liked approaching a legal topic or an ethical topic from their perspective, and then challenging AI to see what it might come up with. So I will try to integrate more of that next time.
Matt Wittstein (07:09):
And can you remind me what level students these are? Are these first-years, second-years, seniors?
Jessica Gisclair (07:15):
Right, so these are third and fourth years, so mostly juniors and seniors.
Matt Wittstein (07:19):
That's cool. They should be at that point where they're starting to integrate some of the foundational things that they've learned within their majors and minors, and they're starting to think about how to actually apply it. So you're hitting this AI piece somewhat early on for them, since they may not otherwise get exposed to it. That's really neat. I know you had written an AI policy or guidelines when you were getting ready for the class last time we talked. Did you adapt that, or do you intend to make any changes to it based on your experience this past year?
Jessica Gisclair (07:50):
I did use the policy I had written, and the students actually were thankful to have a policy. One of the things I heard from them, and this was in the fall, was that lots of faculty had not really thought about AI usage in the classroom. They just said, "Don't use it," as though it was a danger zone. And I said, we can use it, but here are the places and spaces, and here are the things we'll be careful with when using AI. And they appreciated the guidance. So I'm pleased with the policy I've written. What I want to do now is go back and look at my policy against what the university has been developing over the last year to make sure I'm mirroring the same kind of components.
Matt Wittstein (08:29):
So I also think about how much AI has changed in not even a year since we chatted last. How do you see it continuing to change what you're doing in the classroom and what students are able to accomplish within a classroom setting? I know one of your goals and hopes for AI was that it would allow you to distill very difficult, complex information and speed up that early learning process.
Jessica Gisclair (08:55):
Yeah, so I agree with you there. One of the things I learned was that if assignment number one deals with the fears about AI and the concerns about it, then maybe by assignment number two we can go a bit deeper and a bit quicker. And one of the ways that I'm hoping to do that is to allow students to actually develop some of their debate topics with the help of AI and get them to think a little deeper. Students come into this class without having a deep history of law; AI could get them there a little quicker, and now I feel better prepared to do that. So we can take what AI gives us and go deeper intellectually, thinking it through individually, not using AI to get to that level. So: getting the baseline with AI, and then taking it deeper with other types of research. That's my goal.
Matt Wittstein (09:46):
Something just popped into my head as you were sharing that, thinking about the different ways faculty might use artificial intelligence. I'm just wondering if you know any of your colleagues, here at Elon or in the field, who are using AI in really incredible, cool ways in their teaching and learning, where you're like, I want to do that.
Jessica Gisclair (10:05):
They may be, but no one has shared it with me. They're keeping it as a top-secret kind of thing. But I do know professionals who are using it, and right now they're mostly being very cautious with it, using it to clean up work, not really create it. The other piece I'll say, which I hear much, much more about because of my legal background, is professionals who are concerned about losing their copyright to AI. And so one of the pieces I want to develop in my media law class is a big discussion around copyright and AI use, to delve into that a little bit more specifically. Just briefly, I'll say that I know the US government has looked at ways to manage the concerns around AI and copyright, but we haven't gotten very far. It's not out of the committee discussion stage yet.
Matt Wittstein (10:54):
And I think we had a brief conversation last time about how we just haven't had the time to wrestle with some of the moral and ethical dilemmas that a quickly evolving technology is presenting to us as a society. So it's good to see that you are thinking about that, and that our political system is considering it in some ways, although obviously there are probably going to be two sides to all of those types of things. But it's good that people are starting to have those conversations about how ethics intersect with technology in this way. So I want to just wrap up: I know you do some research work on AI specifically, and I wanted to ask if you'd like to share anything that you're currently getting into that you're excited about for the upcoming year.
Jessica Gisclair (11:37):
Absolutely. So I do some research in AI, and I specifically look at what's happening with AI use in China. The research that I've conducted so far, I think, is rather fascinating. I look at the legal and political space in which AI is being developed, and also its use and implications for citizens. What government thinks is great, because AI can help government be very efficient, may not necessarily provide benefit to citizens. Citizens often feel absolutely exposed to AI, with very little protection of their privacy rights. So AI has a great space for government, but not always a great space for citizens. And generally, what I have discovered is that most citizens are very aware of AI and very much want to protect their personal information. It's just difficult for them to do so, given the way the government is implementing it.
Matt Wittstein (12:39):
Well, Jessica, thank you so much for letting us have a little refresh episode with you. I'm glad that you had what sounds like a very positive experience with AI in your classroom, and I hope you'll continue to share out what works and what doesn't work, so we can learn more from and with you.
Jessica Gisclair (12:55):
Absolutely. Matt, thank you so much for having me back.
Matt Wittstein (13:17):
Limed: Teaching with a Twist was created and developed by Matt Wittstein, Associate Professor of Exercise Science at Elon University. Dhvani Toprani co-produces the show and is Elon University's Assistant Director of Learning Design and Support. Olivia Taylor is a Class of 2026 Music Production and Recording Arts major at Elon University and a Summer 2024 intern for Limed: Teaching with a Twist. Original music for the show was composed and recorded by Kai Mitchell, an Elon University alumnus. Limed: Teaching with a Twist is published by and produced in collaboration with the Center for Engaged Learning at Elon University. For more information, including show notes and additional engaged learning resources, visit www.centerforengagedlearning.org. Thank you for listening, and please subscribe, rate, review, and share our show to help us keep it zesty.