Unpacking Education & Tech Talk For Teachers
Learning Evolution: The New Era of AI in the Classroom, with Carl Hooker
When teachers are first exposed to generative artificial intelligence tools like ChatGPT, it’s easy for them to identify potential concerns. Will the AI do all the work for students? Will students use it to cheat? Will students lose the ability to be creative and think critically because the AI can do those things for them?
Our guest, Carl Hooker, believes we can strike the proper balance and keep the human at the heart of learning, even with AI in the equation. He says, “I like the idea of keeping the human in the center. AI is the assistant. AI is the feedback. AI is the tutor, the thought partner, but it’s not the learner. The learner is still the most important part.” He adds, “Artificial intelligence will never surpass human compassion.”
Join us as we talk to Carl about his new book, Learning Evolution: The New Era of AI in the Classroom, and explore practical ways that students, teachers, and systems can adapt to the changes brought to the learning process by AI.
Visit AVID Open Access to learn more.
#264 Learning Evolution: The New Era of AI in the Classroom, with Carl Hooker
42 min
AVID Open Access
Keywords
ai, teachers, book, give, write, learning, students, tool, carl, concern, cheating, generate, bias, talking, winston, classroom, teaching, education, rena, turn
Speakers
Carl (70%), Paul (11%), Winston (9%), Rena (9%), Student (0%)
Carl Hooker 0:00
I want to make sure we're still learning the foundational skills and not just letting an AI do everything for us—what we need to change in education and why we need to change it, and why I think AI might be finally the thing—oh my gosh, this is gonna save me so much time.
Paul Beckermann 0:13
The topic of today's podcast is learning evolution, the new era of AI in the classroom with Carl Hooker. Unpacking Education is brought to you by avid.org. AVID believes that we can raise the bar for education. To learn more about AVID, visit their website at avid.org.
Rena Clark 0:33
Welcome to Unpacking Education, the Podcast where we explore current issues and best practices in education. I'm Rena Clark.
Paul Beckermann 0:44
I'm Paul Beckermann.
Winston Benjamin 0:46
And I'm Winston Benjamin. We are educators.
Paul Beckermann 0:50
And we're here to share insights and actionable strategies.
Student 0:54
Education is our passport to the future.
Paul Beckermann 1:00
Our quote for today is from our guest's new book, Learning Evolution: The New Era of AI in the Classroom. In the book, our guest, Carl Hooker, writes, "AI will not replace a teacher. However, a teacher that utilizes AI correctly in their daily work will be a much more effective teacher, which means students will become much more effective learners." All right, Winston, what's that making you think about?
Winston Benjamin 1:26
So, for me, I hear a bit of this idea of efficiency, right? It's about maximizing productivity while minimizing waste and effort. And I think a lot of the time, teachers are doing so much work that they end up wasting personal time and physical and mental energy without even seeing it. So I think having the opportunity to be more efficient and effective is really important, and it's important for teachers to realize how they can use a tool to maximize their efforts to support students.
Paul Beckermann 2:02
Yeah, efficiency is always good, right?
Rena Clark 2:05
Yeah, and not feel guilty about it. I have some of that, but it's interesting. Somebody was saying, "Oh, that's cheating, how you're doing it." I'm like, "So?" You're not cheating. It's being more efficient. So you're just jealous because I have more time to go home and do other things. But it's interesting how attitudes shift with something new. I was also thinking about the part about replacing teachers, or that human aspect. There are so many aspects of teaching that I just don't think can ever be replaced, and I think that's where that nervousness comes from. We talk about it all the time on here, but it's that relationship building, that human relationship piece that's never going to be replaced: the connections, how teachers and educators can inspire and show empathy. However, as you said, Winston, AI can increase teachers' ability to provide more personalized learning and differentiation, and to do it in a more streamlined, efficient way, so that they actually have more time and energy to build relationships with students and have more one-on-one face time, rather than doing the busy work.
Paul Beckermann 3:19
So it's not really cheating. It's just being smarter about it.
Rena Clark 3:22
Yeah.
Winston Benjamin 3:22
Work smarter, not harder.
Paul Beckermann 3:24
Using your tools, right? All right. Well, we'd like to welcome our guest, Carl Hooker, to the show. Carl has been an educator for over 25 years. He has held a variety of positions in multiple districts, including first grade teacher and director of innovation and digital learning. Among other accomplishments, Carl's written eight books, and he's a keynote speaker, a consultant, and a podcast host. So welcome, Carl.
Carl Hooker 3:47
Hey, thank you for having me. It's good to be on this side of the microphone, so to speak. I'm always the host. So as Rena was talking and Winston was talking, I was like, I wanted to jump in with follow-up questions. I'm like, "Wait, stay back. I'm the guest." But thank you all for having me. Appreciate it.
Paul Beckermann 4:01
You bet and congrats on the new book.
Carl Hooker 4:04
Thank you. Yeah, you know, after so many times writing books, it's been interesting to see. Because, of course, with a book about AI, the first question I usually get is, "Did you let AI write the whole book for you?" I'm like, "Have you seen how I write? No, not at all." That being said, I will say it is a great thought partner. I've heard a teacher refer to it as that, and I think that's a great way of saying it. In previous books that I've written, whenever I've had a creative block, I'd take a week or two off and just separate myself from it. In this case, because I knew that the topic was ever-present, I would use AI to help me push through the block, and I think that was extremely helpful. When I got stuck on a category, I'd say, "How about brainstorming some ideas for what social studies teachers would use in high school with an image generator?" It would give me a few, and I'm like, "Well, those are kind of generic, but I can now take a couple of these and then apply my own spin to it, that human angle." And so yeah, it was a fun book to write. And I'm updating it already, of course. But it's not really about tools. You see the ChatGPT-for-teachers books, and those are great. This is more of a higher-level view about what we need to change in education and why we need to change it. And why I think AI might finally be the thing that gets to what Rena was just talking about: personalized learning, the thing we've been talking about for several decades. Maybe this will be the thing that does it. I don't know.
Paul Beckermann 5:30
For sure. I'm kind of curious, you know, as you did your research for the book, and were out and about talking to teachers, what was their reaction to AI in the classroom? What were they seeing as opportunities or concerns? Or maybe a little of both? What were you hearing out there?
Carl Hooker 5:46
I started the research on this last January, which was about a month and a half after ChatGPT came out. And we're recording this in January of 2024. And what was interesting was, as I approached teachers in the spring, there was a lot of fear, concern, trepidation. Then they started to embrace it a little bit. By August, I saw the biggest concern being academic integrity and cheating, which Winston mentioned too. So what's happening now is there's still that concern, but a new concern has risen over the last couple of months. And it is the concern that students are going to be overly reliant, or humans will be. And therefore we're going to lose our creative thinking and our critical thinking. There's a counter to that, too, you know, because AI isn't doing it for us. We still are the ones that are prompting it, as of this moment. I mean, in 20 years, it may be doing a lot more of it for us—or even in 10 years—but right now, it still takes our creative and critical thinking skills to make it generate what we want it to generate. I'll be honest: As much as I love AI, I do also have those valid concerns. I want to make sure that we're not getting lazy as learners. And that's something that makes me nervous. I want to make sure we're still learning the foundational skills and not just letting AI do everything for us.
Winston Benjamin 7:00
Go ahead, Paul.
Paul Beckermann 7:02
Yeah, I totally get it. I've been kind of in the process of trying to write my parents' history. So I've been interviewing them and then trying to write that up into a book. It's so tempting just to dump all my notes in there and see what ChatGPT will do. And I actually did that a little bit last night. I did not like it. I scrapped it. It was so verbose, and it was making stuff up. And I was like, "That's not my parents." So I went back to writing it myself. But it did help me generate the questions to get started, and it helped me do some of those things. So it was a co-author for me or a thought partner.
Carl Hooker 7:34
Yeah, my dad, who I dedicated the book to—I didn't do just a one-word dedication. It was three pages, because he passed in April, and he's always been my editor for my other books. So I wrote a big series about him and how AI played a role in extending his life with some of the technology he had in his body to keep his heart pumping. But what was interesting was, when he passed, my mom said, "I want you to do the eulogy. You're a speaker." And I'll be darned if I didn't sit there and stare at a blank screen for two weeks. I couldn't put a word to paper. And what did I do? I felt guilty originally saying this, and now I don't feel so guilty. Kind of like what Rena was saying about the guilt of using it, right? But I did use AI to start. It didn't know anything about him, but I said, "Talk to me about someone who served in the military, who went to Vietnam, who served his country, who did his civic duty." Because he worked for the Federal Bureau of Prisons. And he was also very generous. I gave it all the general features, and it gave me a skeleton, which I then filled in with story. But I would say only 20% of it was AI driven. And when I got done, everyone was like, "Wow, that was a great eulogy." And I was like, "Well, I got an assist." And again, I don't know why, but we still feel a little guilt. And maybe, Rena, in a couple years, we won't feel that. Google was the same way, right? We felt guilty Google searching to figure out the answer. That was 20 years ago, but maybe we'll get over that.
Rena Clark 8:55
Yeah, I hope so.
Winston Benjamin 8:57
Thank you for sharing that story about your pops. I'm sorry about his passing. But I've got a quick question, because I know a lot of directors and people who are making decisions are worried about the how and what and why, and are trying to make better policies around AI. As a former director of innovation and digital learning, what gets you excited about AI, and what gives you pause? Just to help put a framework around it.
Carl Hooker 9:29
Sure. And I think that's a great question. Because, you know, people in roles like the role I've had for several years—in 2011, we rolled out the first one-to-one iPad program in the state of Texas. So I was basically an app manager. And I feel like this is deja vu all over again. There's an app for that, there's an app for that; there's an AI for that, there's an AI for that. So what I did then is the same thing I'm doing now, which is looking at, first of all, what the tool provides. What does it give us that actually helps with learning? Secondly, and probably actually firstly, what is it doing with our data? What is it doing with student data? We're seeing an influx of AI apps. And don't get me wrong. If I looked at my monthly bill right now, it's almost like streaming services. I've bought so many AI apps lately that every month I keep renewing, because I'm trying them all out. But I also make sure I vet them before I hand them over to teachers and show them. So looking at those apps and vetting them for data privacy is something that I would be concerned with. But what gets me excited is, again, the idea of efficiency and what it could possibly bring. When I show it to teachers, I love that feeling, that a-ha! moment that happens. And it always happens at every training that I do, where someone goes, "Oh, my gosh. This is going to save me so much time." Or, "Oh, my gosh. This is what I've been needing for a long time." That one tool, whether it's MagicSchool or even ChatGPT, helps them just come up with an idea or a framework around, say, a weightlifting program that they're doing with their freshmen. You know, stuff like that. It's giving them that idea and that thought partnership that they didn't have before. So that makes me excited, because I do think teachers are under a bit of a time famine. We have been for a long time, as we continue to cram more standards and more responsibilities on them and not give them more pay.
We needed something like this to kind of help us get through that part of that. So we can say, "Okay, now we can actually go back to the human side of teaching. We can take away all the administrivia," as I call it, "and start actually working on the human part of teaching."
Rena Clark 11:21
Yeah, I see that as a huge benefit, moving forward. And I want to go back to what we talked about—both you and Paul and even myself—how we've used AI. But I think about my own kids. I have some fourth graders and a sixth grader. And so they're a little bit older. But then we have these kindergarteners. If AI is all they've known, they're in a world where AI has always existed. I still have concerns about building, as you said, that creativity. How do we build those critical thinking skills alongside AI? What are some of those best practices that we suggest for schools or teachers to use to really integrate AI responsibly?
Carl Hooker 12:06
Well, the good thing right now is elementary, in theory, because we know no kid has ever used anything that says 13+ if they're under 13. Right, kids out there listening? So the good thing is that teachers are still very much the gatekeepers at the elementary level, which is where you're building a lot of those foundational skills. That said, I think it's important for teachers, especially in third, fourth, and fifth grade, to start some early modeling. I did this with my daughter's class. I do a lot of guest teaching, just to keep myself in the classroom, and I brought AI into a fourth grade lesson. I had them all write a story about something they had done over the weekend, and then I had AI write a story about something I had done over the weekend. And I said, "Well, let's see how much of this is actually true." And we went through it and talked: "I didn't do this; I didn't do that. It's not even close." And then I said, "Let's edit it. Let's change it. Let's add some spice to it." Because, you know, AI produces what it thinks is the most probable and plausible answer, so it's never going to give you that really humanistic, exciting kind of spin. And that was kind of fun for them. So modeling that at an early age, I think, is important. A good friend of mine, AJ Juliani, has a graphic that he shares, which I shared in the book: Red light means this is a non-AI lesson; it's a foundational skill that I need you to know. Yellow means we're going to use AI for components of it; there's some foundational stuff, but you're also going to be able to use AI for certain parts, and we're going to identify those. And then green is: I don't care if you use AI or not, because at the end, you need to demonstrate that you understand whatever it is we taught you. So yes, go ahead and use as much of it as you want.
However, at the end, you're gonna have to be able to do it backwards and forwards, probably through some sort of verbal representation, through a presentation of some sort. So I think having those kinds of graphics helps, much like we did with cell phones 15 years ago: put the phones away; now you can take them out. And we may go through that again with AI, to be honest, because with cell phones, for a while, we banned them. Then we said, "No, use them for Kahoot! and use them for everything else." And now what are we doing? Oh, I see them. They're back in pockets again, because we're worried about the distraction. Now this is different, because it's not a tool that's trying to get their attention right now like a phone is, but it'll be interesting to see whether we go through that same cycle with AI. So right now, I don't know about policy, because I think policy takes months and board action and is probably already behind by the time it's published. But I do think guidelines and guardrails and best practices, having those in your classroom, that's smart in terms of just identifying to kids, "Hey, this is okay, and this isn't." And I'll tell a quick story. She would get mad at me, so I won't say her name. I have three daughters, and my oldest is a freshman. So now I've identified her without saying her name. And if her teacher hears this, I'm sorry. Last month she was struggling with an ELA paper, and she was like, "I need to write this paper. It's ethos and pathos. It's four paragraphs. It's due at midnight. It's nine o'clock at night, Dad, and I'm struggling." So I asked, "What does your teacher say about using AI?" She said, "She doesn't say anything," because they haven't been trained on it yet. By the way, I'm going there in two weeks to train their entire staff. So I said, "Try ChatGPT. See if it can get you started."
And then I said, "I want you to put it into Google Docs and look at the revision history to see how much of it you just copied and pasted. Because the first thing I would do as a teacher is say, 'Wow, you just took a chunk of text and pasted it.' Now I want you to clean it up and make it your voice. Now I'm going to take the device from you, and I want to quiz you on it, to see how much you know verbally." And she was able to respond really well and knew it backwards and forwards. And I said, "Okay, now go ahead and turn it in." Well, the next day, she sent me a text saying, "Dad, I got 100 on that ELA essay using ChatGPT." And I was like, "Wait, you didn't just use ChatGPT. It assisted you. Be careful how you choose your words." Because that does sound like cheating if you say it in a certain way, right?
Paul Beckermann 15:51
That's the whole boundary that we're all trying to figure out. You even mentioned it as an author—where is that boundary between helping and doing for us?
Carl Hooker 16:00
Yeah, my book is on Amazon and also Barnes and Noble right now. With Amazon, when you actually publish your book, it now asks you how much of the book was written by AI and how much of it you utilized AI for, and it asks you that with the images, too. So it's not just the words, but the images. For the cover of my book, I used Leonardo.ai to actually generate that image. It's a beautiful, artistic AI app. I don't use it for schools, because it's also very much unfiltered, so I don't ever suggest it to schools. But the art that comes out of it is just phenomenal. So I used it to generate the cover, and of course, when you upload the book, you have to state all that now. So even Amazon's acutely aware of it, and I'm sure more and more platforms will be as we go forward. I'm sure you guys have read something in the last few months where you're reading a blog and you're like, "This looks an awful lot like someone just copied and pasted whatever ChatGPT said." You don't even need an AI cheat detector. You can just kind of tell now, because it always does bullets, or, "Here are five things you should know about that." Right? It seems like that's the goal. I think teachers are gonna be more keen to that. When I was a teacher, I called it "parent intelligence." I knew when a parent was actually doing the work for them. That's PI, not AI. But you can tell, right? You'd be like, "Oh, I know exactly what this is. That's not your work. You could never have written that in your life." So I think as we get more adaptable as teachers, we'll start to pick up on that, too.
Paul Beckermann 17:23
Yeah, if the email starts with, "I hope this email finds you well."
Carl Hooker 17:29
Dead giveaway. And it signs off with "best" or, "be well."
Paul Beckermann 17:33
Yeah, exactly. They're all the same. They're all the same. So you mentioned that teachers are still learning this themselves. And in a lot of districts, teachers haven't had a chance to get the training with all the other things that are on the list to learn. So if you are going into another district, what are the things that teachers do need to be aware of and kind of need to know in this journey?
Carl Hooker 17:58
Yeah, when you go into a room of teachers, I feel like it's about 30 to 35% who have used these tools. So that's still not a lot. But what I let them do first is get all the concerns out. So the first thing I always do in any training with AI is say, "Tell me everything you're worried about." And I'll tell you, it varies from academic integrity to "Terminator" and the world's gonna be taken over, which is always the extreme example someone uses. But once we get that off our chests, I say, "Now let's look at some examples of how it can be used." And then you gotta give them time to play with it. Because I'm a person who likes to use it. You can tell me about it all day, or show me a video, but I need to get in there and actually play with it. So I usually start by giving them something fun to do. I'll say, "Put in four things that are in your refrigerator right now and have it write a recipe for those four things." Or it's New Year's, you've got New Year's resolutions, you're trying to do a workout plan: have it generate a workout plan and see what it comes up with. And usually they see that it's decent. Not great, but pretty decent. I'm like, "Well, it's a good start." And that's what I try to tell them: it's a rough draft; it's a good starting point. That's with large language models; with the image generators, it's a little bit different. I do activities with those, too. The best one I've done recently is trying to get an image generator to draw a picture of you. You actually go in, type in text describing yourself, and see what it generates. First of all, it's a hard challenge, because you have to really look at yourself in the mirror and say, "Oh, okay. I'm a little bigger than that, or I have a little less hair, or I have a little more gray in my beard." So they go in and actually describe themselves, and then I tell them about re-prompting, editing and fine-tuning their prompts, and getting that image just right.
And so that's a fun way of just seeing it. They laugh at it. And then, of course, they all look around the room and say, "Oh, this must be Mary," or "That must be John." But I think it's a good way to get them started. And then we travel down the bias path. I wrote a whole chapter in the book about bias. I think that's something they all have to be aware of: the bias that comes with these tools. And by the way, image generators are probably some of the worst when you talk about bias, because when you write in, "Draw a nurse taking care of a robot," the robots all look very different. But let me ask you three: what do you think the nurses look like?
Rena Clark 20:11
White woman.
Carl Hooker 20:13
What gender? Woman is correct. Not white, though. You'd be surprised. They're all the same race, usually of Filipino or Asian descent. And so you'll see this. They've started to tweak some of the algorithms now, but for the first few times I did it... By the way, when I asked for a basketball player reading a book, what do you think?
Winston Benjamin 20:30
Black male.
Carl Hooker 20:31
Tall, black male reading a book? Yep. The book is always different. The person, never different. It's a great conversation to have with teachers about why the tool itself isn't biased. It's taking our own inherent biases that we put on the internet and producing them back for us. So being aware of that bias, and then figuring out how we can adjust to it, I think, is important. It always touches off a good conversation with teachers when we bring it up.
Winston Benjamin 20:55
I appreciate the way that you're supporting teachers and engaging with it. And the examples you gave of how to learn about AI seem like they would be beneficial for students. But the question that I'm going to ask you, and I think it goes back to the way you asked your daughter to talk about her work, is how do we help students learn to use AI to co-create with them, rather than having AI create for them? Especially in writing and research, because kids speak as if they are just actors in it, not the person generating it, right? Like your daughter. So what are ways that we can support students in thinking about AI co-creating with them?
Carl Hooker 21:41
I'll tell you, first of all, the way not to do it. And the way not to do it is to say you're not allowed to use it. Because once you tell a kid they're not allowed to use it, they're all going to use it. To be fair, institutions like the New York City public schools, the DOE, blocked it initially. But to be fair to them, a couple months later, they came back and said, "Okay, here are some structures that we could use." So I think it's about having that guided use and actually having them use it. I use a story of a friend of mine. I wrote this in the book, too, and I have it on one of my podcasts. He's a professor at USC who does public diplomacy, Dr. Nick Cull, and in his class for the last five years, he's had students write a position piece from another country politicking in the United States about climate change or some other topic. What he did this year is he said, "I want you all to use AI, and I'm gonna force you all to use it." So they all had to use ChatGPT to write their paper. And then he said, "Now turn it in, no edits, no nothing." And so they all turned it in. He goes, "Okay, you can all get a B minus right now, or you can make it an A paper. And if you want to make it an A paper, you really need to know the country, you need to know the topic, you need to know the stuff that AI doesn't know. You need to know where the biases were, where the misinformation is," because there's a lot of misinformation that comes out of it. And he said that, as a result, the papers he got were a lot higher quality, and the college students actually knew a lot more about the topic than in previous years, when they just went and did some research, wrote a paper, and turned it in. They had to actually be smarter than AI. So I love that example of leaning into it a little bit. I think we're going to have to do that with students: say, "Okay, lean into it, use it for this, call out the spots where you're using it."
And not just for the written stuff. I know you're talking about writing and research, but I also know a high school teacher who teaches photography. And he said to me, "This is going to ruin everything. I can't tell what's real and what's not real." And I said, "What do you mean?" And he showed me a video of a kid taking a picture and then adapting it using generative fill in Adobe Firefly. And he's like, "That's not a real photo. I'm just gonna flunk him." And I said, "Wait! Or you could say, 'Turn in your original photo, the one you originally took, and I'm gonna grade you for originality on that. But I also want you to turn in your AI-assisted photo, and I'm going to grade you on the way that you've enhanced it.'" Because, let's be honest, Photoshop has been around for three decades. So why not lean into that and say, "I'm gonna give you points for originality here and points for AI assistance here, and then use the two as a combined grade," versus, "Well, that's not a real photo, so I'm just going to flunk you." I think it's about reframing some of our thoughts around that. And then, again, giving points for originality. And lastly, I'll just say, the one thing that kids can't cheat is process. You can't use AI to cheat process. So reflecting on how you learned it: What did you learn about it? What were the emotional impacts? What did it feel like? What was your opinion? How did your opinion change when you started this project? Those kinds of questions that AI can't answer are the ones that I would be asking as a teacher, saying, "Okay, I'm gonna give you a grade for your final product. But I'm also going to give you a grade for how your process of actually learning this information went." And I think that's probably the biggest shift that's going to have to happen. If you just say, "Turn in a five-paragraph essay," then yes, there's going to be a likelihood that kids are just gonna use AI to do it.
But if you say "Turn in that essay, but also tell me about the process of what you did to research it. What did you learn from it? What was the takeaway? What was something that surprised you about it?" Things like that. That's a little bit harder for AI to cheat on at this time. So I'd say that balance of process and product is huge.
Winston Benjamin 25:04
So we usually have a T-shirt thing, and "You can't cheat the process"... I was gonna say, "Oh!"
Carl Hooker 25:12
Yes, that's it. It's definitely a T-shirt moment.
Rena Clark 25:15
And we can do an AI-generated photo to go with it.
Carl Hooker 25:21
I have a shirt that says CarlGPT. I didn't wear it for the show, but I do have a shirt that says CarlGPT. I'm generating my own AI bot right now, and it's going to help me answer all the spam emails that I get. I'm working on spamming spammers.
Rena Clark 25:38
I appreciate that.
Paul Beckermann 25:41
It's like keeping those marketing phone callers on the phone for too long.
Carl Hooker 25:44
I do it too. I think that's a pastime. I'm gonna keep texting back and forth. "What are you talking about? I don't know what you mean. Where do I send the gift card? What do you mean?"
Rena Clark 25:55
I really appreciate that in your book, and even in this conversation, we've been talking about learning best practices, not just about any specific tool. But I'm wondering if you have any favorite tools. You've mentioned a few. What's captured your own interest and imagination?
Carl Hooker 26:20
You could ask me this question in two weeks, and it'll be different, because this is just, again, the world we're in with the explosion of these apps right now. The latest one I'm into right now is HeyGen. It's a video capturing tool. Essentially, what you do is take two minutes or more of video of yourself. You can either record it right on the spot or, in my case, I took some previous podcasts or webinars I've done and uploaded that video content into it. Then I can use text to generate videos of me talking, and it looks absolutely 100% spot on. I can't even tell it's not me. In fact, I've shown it to people, and they're like, "That was just you recording yourself." I was like, "No, it wasn't me. And here's how I can prove it." Because HeyGen also lets you do real-time translation. So it could be me speaking in Mandarin or me speaking in Spanish, and my mouth actually changes to form the words. We showed it to a teacher who knew German, and he was like, "You know, it's about a seven out of ten in terms of how accurate the translation is." It wasn't that the words were incorrect. It was just that he could tell my mouth wasn't perfect on some of the formation, and that threw him off. You remember the Japanese martial arts films of the late 60s? Similar: the mouth wasn't quite in sync. But it's getting closer. And what I love is the potential to say, "Okay, I have 22 different languages spoken in my school district. I can all of a sudden send out a message to all 22 of them in their specific language, with me talking." So it at least bridges a little of that gap. Maybe in the past you said, "Well, if you don't speak English, sorry. I guess you're just gonna have to figure out what I'm saying." Now that gives you a better chance, and I feel like that's gonna be a lot more inclusive. So I'm excited. I just started playing with it.
I haven't paid for it yet, but I'm pretty sure I will in the next week, because again, I just hand the credit card over when a new tool comes out. But that, to me, is one of the most exciting ones. And I think it has a lot of good potential. Which, by the way, it also has potential for negative consequences. Because the first thing I thought of was, some kid is gonna make their principal say something that they didn't actually say. And you know, that's coming. By the time the school year is over, that'll have happened at least once. Someone will get wrongfully fired. Some student will get in big trouble because they made their teacher or principal or somebody say something they shouldn't. And we'll have no idea because the deep fake is so real. Not to mention the 2024 election, which is coming at the end of this year. You can only imagine what joy that's going to bring with AI. So those are things that I do concern myself with. That doesn't keep me up at night quite yet. But I'm starting to get a little worried. I want to make sure that we can differentiate.
Paul Beckermann 28:47
Yeah, that disinformation and misinformation is going to be top of the list, I think.
Carl Hooker 28:51
Oh, yeah. I mean, you remember 2016, and what social media did or didn't do—whatever you believe. But I think there was influence. And so this will have influence in a different way. And I think us, because we're kind of into it, we're going to be very keen to that. But we know that there's a huge generation of people that aren't keen to it, that are going to be kind of duped by it. And I think that's going to be the thing that we have to worry about and make sure that we're teaching our kids about. I think our kids are pretty savvy, but that's just that next media/digital literacy skill that we need to teach them.
And to be honest, even those of us who have engaged in it—it's still so early on, it's hard to know if it's real or not. It really is.
Yeah, that's the vetting of sources, making sure you're getting accurate information. There was a principal that was fired a couple days ago. I don't know if you guys have seen the story. It was audio tapes of a Baltimore principal. He was fired because an audio tape was released of him saying very racist things in a meeting that someone obviously was recording as he was going off on people of color. He was going off on Jewish people. And basically he had this whole rant. As I listened to it, it was interesting. I was like, man, he should be fired. And then because I've worked with AI so much, I started thinking, somebody could have easily doctored that. And I think he actually did say it, by the way, because he's kind of come back and said, "Sorry." But I could see now the next thing will be, "It wasn't me; it was the AI me that said it. It wasn't me that actually said it." That's gonna be the new excuse. Like when someone hacks your social media account. "I didn't post that tweet. It was not me."
Paul Beckermann 30:28
I think you're right. We talked at the very beginning about teachers' concerns changing a little bit over time as they've gotten more used to and comfortable with AI. And that creative thinking and critical thinking piece has become a concern for teachers; they don't want the AI doing all the thinking for students. Do you have any thoughts on how we can make sure that students continue to develop their creativity and their critical thinking skills in an AI-infused environment?
Carl Hooker 31:00
Yeah, there's the example I shared with you about making your own image and how you have to be creative to come up with those images. One thing I've done with staff and high school students is a human intelligence versus artificial intelligence challenge. I'll do a lot of these different activities where we battle against AI to see how our answers compare. For example, I'll give you a quick one that's fun to do. It's about a minute long, and I basically say, "Alright, I'm gonna give you 1 minute. You and a partner need to come up with as many items as you could find at a barbecue picnic. Just bounce them off each other and count them as you go. Ready, set, go." And I hit the timer, and it goes, and they go, and they list. The highest I've ever heard a group get to is 42, which is still a lot in 60 seconds. I mean, that's a lot of items. Average is around 20-ish. And then I say, "Okay, now I'm gonna ask AI to list 50 things found at a barbecue." And as it's listing them, people are watching the list as it grows. They're like, "Oh, there's one. There's one." And then when it's over, I'm like, "What did it miss? What is something you came up with that it didn't?" And all the hands go up. And I'm like, "What? It didn't come up with mosquito bites, sweat, good tunes?" I mean, stuff that's more humanistic. It gave us a list of utensils and barbecue propane. And I said, "That's awesome." So you can see how the infusion of both is going to be so important. You come up with the creative side of things. It's giving you the most generic, basic, what-it-thinks-you-want list. So to your question, going back and showing where it started, and challenging students to come up with the creative parts above and beyond what AI can produce, is pretty easy right now. I think as AI gets smarter, it'll be harder later on. We might have to have parts where we do challenges or mimic some of the things that we think AI might generate.
But right now, it's really easy to combat it with our own creative element, because there's things that AI still can't quite process. And I don't know that it ever will, just because we're the ones that are inputting the items. And we still have a layer of creativity in us that can't be reached by technology yet, because really, it's just pulling databases at this point. And it doesn't have all that information.
Winston Benjamin 33:04
So you were starting to mention some of the things that almost keep you up at night, but not really. But as you think about the future of AI, what excites you? What's on your mind about the future of AI and education that could be like, "Yo, this is really digital jazz"? I don't know if you've ever seen Tron 2, but you know what I mean?
Carl Hooker 33:27
I like that. That's good. We talked about personalized learning—but I also think there's an equity conversation that plays a role here that we don't hear about enough. And when we talk about cheating, I bring this up a lot. Certain kids get access to tutors, because they have the money to do so. So they can pay for a tutor to write their college essay. We don't really judge that. We don't say that's cheating. Certain kids have access to a mom or dad at home that aren't working or aren't full time. They're there for them; they're helping them with their work and they help them turn in their work. Again, we don't judge them. Certain students have access to really good teachers that give them small group and one-on-one instruction. Not all kids have access to that. So for the first time that I can think of, we're actually gonna have an opportunity for kids that don't have access to those humanistic things to actually have that peer to help them evaluate, help them revise, help give them feedback, help tutor them, that they didn't have before. So I'm excited for the potential of that. Of course the downside of that is if we block it, or certain schools block it, then we've just started to widen the gap yet again. So I'm hopeful for that. We can finally say we're leveling the playing field. And I think the biggest a-ha moment for me was last June when Mark Zuckerberg and Elon Musk and the people that make billions of dollars said, "Whoa, whoa. Slow down; we shouldn't use it." And I was like, when the millionaires are telling you to slow down, it's because they're worried that we're going to take over. So let's keep using it; let's keep accelerating, because they're obviously concerned that something is going to take over their stance in the world. And so this is an opportunity for people that didn't have opportunity before. So I'm excited for it. Again, cost is going to play a role in that too. Most of them are free right now. But eventually, they're gonna start charging.
And when they do, that's when we're gonna have to really keep an eye on it. Which is why I like tools like Khanmigo, which uses ChatGPT. Khan Academy said it's free for life. So I'm hopeful that that'll be true for students, so students have access to it.
Well, it's that time. I feel like we could talk about this for much longer, but we got to move into our toolkit. So it's time to ask the question, "What's in your toolkit?"
Student 35:51
Check it out. Check it out. Check it out.
Rena Clark 36:02
Winston.
Winston Benjamin 36:04
I'm gonna say this. You—and I mean the royal "you," meaning all of us teachers—remember that teachers can model for students how to use AI, because students are getting so much information about its use outside of school that could be wrong, uninformed, or create a lot of problems. So as teachers, we can be the source of critical engagement with AI to support student learning. Remember, we have a lot of control.
Rena Clark 36:36
All right, Paul.
Paul Beckermann 36:38
I would say go to Carl's website. Go to carlhooker.com. He's got all his books on there. You can find them there. He's got his blog on there. And he's got links to a couple podcasts that he's involved in. Great resources on there. Check out carlhooker.com.
Rena Clark 36:54
All right, I have so many things, but I'm thinking of the teachers I'm working with right now. A lot of them are just discovering magicschool.ai and trying some different things on there. And as Carl mentioned, really measuring some of the things that can be produced on there compared to what they would do on their own, and making some informed decisions about what is going to help and what's actually going to cause more stress right now.
Paul Beckermann 37:27
Hey, Carl, we'll let you play along too. What would you like to drop in our toolkit?
Carl Hooker 37:32
Well, let me add real quick, and then I'll do mine. On MagicSchool: you know that MagicStudent is coming out soon. And also, the IEP Generator in MagicSchool is a time saver. I wish I had that when I was a teacher. For me, it's gonna be similar to Winston's. I think human compassion. I mentioned at the outset about my dad; he had a will to live like nobody else. They have this machine called an LVAD that basically pumps your heart for you and uses artificial intelligence and machine learning to keep you alive a little bit longer. Without that kind of technology and the advancement of AI and machine learning, my youngest daughter, who's 10, would never have known her grandfather. He would have passed when she was around 3 years old. Because of the advancements of technology like AI, I think there's a lot of potential for us to be around, be more present, have more time, and be more human with those around us. So I always say human compassion will never surpass artificial intelligence.
Paul Beckermann 38:31
Another T-shirt.
Winston Benjamin 38:35
Right. So in this segment, what's the last thing that's running around in your mind that you want to share? What's something you're walking away from this conversation still puzzling over?
Rena Clark 38:51
It's time for that one thing.
Paul Beckermann 39:01
It's that one thing.
Winston Benjamin 39:04
Paul? Rena? Who'd like to start, Paul?
Paul Beckermann 39:08
You know, I do like the idea of AI as a learning partner. I think we need to figure out how we work together. Just like we learn to work in groups and with collaborators, we need to learn what the role of AI will be in that collaboration process to make us better and more efficient, but not suck the humanness and the creativity out of the process. I think that's going to be an evolution that we're gonna have to work through together.
Rena Clark 39:34
I was thinking very similarly to Paul. I love this. I like the stamping of HI, human intelligence. So HI versus AI, and just making sure that we're transparent and providing opportunities for our educators and for our students to be able to dig into that and understand and have experiences like you talked about with the barbecue or other things. Or even when you went into your daughter's classroom, I love that example of creating a story and then having that comparison. So, for me, it's providing opportunities for that HI-versus-AI comparison and learning.
Winston Benjamin 40:24
I appreciate that, because that's what I've been puzzling through and thinking about. Yeah, teachers, how might we be stopping students from actually being creative? That could be a thing, right? Just like the student who took a picture, they're engaging with creativity, wanting to do their work. So how are we preventing students from engaging with something in a productive way, one that allows them to realize that they're co-creating? Sometimes we can be the block. So how do you not be the block? Carl, I'm gonna throw it to you. What's one of the things that you're still thinking about?
Carl Hooker 41:00
Right now, there's a lot of pressure on states and local governments to come up with policies and guidelines and guardrails around AI. I just saw one this morning. This kind of goes to what Rena was just talking about. The state of Washington just released this morning what they call their human-centered artificial intelligence guidelines. I love that kind of terminology of human-centered, which plays into it a little bit. And I think they're the fifth state that I know of that's come out with a guideline. I think by the time this semester is over, by the end of the school year, most states will have some form of that out. It just needs to be flexible. And again, I like the idea of keeping the human in the center. AI is the assistant. AI is the feedback. AI is the tutor, the thought partner. But it's not the learner; the learner is still the most important part.
Paul Beckermann 41:43
Well, we've covered a lot of ground today, and we really appreciate you being here with us, Carl. Thanks for joining us.
Carl Hooker 41:50
Thank you so much for having me.
Paul Beckermann 41:51
I want to remind our listeners that Carl's new book, Learning Evolution: The New Era of AI in the Classroom, is available. Where's the best place to get it, Carl?
Carl Hooker 42:01
You can go to Amazon or Barnes and Noble. mrhook.it/ai is a short URL that I use to direct people so they don't have to go searching for it, but it is on Amazon as a Kindle or paperback.
Paul Beckermann 42:16
Awesome. Go check it out, and I look forward to seeing where all this goes.
Rena Clark 42:24
Thanks for listening to Unpacking Education.
Winston Benjamin 42:27
We invite you to visit us at avidopenaccess.org, where you can discover resources to support student agency, equity, and academic tenacity to create a classroom for future-ready learners.
Paul Beckermann 42:42
We'll be back here next Wednesday for a fresh episode of Unpacking Education.
Rena Clark 42:46
And remember, go forth and be awesome.
Winston Benjamin 42:49
Thank you for all you do.
Paul Beckermann 42:51
You make a difference.
Transcribed by https://otter.ai