Unpacking Education & Tech Talk For Teachers

The Promises and Perils of AI in Education, with Ken Shelton

AVID Open Access Season 4 Episode 66

In this episode of Unpacking Education, we sit down with renowned education technology advocate and author Ken Shelton to discuss his latest book, The Promises and Perils of AI in Education. Ken brings his years of experience as a middle school teacher, technology lab leader, and global speaker to a thought-provoking conversation about the transformative potential—and challenges—of artificial intelligence in education. Ken shares how we must engage critically with AI and its implementation to make sure it is used ethically and effectively in our schools and classrooms. Visit AVID Open Access to learn more.

Ken Shelton 0:00 Tell me an instance for you right now in your life where you're okay with someone else speaking for you. And of course, the kids are like, "Never." And I'm like, "So then, is it okay for me to let the large language model do my writing for me? That's my voice, right? Is it okay?" And of course, they were like, "No, no, no, no, no." And I said, "Now let's take a look at what it put out."

Winston Benjamin 0:22 The topic for today's podcast is the promises and the perils of AI in education with Ken Shelton. Unpacking Education is brought to you by AVID. AVID believes that every learner can develop student agency. To learn more about AVID, visit their website at avid.org.

Rena Clark 0:45 Welcome to Unpacking Education, the podcast where we explore current issues and best practices in education. I'm Rena Clark.

Paul Beckermann 0:55 I'm Paul Beckermann.

Winston Benjamin 0:57 And I'm Winston Benjamin. We are educators, and

Paul Beckermann 1:01 we're here to share insights and actionable strategies.

Transition Music with Rena's Children 1:05 Education is our passport to the future.

Winston Benjamin 1:10 Our quote for today is from our guest Ken, from the introduction of his new book that he co-authored with Dee Lanier, The Promises and Perils of AI in Education: Ethics and Equity Have Entered the Chat. He writes, "There's a whole lot of promise around AI in education. There's also a whole lot of perils and unpredictables." What are y'all thinking about?

Rena Clark 1:39 That quote... Having an economist father, I always think of cost-benefit analysis: we always need to be aware of the cost, even when we bring the benefit. So I think AI definitely fits into that category.

It offers incredible opportunities we've talked about on here before: personalized learning, task optimization, support and differentiation, UDL, expanding student access, giving access to all students, not just some, saving time.

However, we need to know and always be aware that AI can also amplify the inequities that already exist, because of the embedded biases within the data it's pulling from—biased data, different kinds of data—so it's actually amplifying biases as well. We also need to provide opportunities for students and educators to think critically, engage with that, and be aware of it.

The other thing I've noticed, having my own teenage children, is we also still need to provide lots of opportunities for them to think critically and engage in those processes, the writing processes, and develop those skills, not rely overly on AI, and have human conversations, so that an AI tutor isn't the only thing they interact with. So it's, "Yay, and be mindful."

Paul Beckermann 3:11 Your dad would be very proud, Rena, nice job. I earned dad points here too. I totally agree with you.

And another word that I was kind of hanging on was that last word: "unpredictables." AI is developing and changing so rapidly it's hard to know what's coming next, and that contributes to this unpredictability of how AI is going to impact society, including education, right? We just don't know for sure yet.

In my opinion, that reinforces the need to stay plugged into the conversation and keep those changes on our radar, so that we can respond with an informed mindset, because even if the path forward is unclear, if we're educated and informed, we can make better decisions as we move ahead. So that's part of the reason I'm excited for our conversation today with our guest.

Winston Benjamin 4:02 I love that, Paul. I really appreciate the idea of intentionality, because that's been the biggest piece of trying to move forward. The way to support students across all kinds of learning journeys and all kinds of educational experiences is to take the time to learn enough to make proper decisions and proper choices.

Our guest today, right? He's going to come and help us have a conversation about what it all means: How is this thing going to impact us and possibly benefit us in our work? So I just really want to thank our guest today, Ken Shelton. Ken is an award-winning educator and bestselling author. Big ups, right? His most recent book, which he co-wrote with Dee Lanier, is titled The Promises and Perils of AI in Education: Ethics and Equity Have Entered the Chat. Welcome, Ken. Thank you so much for taking your time with us.

Ken Shelton 4:55 Thank you so much for having me. I just have to throw out there, since we're on an economics track right now—part of which I hope we will discuss—part of the concern that we should have around artificial intelligence and education is the scarcity principle.

Winston Benjamin 5:19 Okay, okay. My brain is like, that's a word I think I know, but...

Ken Shelton 5:30 I mean, look, what we could really say there is this: there's a common saying that I grew up with, and that I bring up all the time now, where I tell people, if you want to apply an economics lens to it: follow the money. So in this case, with AI, it literally is a scarcity principle, and that's one of the things we should know that ideally will help inform some of the decisions we make, in addition to the accountability that ideally we can propagate.

Winston Benjamin 6:08 I appreciate that you are starting us off with a, "Hey, listen, there's some conversations that I really want you to think deeply about." But a lot of our people who listen to us are teachers, are in the education space, in the classroom, trying to figure out how to do all of this. Sometimes, when people come in, we like to help our listeners ground themselves in our speakers. So, can you tell us a little bit about yourself? Maybe introduce your work, your co-author? Give us a little bit of how and why.

Ken Shelton 6:42 Yeah, so I've been around education for quite a while. I taught primarily middle school in Los Angeles Unified School District. My work is centered around, for sure, it's been centered around equity. I didn't just start talking about equity after March of 2020, and I've been around EdTech this entire century, and so I carried that with me throughout my time teaching in various subject matters. My last 11 years in the classroom was actually in a technology lab, and that's also precisely why I'm a vocal advocate for career and technical education opportunities as well.

After I left the classroom—maybe those are reasons that should go on a different podcast—I will just say, for the audience, in a nutshell, all of the horrors, hazards, and things that can compromise the social well-being of a Black male educator. That's my career. And so that's why I left the classroom. I didn't want to, but I had to, literally, for my own survival.

That, in addition to my public speaking. From my background—I played sports in college and tried to play professionally but didn't make it; growing up in LA, I did acting and modeling for a long time, and I had actually considered studying theater in college, but theater and football don't mix. Not going to happen.

I share that because, early on, I think it was probably back about 2005, 2006, was when I first got the idea that I wanted to do a presentation at a conference. My point is that that, coupled with leaving the classroom, meant I could pursue that component of my educator life with a higher degree of intentionality and vigor, and it has definitely led to me being able to do a lot of things. I say I am living my best life as an educator because of that.

I've been fortunate to work with a lot of very high-level organizations: state departments of education, ministries of education outside the US, a lot of work with indigenous educators in various countries. I've been fortunate with my speaking engagements, too—coming up this year will be numbers 49 and 50—I will have given talks in 50 different countries as well.

Winston Benjamin 9:34 Awesome. Thank you. Awesome.

Ken Shelton 9:35 Yeah, it's a gift. I don't take it lightly, but I also have to be me and speak truth to power when it comes to examining our educational systems and institutions as well. So

Paul Beckermann 9:51 Do you want to mention anything about your co-author, Dee?

Ken Shelton 9:56 Yeah, so Dee Lanier. That's my brother from another mother. He's an educator as well, and he's actually published two books: the book that we co-authored, and another book called Demarginalizing Design. He's based in Charlotte, North Carolina. He and I just have the same approach. He's been around EdTech—that is what connected us. Then, of course, he grew up part of the time in Southern California, and his mother is retired military, so he kind of moved around, but his original roots were here in Southern California, so we connected that way.

Ultimately, our concurrent paths led us to writing the book that we co-authored, because we've seen this playbook before, and we both literally said to each other, "This is a conversation that we are not going to sit out, and we are not going to let our voices be pushed to the margins." We can bring it up in our talks, but, as he and I said, we also need to put it on wax.

That's why we became each other's creative partner, accountability partner, and thought partner in writing the book. So it truly is a collaborative effort, and that's precisely why in our book, you have several chapters where he wrote the intro, several where I wrote the intro, several where we both provided an intro, and we designate whose voice is being heard where. That's part of the catalyst for how we are going to do the audiobook as well. So

Paul Beckermann 11:42 Awesome. Well, let's talk about the book a little bit. In the book, you write about the transformative potential of artificial intelligence in education. So what do you see as some of the potential transformations that can happen there?

Ken Shelton 11:58 To the point that Winston brought up earlier for the audience, let's level set a couple of things. First of all, artificial intelligence refers to a large swath of technologies. It's been around since the mid-'50s. All of us use it, whether we want to or not, and now we say you're either using it or it's for sure being used on you, and not always in a good way.

The whole idea is to recognize that there are three distinct types of artificial intelligence: there is reactive, predictive, and generative.

Your reactive artificial intelligence includes things like voice assistants and spell check. That's why I always laugh when I see a school say, "We're going to block AI." I'm like, "So that means you're getting rid of spell check too, then, right?"

You also have predictive, which includes things like your subscription platforms. The story I always share is about Spotify. I listen to a lot of '80s music, and I primarily listen to who I believe is the greatest musician to ever live in my lifetime, which is Prince. Because I listen to a lot of Prince, what Spotify does is recommend Prince plus other '80s artists; it's predicting what I'm likely going to want to listen to.

What's come out, which has been around but just became available to the masses in November of 2022, is generative artificial intelligence. It is not the previous two types. It is one that essentially looks at patterns, historical patterns, and makes predictions. Those historical patterns influence the validity and the frequency with which it makes those predictions. That's generative. As new patterns emerge, it learns those new patterns, and those become part of the technology.
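To make that concrete for readers, here is a minimal, illustrative Python sketch of the pattern-and-prediction intuition described above. It is a toy bigram counter, not how any production generative model is actually built:

```python
from collections import Counter, defaultdict

# Toy sketch of the intuition described above: tally historical patterns
# (which word followed which), then predict the most frequent continuation.
# Real generative AI uses neural networks trained on vast corpora; this
# only illustrates "historical patterns in, predictions out."
def train(corpus: str) -> dict:
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1  # record the observed pattern
    return follows

def predict_next(follows: dict, word: str):
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

history = "the students read the book and the students wrote the essay"
model = train(history)
print(predict_next(model, "the"))  # -> "students" (most frequent pattern)

# "As new patterns emerge, it learns those new patterns":
model = train(history + " the teachers the teachers the teachers read")
print(predict_next(model, "the"))  # -> "teachers" (new dominant pattern)
```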

I do think it can... I'll put it to you all this way: I do have the luxury of historical context. I look at—and I even wrote about it in the book a little bit—every time there is a newer technology that becomes available to the masses that is considered a disruptor (things like Google search or Wikipedia or many of your Web 2.0 tools), the immediate reaction is always one of two camps. It's always a binary: "I'm all in, and it's amazing," or "We have to stop this. It's going to be bad."

Oftentimes the "it's going to be bad" is a ruse for, "I want to maintain dominion and control over learners." And so therefore, I'm going to come up with all the ways of why we can't or shouldn't use it, versus, "Let me look at this from a critical and analytical perspective and say, 'Okay, is this something that has value that I can use to augment what I'm already doing?'"

For example, common resistance to Google search was, "You can't let the kids use Google search because they're going to Google the answers." I remember that, and my response was, "Maybe we should be asking different questions then." Help me understand the value of asking a question that I can pick up my phone and get the answer in less than three seconds.

In that context, for me, it was, "Well, anyone can put anything on the internet. So do we believe everything that's on the internet?" There's a whole new layer of literacy that you've got to incorporate because of that, rather than being resistant to it. Because I always say, if you think some of these technologies are going to come and go, all you have to do is the economics: follow the money. Where is it being invested?

Rena Clark 16:00 We've been focusing a lot on generative AI. In what ways have you seen, or do you see, potential for it to be transformative, specifically generative AI?

Ken Shelton 16:16 I look at generative AI as being transformative, and to me, even the use of that word, transformation, means that it requires a complete shift in many of our learning paradigms. You cannot just recycle what we used to do. So, using generative AI, for example, to generate a whole volume of worksheets, that's not transformative. That's a remix.

Transformative to me would be things like personalizing education, but let me be clear: you've got to know your learners before you can begin any sort of process of personalization. We can use it for idea generation, and I'll share a story around that.

About six months ago, I was working with a school district that wanted to target a specific group of students, whom at first they were identifying as underperforming. I cautioned them on doing that, because I said, "If you've got students that are underperforming, then that means you also have students that are over-performing. So rather than using that term, let's just say that they are not appropriately resourced to realize their full potential."

So now let's look at: What is the pattern? What is the historical pattern? They looked at their test scores. They looked at, for example, performance across many subject matters in school. They got feedback from the teachers. I was asking all these questions because we were building the equivalent of what I call a personalization profile. I don't necessarily need to know a student's name from their educational records, but I need to know all the details: Where do you identify the gaps? What is the catalyst for those gaps? What mechanisms are you using for measuring them? What additional information do we need that cannot be quantified numerically?

Once we had all that, I said, "Okay, so now I'm going to take all of this, and I'm going to go into a large language model." I also wanted to know what type of intervention or resource mechanisms existed at the school site and within the district. I took this whole thing and then I put it into a large language model. My prompt was probably around five paragraphs long.

Ultimately, the whole idea around using the word transformative was, if you've taken everything you know and everything you've done, and you don't have any additional ideas, this is exactly a reason why I would use this, at a minimum, to generate new ideas. That's what I did. We did that whole thing. I even said, "How are you involving the adult caregivers in this process as well?" We have to assume that the adult caregivers are not educators, so I even added into the prompt things like, "What are some activities that could be done in the home that last 10 minutes, that would be run by an adult caregiver who is not an educator, but can also support what you're doing in the classroom, in the school, and in the district?"

That's an example of, again, a transformation. I didn't rely solely on the AI to do it for me. As you notice, I went through a whole list of questions and information gathering before I went to the AI, and then once we got the response, I'm like, "Okay, now, is this something that we can do now, or does this give us a definitive starting point?"
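As an editorial aside, here is a minimal sketch of how a multi-part "personalization profile" prompt like the one described above might be assembled in Python. The profile fields and the call_llm() helper are hypothetical placeholders for illustration, not the actual prompt from the episode or any specific product's API:

```python
from textwrap import dedent

# Hypothetical sketch of assembling a long, context-rich prompt from a
# personalization profile. Note there are no student names, matching the
# point that the profile doesn't need identifying records.
def build_personalization_prompt(profile: dict) -> str:
    return dedent(f"""
        You are helping a school district support students who are not
        appropriately resourced to realize their full potential.

        Identified gaps: {profile['gaps']}
        Catalysts for those gaps: {profile['catalysts']}
        How the gaps are measured: {profile['measures']}
        Context that cannot be quantified numerically: {profile['context']}
        Existing intervention/resource mechanisms: {profile['supports']}

        Generate intervention ideas we have not already tried, plus
        10-minute at-home activities that an adult caregiver who is not
        an educator could run to support the classroom, school, and district.
    """).strip()

profile = {  # invented example values
    "gaps": "reading comprehension, grades 6-8",
    "catalysts": "interrupted schooling; limited home broadband",
    "measures": "state assessments; teacher feedback; course grades",
    "context": "teachers report strong oral storytelling skills",
    "supports": "after-school tutoring; one reading specialist",
}
prompt = build_personalization_prompt(profile)
# response = call_llm(prompt)  # hypothetical helper for your vetted model
print(prompt)
```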

Rena Clark 19:52 I love that example, but I think that really points to the other side of this, where we talk about the potential harms or unintended consequences. Let's say we're in a different situation, where you don't pull in those other pieces or take the time to look at it. I see potential for harm in rolling out AI without really thinking it through or providing support.

I know in my role, I'm supporting high school educators as an instructional facilitator, and it's been interesting with the shift, just what we do now as AI is being released to our high school students and they're getting access. And really, they've already had access in their pockets, maybe not on their computers at school, but many of them have already had access, and teachers are literally asking, "What do I do?" So thinking about that, are there any other potential harms or unintended consequences that we should keep at the forefront as we continue to go forward?

Ken Shelton 21:00 The three biggest harms that I always bring up are number one, over-reliance. I just read recently—and I agree with this—that one of the concerns that needs to be brought up in education is not just the over-reliance, but that the over-reliance can lead to what I would describe as intellectual laziness. It can also lead to—and this is where media literacy is essential here—the propagation and the amplification of disinformation.

And then also, continuing on: bias. It's funny how now people are like, "Wow, there's bias!" Yeah, some of us have been saying it for a couple of years now. Some of the bias components within generative AI systems are obvious.

A funny thing I do with that: whenever I speak at a conference, I go into the vendor hall and do a social experiment. I'm hoping to do this at a couple of upcoming conferences with my co-author, because I always call him afterwards like, "Well, I got a story for you," because I know how deeply bias is baked into most of the systems, pretty much all of the systems. When I go through the vendor hall, first of all, I make a mental note of how many new vendors there are with AI platforms or features or functionality that didn't exist two years ago. I also make a mental note of how many companies did exist but are now amplifying and advertising the AI functionality of what they currently have.

But I go around, and I've done this several times now, where I go to the vendor and I just say, "Hey, I'll make a friendly wager with you. I bet you I can expose and break your system in less than 15 seconds. And if I do it, you have to promise that you will make your program available to an economically under-resourced school for three months for free."

Of course, they don't take me up on the bet, but I pique their interest, and sure enough, so far I'm batting 1.000, because if you understand how this works—and this is where I always talk about how it's important to have a systems thinking approach. If you understand how the systems are designed, the computational component and the human component, for me, it's easy, easy, easy, easy.

The concern that I would have around that is not so much that it's in the systems; it's not having the skill set to decrease the likelihood of the bias ending up in the results. And this is why I'm ardently opposed to any draconian measures of block, deny, and refuse, or some sort of restrictive access: that's a skill set that you need to have. It's no different than Google search: if you understand how search technology works, ideally you can use syntax so that you get the best, most useful, and most credible result, not just on the first page, but within the first few links. But also recognize that the first link on your Google search page is an ad, because Google is an ad company. That's important information to know going in.

Same thing with artificial intelligence. If you know how it works, then you know things like the general rule I share with folks: more context equals better output. If your inputs are very broad and very general, and you basically allow artificial intelligence to fill in more blanks than you do, the bias is going to end up in the results, for sure.
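A small illustrative sketch of that rule; both prompts below are invented examples, contrasted only to show how many blanks each one leaves for the model to fill:

```python
# Illustrative only: two prompts for the same task, showing the rule that
# "more context equals better output." The more blanks you leave, the more
# the model fills them with training-data defaults (which is where bias
# tends to creep in).

vague_prompt = "Write a story about a scientist."

contextual_prompt = (
    "Write a 300-word first-person story for 8th graders about a marine "
    "biologist in Dakar, Senegal, who uses acoustic tags to track fish "
    "migration. End with an open question for class discussion."
)

# With vague_prompt, the model picks the scientist's name, gender, location,
# and field itself, drawing on whatever is most frequent in its training
# data. With contextual_prompt, you have filled in those blanks yourself,
# leaving less room for defaults and bias to end up in the result.
```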

The very last thing for me, another concern, is the lack of transparency, as well as accountability, from the AI companies around their data governance practices. It's funny, but not in a humorous way, how a lot of times when I speak with chief technology officers and they tell me, "Well, we took a contract with this one," I'm like, "Did you check out the terms of service?" Some do, to their credit, and some are like, "Yeah..." I'm like, "Well, there's a reason why it's as long as it is, and there are a lot of things in it."

The most important things you want to look for are: Does your use of that model also train that model? In other words, they don't need to capture my name, but they can capture my usage, and I should know that. In some cases, you might be okay with that, but burying it in the terms of service and not being transparent about it, to me, runs the risk of a more nefarious type of situation. But also: Where are you getting your data sets from? And then what I always ask as well is: What data cleansing and algorithmic bias weighting measures do you have in place?

Winston Benjamin 26:15 I feel like I understand your "why" a little bit, as a person who really doesn't want my students to be left out of the next places where educational opportunities, economic opportunities, just worldwide opportunities are going. I can get behind that. But you subtitled your book Ethics and Equity Have Entered the Chat, right? So I want to ask: Why do you urge us to focus on equity and ethical implementation of AI in schools? You touched on that in the story you described about the steps you went through to design a way of engaging with communities and with student learners. I never thought of the under-performing and over-performing contrast before. Love that. So could you go into a little bit about why you focus on us being ethical as we implement an equitable focus in AI?

Ken Shelton 27:33 So yeah, and thank you for asking that question. Within the context of AI, we actually have in our book three ethical questions, and these were questions that Dee carries with him from his college business ethics classes.

The three questions are the following. Question number one is: Is it against the law? I always add to that, "Is it against the law, or does it violate explicit school or district policy?" And the key word is explicit, because oftentimes it's, "Well, we kind of have it in there." You can say it's implied, then, but I can come up with lots of counter-arguments to that. So, question one: Does it do that?

The second one and the third one are questions I love posing with student focus groups, and this goes into the whole realm of academic integrity.

The second question is: How would I feel if someone did this to me? I've done that with students. For example, there was a group of students at a particular school site where I was doing a training for the staff later in the day, but I really, really, really wanted to meet with a focus group of students first. I said to them, "Hey, I know you all had an assignment to write a five-paragraph summary of The Great Gatsby. Check this out. Y'all have used AI before." And they were like, "No." I'm like, "Nah, trust me, I was a teacher for a long time. I know you went and started playing around with it to see what it could do. Okay, let's just be real here." And of course, then the high school kids start laughing.

I went into one of the large language models, and I said, "Generate a five-paragraph summary of The Great Gatsby with a particular focus on the theme of meritocracy." Then I'm like, "Oh, wait, I've got to add one more thing: Write it at an 11th grade AP or Honors English level," because you've got to tell it that.

I asked the students, "So you all spent a lot of time working on your essay. How would you feel if I were one of your classmates and I did exactly what you just saw me do in less than five minutes, really in less than two minutes, when you spent all that time on yours? How would you feel about that?" And of course, their response is, "Well, it's not fair. That's not cool. You didn't do it yourself. That's cheating. That's plagiarism." I'm like, "So let's have a conversation about that." That's an ethical question, and a conversation we need to have. First of all, I said, "All perspectives of fairness are subjective," and we went into more detail on that. So that was the second question.

The third question is: Am I sacrificing long-term benefits for short-term gains? I asked the students, "What long-term benefit am I sacrificing if I do this?" And I said, "I'm going to add another layer to this. Tell me an instance for you right now in your life where you're okay with someone else speaking for you." And of course, the kids are like, "Never." I'm like, "So then, is it okay for me to let the large language model do my writing for me? That's my voice, right? Is it okay?" And of course, they were like, "No, no, no, no, no."

I said, "Now let's take a look at what it put out." We looked at it. I was, "Would one of you turn this in?" And they were, "Nah." The main thing was no, because, "It doesn't sound me." I said, "Right now." Then I started laughing. The kids were, "What's so funny?" I said, "Well, it's good that I get to do training with your teachers later today, because you all got this assignment, right?"

They said, "Yes." They said, "I got this exact same assignment in the fall of 1986." So this goes back to what we were talking about earlier, with the whole transformation.

What I shared with the teachers was, here's how I would do it, and I've actually designed a number of chatbots around this. Take that same situation, and now I would say, "I want to take your knowledge and your understanding of The Great Gatsby, particularly meritocracy, and I want you to synthesize that into: Have you yourself been in a situation that you knew was a meritocratic structure? How did you feel about that structure? Was it fair? If it wasn't fair, what would have made it fair? Was it equitable? Did you have access to the same resources as others?"

You see, these are the conversations that need to be occurring in our educational spaces. The constant disposition of compliance and control is not sustainable.

That's the academic part. From a higher-level educational part, the ethical component to me is, again, the ways in which the data is governed and compiled. Think about how many artists have their works within a lot of AI systems, taken without their knowledge or their consent. Even when it comes to things like these AI detectors, I tell folks, "Those are unethical." If you use any one of these AI detectors, and I were a student in your class, or a parent or adult caregiver of a child in your class, the very first question I would ask you is, "Show me the knowledge and consent form that I signed, acknowledging that you're going to take my intellectual property, which is copyrighted by default, and put it into that system." You see, this is the whole ethical piece that needs to occur.

The equity part is around the fact that it's not a coincidence, and it's not a surprise, that for all of 2023 I could predict with a high degree of certainty, based on the demographics and social class of the students, which school districts blocked it by default and which did not. All that did, and does, is increase an already absurd digital divide.

Paul Beckermann 34:06 Let's talk about the digital divide. That's been a term or phrase that's been thrown around for a long time. It seems to me that it morphs over time a little bit in what it means. What does it mean now in the age of AI?

Ken Shelton 34:25 For the listeners, a little historical context: as I shared earlier, being around EdTech, I would say from 2010 through, for sure, March of 2020, it was a luxury, and was considered to be a luxury, for a school district to be one-to-one with devices. Back then it was, "Well, we can't afford to go one-to-one. We don't have the infrastructure. It costs too much money to acquire the devices," yada yada yada.

Then, auto-magically, from March of 2020 through September of 2020, a bunch of school districts found extra, extra coins in between the cushions of the sofa, and all of a sudden, now they've got all this technology. I will say that I don't hold it against them, because in some cases, they needed something drastic, a lockdown, to see what those of us who've been advocating for technology access know: one, we weren't making it up; and two, it should be foundational, not a luxury item.

It also exposed a lack of a robust and accessible, economically as well as geographically accessible, broadband infrastructure. On a side note, unfortunately, recently, there was an Appeals Court decision made that pretty much kneecapped net neutrality. It wasn't the right decision for us as consumers, and it certainly wasn't the right decision for education.

Nevertheless, when it comes to the artificial intelligence thing, you have to keep in mind all the following things: One: What is the purpose behind a policy of which the default is to block and ask questions later? Two: What is the plan to provide responsive and relevant professional growth supports to the staff, the students, and the school or school system community, so the adult caregivers as well?

You have to keep in mind that there are several barriers to entry. One, you've got to have the right technology to be able to use it. Two, it's going to come with a cost. And three, there's also the divide that includes usage.

One of the key digital divides I bring up all the time is: access isn't enough. I can have access to a device, but if I'm only using that device to fill out digitized worksheets, then that's still—you have not closed the digital divide. You've provided some degree of a bridge, but that's not closing it. And we all know what happens to bridges as they get older: they can collapse.

The whole idea around artificial intelligence is to recognize that it is here to stay; it is not going to go away. There has to be a multi-tiered and multi-faceted approach to how we use it and how we implement it. Supporting teachers in simply automating worksheets... I would say automation and efficiency run counter to equity and personalization.

Paul Beckermann 37:49 Do you want to say a little more about that?

Ken Shelton 37:53 What's one of the top selling points around AI specifically in education? "It'll save you time."

I don't even get into an argument about time savings. I always say, before we accept that as a selling point that we're going to write a check for, or we're going to buy what they're selling, my first question is: What are you doing with the time you actually have? Let's look at that first. How much time is taken from, especially, classroom educators for clerical things and for things that have nothing to do with the essential functions of what they signed up to do? Let's look at those first.

Because if you rely solely on automation—and again, let's go back to one of the concerns I have—it runs the risk of becoming an over-reliance, for one. And two, let's talk about the bias. I was working with a school system that was looking at acquiring an AI system to automate class enrollment, because that's an arduous task that counselors and assistant principals have to do. They've got to set up a master schedule, make sure classes are balanced, and all those sorts of things.

I said, "Hey, I get it. But before you do that, I want to see the historical data of how you've enrolled students. I want to see which students that are emerging multilingual students, what classes have they been enrolled in? What students, based on race, social class, and language, are disproportionately higher redesignated for diverse learner services, commonly referred to as Special Ed? What has been the distribution of students based on race, ethnicity, language spoken at home, and social class in your higher, rigorous academic programs, AP classes?"

I said, "I'm going to even go one step further: based on the extracurricular courses that are offered, what's been the distribution in those?" When I said that, they were, "Oh." I'm, "So you do realize that all the AI system is going to do is automate that?"

I get the need for the time savings. But let's look at what you're doing with the time you have. And if you want this AI system to help you with that, you need to look at this, because that is highly likely to be in the data sets that the system is built upon. So to expect it to do anything different means that you don't have a systems thinking approach first.

There are ways to mitigate that. There are ways to include in the design of the system, "Hey, there's been a disproportionately higher percentage of, for example, historically marginalized students who have been denied access to higher, rigorous academic programs. So we want the system to decrease that number, so that there is a more equitable distribution across those types of courses and their accessibility."
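As an aside, the "look at your historical data first" audit described above could start as a simple script. A minimal sketch, assuming a hypothetical CSV with invented column names (real student data would require proper privacy safeguards):

```python
import pandas as pd

# Hypothetical audit of historical enrollment before automating it with AI.
# The file name and columns ("demographic_group", "course_level",
# "multilingual") are invented for illustration.
enrollment = pd.read_csv("historical_enrollment.csv")

# What share of each demographic group has been placed in AP/Honors courses?
ap_rates = (
    enrollment
    .assign(in_ap=enrollment["course_level"].eq("AP/Honors"))
    .groupby("demographic_group")["in_ap"]
    .mean()
    .sort_values()
)
# Markedly lower rates for a group are exactly the pattern an AI system
# trained on this data would learn and then automate.
print(ap_rates)

# Where have emerging multilingual students historically been enrolled?
ml_placements = (
    enrollment[enrollment["multilingual"]]
    .groupby("course_level")
    .size()
)
print(ml_placements)
```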

Rena Clark 41:14 That's kind of back to your point: if you fill in the blanks, there's less room for the bias to fill it in for you. Correct. I appreciate that.

You're really great at systems thinking, that higher-level thinking; you think about systems all the time. But I'm thinking of some of our classroom teachers listening to this, going, "Oh, my goodness. What might I do? What might be some steps for me in my classroom, if I'm an ELA teacher or an elementary teacher? What can I do to support that, or to create a more equitable environment while using AI in my classroom?"

Ken Shelton 41:53 For the classroom teachers, for me, the simplest approach I always bring up is: What is the simplest solution to the problem you're trying to solve that you, the human, can't solve yourself? Then, what is your expectation of AI being able to do that for you? Start off there.

I get it. For a lot of teachers, technology can be overwhelming, especially when you consider how many plates a teacher has spinning in the air: I have a lesson plan, I have to grade, I have to make sure the students' needs are met, and I have all these other things that are competing for my time, and this is one thing that might be able to help me regain some of that. My whole thing is, I think the first step is the human aspect. What can it do that I can't do myself?

That's why I shared that example of the school district, but I'll share with you all a quick story about a classroom teacher. I was working with a school, and the teacher was a physics teacher, and one of the challenges, he said, was that he was having trouble helping his students draw a connection between what they're learning in physics and their lived experience, their real life.

I said, "Oh, that's one that we could use AI for." I said, "Tell me about your students. Tell me about students in general, and some specifically. Do they like sports?" "Oh, yeah, students love sports." "What sports specifically do they like?" "Football, basketball, and soccer." I'm, "Okay, great. When they come in, do they talk about pop culture, music, or anything that?" "Yeah, they talk about these things that are out there." "Perfect. So let's start off with a sports example."

We are going to take the standards of your class, your state standards, we're going to attach those to the prompt, because those are in the public domain anyway, so you're not inputting any information that is protected or proprietary. That's part of the ethical component, too.

I helped him; we crafted a prompt, and it was, "I teach 11th and 12th grade physics. I'm having trouble supporting students in drawing a direct connection and correlation between their lived experience and the physics concepts that are in the attached standards." There was more to it, but ultimately, the big part of the prompt was: "Generate an interesting fact and/or hook that incorporates football, basketball, or soccer, with each of the physics standards and the concepts that can be understandable and connectable to a student in the 11th or 12th grade, and generate a new hook every day for the next 30 days."

What it put out was really cool. I said, "Here's what I would do next: I would take that whole list." That's what he did, and he shared with me later how awesome it was. I said, "I'd take that whole list and share maybe five days at a time. I'd say, 'Hey, tell me what you all think about these. If you're not able to connect physics to your lived experience, here are some ideas I got.' Then use that as a thought generator and ask, based on these, what are your thoughts now?" And he said it literally led to all sorts of really cool discussions in class.

That's an example of, "Okay, I've kind of reached a creative rut, and I'm not sure what to do," and so that's when I'm, "Now is when we're going to use AI to fill in that particular gap," and that's what we did. It was awesome. It was great.

I even said what I would do is challenge them. I'd say, "Hey, we're going to cover these principles in physics coming up. I want you to give some thought to how they might play a role in your life now, or in a dream or a vision you have of yourself for the future." Oh, that was fun. I wish I could have been there every day to watch the kids' responses to the ideas and the connections.

That's an example for a classroom teacher: even something as simple as supporting students in connecting what they're learning to their lived experience, which, as I often say in my keynotes, is one of the authentic ways to build ownership of learning in students. How do we remove the question of, "Why do I need to learn this?"
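For reference, here is a rough, hypothetical reconstruction of how that 30-day physics-hooks prompt might be assembled; the exact wording and the call_llm() helper are illustrative, not the actual prompt from the episode:

```python
# Hypothetical reconstruction of the physics-hooks prompt described above.
# The standards file name and call_llm() are illustrative placeholders;
# state standards are in the public domain, so attaching them raises no
# proprietary-data concern (the ethical point made in the episode).
with open("state_physics_standards.txt") as f:
    standards = f.read()

prompt = f"""
I teach 11th and 12th grade physics. I'm having trouble supporting students
in drawing a direct connection between their lived experience and the
physics concepts in the attached standards.

My students follow football, basketball, and soccer, plus pop culture and
music. For each standard, generate an interesting fact and/or hook that
incorporates football, basketball, or soccer and is understandable and
connectable for an 11th or 12th grader. Generate a new hook every day for
the next 30 days.

Standards:
{standards}
"""
# response = call_llm(prompt)  # hypothetical helper for your chosen model
```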

Paul Beckermann 46:39 Well, Rena, what time is it?

Rena Clark 46:40 Tool Time. Sorry, I'm just taking notes, and I just want to finish something.

Paul Beckermann 46:45 Rena is a copious note taker.

Ken Shelton 46:49 We could use AI to take notes for us here.

Rena Clark 46:52 I know. But see, it's different. I'm processing it differently with my own little notebook here. All right, so it's that time to ask, "What's in your toolkit?" It can be all kinds of different things.

Transition Music with Rena's Children 47:04 Check it out. Check it out. Check it out. Check it out. What's in the toolkit, or what's in your toolkit. Check it out.

Rena Clark 47:13 So Winston, what's in your toolkit today?

Winston Benjamin 47:16 I'm going to do a cheat code. I'm going to ask you to listen to this episode and then go back and listen to some of our past episodes about AI, because with this one, you will listen and think more intentionally about your implementation to better serve your students, right? Some of the examples that we're getting here really help you think through the questions that you should ask. So if you're looking to start at one of the stages, start there, but also with these questions. And I think it's worth going back and checking out some other episodes where we help our teachers develop some new ways of implementation.

Rena Clark 47:57 How about you, Paul?

Paul Beckermann 47:58 I'm going to kind of go back to my response to that first quote. I think we need to stay as informed as possible, because we don't want things happening to us in the background that we don't understand. If we understand how the systems work, how they're gathering information, where they're gathering information from, and how they're programmed—algorithmization, is that a word?—to spit out the output, we'll have a better degree of success using them well with our students, in a way that's fair to our students.

There are a lot of great courses out there. If you don't know anything about AI, take one of the beginning courses, at least, and get something. Code.org has a nice series of 30-minute videos. They're very accessible, super easy to understand; you don't have to be a tech wizard to watch them. There are all kinds of good ones out there; you can Google it and you'll find a whole bunch. There's one from MIT and Google; they paired together to do one that's really good. The Google one is great.

Rena Clark 49:01 Yeah, a lot of our teachers have recently taken that one, and we've had lots of positive feedback on it.

Paul Beckermann 49:08 So learn, learn, wherever you're at, and then get out there and play around with a chatbot too, just so you have that firsthand personal experience of what it's doing. Rena, what about you?

Rena Clark 49:22 I still like "more context equals better output." We talk about "trash in, trash out." I like the idea that we're not going to be replaceable, but we should be informed and fill in more than the AI does so that less bias can enter. I just love this conversation, because we're constantly opening—we talked about doors and windows—and adding more perspective, and adding that into our inputs can be helpful; so is teaching our students to do that as well and be critical consumers and thinkers. It's not just for us to do as educators. I know I'm working on this right now: How are we teaching this to our students who are using these tools and then going into college or the workforce, or wherever they'll use them? We need to teach them to be critical consumers and users. As Ken alluded to, it's another AI literacy, just another thing they need to have access to. Ken, if you want to add something to our toolkit, feel free.

Ken Shelton 50:26 I think, for all of us here, it's to recognize that it is an emerging technology, and it is futile to try to stay on the leading edge, and there's no need, to be honest with you.

The analogy I love to use is going to a grocery store. You don't buy everything in the store. You buy what you need based on the reason why you went in there. In some cases, all you need is a snack, and in other cases, you need ingredients because you're going to make a meal. The approach with AI needs to be the same: I don't need to use it for everything, but when I recognize that it can do something for me that I cannot do myself, then that's when you want to go ahead and say, "Okay, I need to determine which AI system or platform does that best, and therefore I will use it for that purpose."

To try to keep up with everything... The treadmill goes from two to four to six to eight to 12. Eventually what will happen? You can't run as fast, and you fall down. I don't need to. I'm going to step off and I'll let it do its thing, and I'm going to take a step back and take a pragmatic look at what's going on, and then when I'm ready to jump back on the treadmill, I will.

Paul Beckermann 51:55 That's good. All right. Time for that one thing.

Transition Music with Rena's Children 52:00 It's time for that one thing, that one thing,

Paul Beckermann 52:13 Winston, one thing time. What do you got?

Winston Benjamin 52:17 There are a lot of things that Ken dropped in this conversation where I was like, "Oh, my." He was like, "You've got to put it on wax." I was like, "Yup, put the information out there, drop gems on them babies," and figure out how to hold those diamonds and those jewels and move forward. I love that.

The thing that I really wanted to take away was: don't mistake a remix for a transformation, right? Just because you asked AI about The Great Gatsby doesn't mean that you actually helped develop students' thinking about The Great Gatsby. So again, it's about making the questions different, so that what students are actually thinking about is different. As a kid who loves Hip Hop: remixes, yeah, but it's still the same track. I really appreciated that part of the conversation.

Paul Beckermann 53:19 All right. Rena, one thing, what do you got?

Rena Clark 53:23 Well, I'm going to remix something. I love this idea: just asking, "Are you appropriately resourced to reach your full potential of implementing AI?" So how are you resourced? Where are you getting your information? How are you trying it on? How are you thinking about it? I just love the conversations you're having. So how are you appropriately resourcing yourself?

Paul Beckermann 53:47 I was stuck on a whole bunch of different things that Ken was saying. Rena, you were writing them down; I wrote down just a couple of cliff notes: that tech should be foundational, not a luxury item. I think that now applies to AI. If we withhold AI from some kids, we are really putting them at a disadvantage, and that's not right. We need to make sure that AI becomes a foundational piece, with guardrails and with the proper learning about it, so we have the perspective and the context. This is the world that our students are entering, and we need to make sure that they're ready for it. So, foundational: really, that's the media literacy piece that they need now with AI. Ken, this is a final opportunity for you. What final thought do you want to leave our listeners with today?

Ken Shelton 54:39 To augment what you all shared, I loved the word you used. It is to recognize that the ways in which we define literacy need to adapt and evolve.

A real quick side note story: I used to always say to my students that when I grew up, literacy was only considered reading and writing. But to my students, and it's especially relevant now, I would say literacy occurs in five dimensions: reading, writing, speaking, listening, and observing. We want to curate and cultivate a skill set across all five of those areas, and AI is included in that context. Think about media, information, AI, all of the above.

For you all, and for the listeners, it is to really start broadening your approach to what literacy means and what it looks like, especially by incorporating those five core areas.

Winston Benjamin 55:42 You have provided us, especially me, with a couple of things. I'm still pondering that under-performing, over-performing thing. I'm still pondering how I flip that to make it beneficial for my students, instead of creating a space where they are forever marginalized.

Additionally, thank you so much for your time and helping us think through the promises and the perils of AI in education. If you have a chance, check the book out. It seems that, as we've discussed, this has been really valuable to us, so check the book out. Ken, thank you so much for spending some time with us and helping our audience think about how to be better for their students.

Ken Shelton 56:32 Thank you.

Rena Clark 56:35 Thanks for listening to Unpacking Education.

Winston Benjamin 56:38 We invite you to visit us at avidopenaccess.org, where you can discover resources to support student agency, equity, and academic tenacity to create a classroom for future-ready learners.

Paul Beckermann 56:52 We'll be back here next Wednesday for a fresh episode of Unpacking Education.

Rena Clark 56:57 And remember, go forth and be awesome.

Winston Benjamin 57:01 Thank you for all you do.

Paul Beckermann 57:02 You make a difference.