
Unpacking Education & Tech Talk For Teachers
CS Education in the Age of AI, with Perry Shank
As artificial intelligence becomes increasingly embedded into daily life, computer science education must evolve to help students navigate and shape this new terrain. At the heart of this transformation is a powerful idea: keeping humans in the loop.
In this episode, Perry Shank, Senior Director of Research and Development at CodeVA, joins the Unpacking Education team to explore how computer science education is redefining itself in an AI-driven world. He shares his personal journey, touches upon the importance of understanding AI—not just as users but as informed creators and collaborators—and offers powerful insights around integrating AI meaningfully and ethically into the classroom.
Perry emphasizes that while AI tools can be powerful collaborators, they should never replace human reasoning, creativity, or ethical judgment. “We have to make sure students are not just asking for an output and running with it,” Perry says. “They need to balance machine reasoning with their own sense of judgment.”
This episode explores how educators can foster that balance by demystifying how AI works, spotlighting human decision-making in data and design, and engaging students in thoughtful use of these tools. From hands-on activities—like Google's Teachable Machine and micro:bit CreateAI—to building custom Generative Pre-trained Transformers (GPTs) for critical conversation, Perry highlights the importance of curiosity, experience, and reflection. AI literacy must be as foundational as math, and it must center the learner as a thinking, ethical, and empowered human in the loop.
Visit AVID Open Access to learn more.
Perry Shank 0:00 Playing around with what these AI tools are and seeing how they work is really important, but we have to make sure that we're not forgetting also that these tools were created by someone, right? And they're designed in a very specific way, and so understanding the way that it's designed is going to help us learn how to use it better.
Winston Benjamin 0:19 The topic for today's podcast is computer science, or CS, education in the age of AI with Perry Shank. Unpacking Education is brought to you by avid.org.
Winston Benjamin 0:32 AVID believes in seeing the potential of every student. To learn more about AVID, visit their website at avid.org.
Rena Clark 0:42 Welcome to Unpacking Education, the podcast where we explore current issues and best practices in education.
Rena Clark 0:51 I'm Rena Clark.
Paul Beckermann 0:53 I'm Paul Beckermann.
Winston Benjamin 0:54 And I'm Winston Benjamin. We are educators.
Paul Beckermann 0:58 And we're here to share insights and actionable strategies.
Transition Music with Rena's Children 1:02 Education is our passport to the future.
Winston Benjamin 1:07 Our quote for today is from CodeVA's website. It reads: "We prioritize quality computer science education for all students and promote the connections of CS education to economic development". What are y'all thinking, Paul, Rena? How does this quote hit you?
Rena Clark 1:27 It's kind of interesting to think about where CS education is today. So when I first read this quote, I was thinking about how understanding the basics of computer science, and maybe computational thinking, is equivalent to me, growing up in the 80s and 90s (hold on, just go with me here), understanding math. Like, I have to have that basic understanding of math, otherwise it's going to really limit my future and my career options.
And I think with our students now, there really needs to be some basic understanding of those basics of computer design or computational thinking, especially with the world we live in and how it functions and works. That's really essential for them if they want to, you know, really have that opportunity, especially to grow economically.
Paul Beckermann 2:21 And if it is essential, what I'm hanging on is that "for all" piece, right? Computer science education is a career education for every student that we have, because at AVID we talk about opportunity knowledge a lot. If you don't have the opportunity knowledge, that's a door that's not available to you. And we're obliged to make sure that every student has that door in front of them, so that if they choose to, they can walk through that computer science door, and it's an economic opportunity as well as an educational one.
Winston Benjamin 2:52 Man, it sounds like you two are my parents selling me on going to school because they were both immigrants with limited education opportunities. So it was like, "I'm gonna give you the chance, knowing that I can make you better". So I'm hearing the value of parental voice in this conversation. I just want to shout that out as we begin, because I know it's about our kids, but it's also validating their parents. We're so excited to welcome Perry Shank. Perry is the Senior Director of Research and Development at CodeVA, which we just heard from. One of the things we enjoy doing here, Perry, is trying to ground our guests so that our listeners have an understanding of who they're listening to. Do you mind giving us a little story of yourself, your background, and what got you interested in CS and education?
Perry Shank 3:37 Sure. It's nice to be here, by the way; thank you for the invite. Like a lot of people in CS, my pathway to get here has not been a direct route. I started teaching about 25 years ago as a music teacher, teaching middle school and high school band and orchestra. Then I took a break from that and taught art for a while at a K-8 school in Colorado.
After that, I went back to teaching music, but this time elementary, so I've taught K-12 performing arts and visual arts and sprinkled in a lot of technology along the way. About 10 years ago, I was experiencing some difficulty singing. Since I was singing with elementary school kids all day, my voice got fatigued really, really quickly, so I started talking to my district superintendent about other possibilities for me. He said, "You know, you're really good with technology. You do a lot of great stuff in your classroom with tech. I think you'd be a perfect fit for teaching a computer science course".
And so, 10 years ago, that was really the impetus for my switch. I went from teaching elementary music to teaching an AP Computer Science course at a high school, which is a pretty abrupt switch. But luckily, and kind of interestingly, CodeVA was the organization that helped train me up and get me to where I needed to be so that I could teach advanced skills in computer science to my kids. I've always enjoyed being on the edge of things, and I think that's a really exciting place to be. So I've always been drawn to CS because there are always new things to talk about and new things to learn about.
Rena Clark 5:26 I love that you're sharing your pathway. And aren't there a lot of stories, like in World War II, where they took the musicians and they were the code breakers, because they're so good at figuring out patterns? Anyway, there are definitely some connections here with some of those skills.
Perry Shank 5:42 Yeah, for sure. Every time I go to a conference or meet some new people, either they're really shocked about that or they're like, "Oh, yeah, I know a lot of musicians who teach CS," and I think that's true. Yeah.
Rena Clark 5:47 So I kind of want to dig in a little bit more. You have experience with CodeVA, and you're also with CSTA, the Computer Science Teachers Association. We just want to get your feel on how you're seeing the landscape of computer science education shifting as more and more AI tools become accessible. And how is that possibly changing the future of CS?
Perry Shank 6:18 Yeah, that's a great question. One of the things that I've been doing in my work is to find ways to connect computer science to all types of different things, so connecting in an interdisciplinary way, or transdisciplinary way, to things that students are learning in other classes, or maybe experiences that they might have outside of the classroom.
And so it's kind of unavoidable these days that that would also include teaching about AI. I would say the largest thing in CS education that's shifting because of AI is that you can't really ignore AI these days. It's embedded in our news. It's embedded in our lives. It's embedded in our classrooms. And grappling with artificial intelligence tools and having rich discussions about maybe ethics, or the function of a tool and how it's built, all of those things are great learning experiences for CS.
I see and notice that there are a lot of CS teachers who are just embracing this future of bringing AI into our classrooms. I think, overall, they want to know a lot more about AI, and there are a lot of great tools out there now, and a lot of resources for them to experience, that will help deepen their own understanding.
Winston Benjamin 7:48 You shared with us how a lot of teachers are embracing AI. How are national conversations around AI influencing computer science standards and professional development? And I know standards are shifting currently, right? They're in discussions right now to shift some computer science standards.
Perry Shank 8:02 One of the discussions that we've had as a CSTA standards writing team is to think about how AI is embedded in the work of learning around CS and making those connections where it's possible and where it's tangible. Having AI as kind of a separate type of content area that needs to be approached in a silo isn't really the way to move forward and to make sure that what you're learning about connects to the other things that you know.
And so I think it's really interesting how, a couple of years ago, when generative AI made its way onto the main stage, right, with ChatGPT and other tools like that, the interest that teachers had in AI really exploded. As an organization, we started hearing from teachers that they're really interested in knowing how to teach their kids how to use tools like that.
And it's interesting that that's really the first place that they stepped. They wanted to know, "How do I bring this into my classroom?" So we jumped into it, and we started building out some products about teaching, like, "What's in this black box of AI tools, and how are they formed? How do they work? What are their use cases? What are some of the barriers that are presented?"
I feel like a lot of teachers are really interested in learning more about how to engage students in a very impactful way about AI content, beyond just use of AI as a tool, but actually understanding why it gives you the results it gives you. It takes a lot of background knowledge to be able to get to that point, or at least a little bit of background knowledge to kind of dig into it and to understand that this is a classic paradigm of data in and data out, right? Whatever you feed it, and whatever it's being fed, is what shapes its output.
So a lot of the different ways that teachers jump in and start to use the tools in the classroom, start with students exploring a little bit, and I think that's important. Playing around with what these AI tools are and seeing how they work is really important. But we have to make sure that we're not forgetting also that these tools were created by someone, right? And they're designed in a very specific way, and so understanding the way that it's designed is going to help us learn how to use it better.
Paul Beckermann 10:48 So you've talked a lot about how AI is impacting the world around us, as well as CS. Let's focus on CS now. In that context, what would you say are the new competencies or literacies that we should now consider the essentials for computer science, or for learning computer science?
Perry Shank 11:09 Yeah, that's a great question. I'd say first and foremost is understanding how to critically evaluate AI, to figure out its shortfalls, to figure out what it does really, really well, to figure out what the human interaction is within the use of an AI tool. I think we really have to be aware and talk to our students about making sure that they're not just asking for an output and then taking that output and just running with it. They have to think about the balance between their own human reasoning skills and machine reasoning, how they use their own sense of judgment and the way that they make decisions as a human as they use the AI tool.
So I think that this really does highlight the importance of equipping students with the ability to question and critique AI outputs. It's one of the most important skills these days. It's important for our students to know this because they are confronted with AI output every day through social media, through news outlets, and even in their classrooms. Being able to evaluate it and know when they see it, or at least ask questions to determine if what they're looking at is AI or not, I think that's probably the most important thing currently.
Rena Clark 12:35 I mean, I think a lot of our adults need the skills, right? Like we all need to work on those. Absolutely.
Paul Beckermann 12:42 I'm kind of curious, do you see the core competencies and literacies different between AI and CS, or do you see them kind of the same?
Perry Shank 12:53 I know that there are a lot of different positions on this. I really see AI as a subtopic within computer science. I know not everybody sees it that way, but there's so much about being able to use the tool that takes a larger view of the knowledge that is presented by CS. I know that there are a lot of different groups working on AI learning frameworks, and as a domain there is a lot of rich content to learn, but I believe it belongs within that umbrella of CS. Really, though, I don't know if that necessarily matters, because when we're talking about different topics, whether it's data science or computer science or AI, they're all interconnected. They're interdisciplinary, and we actually need to teach students all three of those things and how they connect together.
Winston Benjamin 13:51 I really appreciate that you're connecting the tissues, right? Because without the umbrella, you can't break off into your specific areas. And sometimes you're dealing with individuals who are brand new to AI. Like, a couple of years ago, anytime anyone brought up AI, I would always be like, the Terminator, it's gonna come back to get me, all of those random things. But, like you said, AI is in our world.
So one thing that I'm trying to understand is, because everybody's using it in so many different ways, I don't know which is good, which is clear, which is not clear. Could you discuss some of the promising approaches or tools that you've seen teachers use to effectively engage with AI in their classroom? Not just from a sense of, "Hey, sit and use it," but an actual, "Here's how you engage with it. Here's how you understand the interconnective tissues," as you were just mentioning? Have you seen any approaches that help make sense of it?
Perry Shank 14:50 Yeah, I think one of the foremost tools that I've seen teachers use, and actually enjoy, in a professional development setting, is Google's Teachable Machine. It's really a great tool for understanding the curation that needs to happen with data sets. The tool, Teachable Machine, basically engages with your webcam or your computer's microphone, and the user decides what it is that they want the AI data set to evaluate.
So for example, you could use your webcam to take a picture of a thumbs up and then take another picture of a thumbs down, and you put those in two different data sets. The more pictures you have of your thumb facing up, and of your thumb facing down, the better the tool, after training, is able to use your webcam to evaluate and say that's a thumbs up or that's a thumbs down. The tool is really cool because it shows you the confidence level that the trained model has in what you're offering it through the webcam.
And so I've seen teachers do amazing things with this tool, and also just have this light bulb flash above their head of, "I get why this is tricky. I get why we need to talk about bias in data sets and about the way that data sets are curated in a classroom," so that students know that this is actually a very intentional thing that needs to be focused on, because even if it's a thumbs up, if you turn your thumb slightly to the side, it might think that it's a thumbs down. And so you can move from there into talking about AI being used for some pretty serious things, and then draw those lines for teachers and for students, where they can say that we have to think very critically about our use of AI tools for some of these important matters in society.
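To make that workflow concrete, here is a minimal sketch of the same idea in Python: a handful of labeled examples per class, a small classifier, and a confidence score for a new input. Teachable Machine itself runs a pretrained vision model in the browser, so the synthetic feature vectors and two-class setup below are stand-ins for webcam images, not its actual internals.

```python
# Minimal sketch of the Teachable Machine workflow: collect labeled examples,
# train a small classifier, then report a confidence score for new input.
# (Synthetic feature vectors stand in for webcam images here.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "photo" is a 64-dimensional feature vector.
thumbs_up = rng.normal(loc=1.0, scale=0.5, size=(30, 64))     # class 0 examples
thumbs_down = rng.normal(loc=-1.0, scale=0.5, size=(30, 64))  # class 1 examples

X = np.vstack([thumbs_up, thumbs_down])
y = np.array([0] * 30 + [1] * 30)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new, slightly ambiguous gesture: confidence drifts toward 50/50.
new_sample = rng.normal(loc=0.2, scale=0.5, size=(1, 64))
probs = model.predict_proba(new_sample)[0]
print(f"thumbs up: {probs[0]:.0%}   thumbs down: {probs[1]:.0%}")
```

Skewing the number or variety of examples in either class shifts those percentages, which is exactly the bias-in-data-sets conversation the activity is meant to open.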
Rena Clark 16:57 Something similar I was introduced to, actually at CSTA, was micro:bit CreateAI. You hold a micro:bit, a little tiny computer, a little tiny machine, and you're able to create movement. It reads that through its sensors, and you train it through your movements, and then it will do whatever you want through that movement. So it was cool; you can make a magic wand. It really was like a magic wand: when I do that movement, then it's going to react in a certain way. It really helped me, on a personal level, understand AI better, just that physical connection and the data training. So that was kind of a fun thing for me. It's similar, I think, to Teachable Machine, but with more of a physical computing component. So that was kind of fun.
Perry Shank 17:45 Yeah, I'm a big fan of the micro:bit. And just thinking about all of the different sensors that are on board that little device, right, and how you might use that as data input for your training structure. It's really excellent. I've played around with that tool.
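The same collect-then-classify pattern applies to motion data. Below is a rough conceptual sketch in Python, with made-up accelerometer numbers, of training a "magic wand" gesture by averaging a few recorded sensor windows into templates and labeling new movement by the nearest one. CreateAI's real on-device model is more sophisticated; this only illustrates the shape of the idea.

```python
# Conceptual sketch of gesture training: record short windows of (x, y, z)
# accelerometer readings per gesture, average them into a template, then
# label new movement by the closest template. Values here are simulated.
import numpy as np

rng = np.random.default_rng(1)

def fake_window(base, n=50):
    """Simulate n accelerometer samples around a base motion."""
    return np.array(base) + rng.normal(scale=0.1, size=(n, 3))

# "Training": a few recorded windows per gesture, averaged into a template.
wave_template = np.mean([fake_window([0.8, 0.1, 0.0]) for _ in range(5)], axis=(0, 1))
flick_template = np.mean([fake_window([0.0, 0.9, 0.2]) for _ in range(5)], axis=(0, 1))
templates = {"wand wave": wave_template, "flick": flick_template}

# "Inference": summarize a new window and pick the nearest template.
new_window = fake_window([0.75, 0.15, 0.05]).mean(axis=0)
label = min(templates, key=lambda name: np.linalg.norm(new_window - templates[name]))
print("Detected gesture:", label)  # expected: "wand wave"
```

A smaller or sloppier set of recordings makes the templates less distinct, which is the same data-curation lesson in a physical form.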
Rena Clark 18:04 It's pretty impressive. Yeah. If you want to bring it to life, have something tangible; I'm very kinesthetic, so I need to make that physical connection. That was really helpful for me. And, you know, thinking about our students and AI, lots of times we think, on our own, we're just playing around typing things in. But I'm thinking about how we can prepare students to engage with AI collaboratively, and also, kind of two parts, being collaborative but also being ethical in that problem-solving process.
Perry Shank 18:43 Are you thinking of collaboration in the sense that students together are using AI, or that a student is working with AI? Is it a human-to-human or a human-and-machine collaboration?
Rena Clark 18:57 Both? I mean, you could think about both. Yeah, that's a good wondering.
Winston Benjamin 19:02 See, that's a place we don't usually go, because, you know, it's mostly human to human in my thinking, right? The classroom collaboration. But that's a really good differentiation of who's collaborating and what's collaborating. Thank you for bringing that up.
Perry Shank 19:20 Yeah, you know, I can speak to my own use of AI. Let's start first with the human collaborating with the machine, right? I've created a few GPTs using ChatGPT that act as a critical friend for me in discussions. Usually, when there's a topic that I'm exploring, when there's something that I want to learn about, when I'm trying to either re-conceptualize or create new learning, it's often really a strong approach to have discussions about it and to talk to people about it.
Sometimes those people aren't around, right? Or sometimes I might be embarrassed by what I'm trying to learn, so I'm like, "I don't want to talk to anybody about this". But you know, conversation is such a great learning tool that staging a conversation within certain guardrails is a great way to push your own understanding. Of course, there's a real need to ensure that the information that you're getting in that conversation is correct.
One of the things I'm a really big fan of is, I think it's called a RAG setup. I forget what the actual letters stand for; it's an acronym. But the main idea is that you are curating the data that it's pulling from. So let's say that I wanted to have a discussion about a well-known book, a classical book of some sort. I might find that in an archive somewhere and bring that in. I might get some information from someone's blog. I might bring in audio from a podcast where they're discussing it, or transcripts from that. I might bring in a college-level course syllabus that might have some information in it.
But what I'm doing is taking all of these different things, putting them inside of the model, and saying, "For our conversation, only pull from this data, and let's talk". And so I can ask questions like, "You know, I'm thinking that this character kind of symbolizes this. What are your thoughts on that?" And the way that I've trained that GPT in the past is that I have it ask me questions that make me kind of defend my stance, just like a good friend would in a conversation. I mean, my friends aren't always right when we're having discussions like that. It's about the questions, and it's about kind of moving around the terrain of learning. And that's what can happen in a collaborative discussion with AI.
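RAG stands for retrieval-augmented generation, and the curation step described here boils down to ranking your chosen sources against the question and packing only the most relevant ones into the prompt. Here is a minimal sketch of that retrieval step in Python; the source snippets, the question, and the final send-to-model step are placeholders, not any particular product's API.

```python
# Minimal sketch of the retrieval step in a RAG setup: curate a handful of
# sources, rank them against the question, and pack the best matches into
# the prompt so the model answers only from that material.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = [  # placeholder passages standing in for curated documents
    "Chapter 3 of the novel, from a public-domain archive ...",
    "Transcript of a podcast episode discussing the main character ...",
    "A college syllabus unit on the book's central symbols ...",
]
question = "What might the white whale symbolize?"

# Rank the curated passages by similarity to the question.
vectorizer = TfidfVectorizer().fit(sources + [question])
doc_vectors = vectorizer.transform(sources)
query_vector = vectorizer.transform([question])
ranked = cosine_similarity(query_vector, doc_vectors)[0].argsort()[::-1]

context = "\n\n".join(sources[i] for i in ranked[:2])  # keep top 2 passages only
prompt = (
    "Answer only from the sources below, and ask me one question that makes "
    f"me defend my interpretation.\n\nSOURCES:\n{context}\n\nQUESTION: {question}"
)
print(prompt)  # a call to your chosen model would go here in a real setup
```

The "critical friend" behavior lives entirely in that instruction line; the retrieval step is what keeps the conversation grounded in the data you chose.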
Rena Clark 21:59 Yeah. And now it seems like everybody, you know, we have ChatGPT, Google's Gems, Microsoft's agents, NotebookLM, all these others. It seems like you have that opportunity in a lot of different platforms now.
Paul Beckermann 22:14 In that context, Rena, there are so many things, right? So what advice, Perry, would you give to an educator who is feeling overwhelmed by it all? There's just so much, and it's changing so fast.
Perry Shank 22:26 If a teacher is feeling overwhelmed, and that happens very, very easily; there are times, actually most times, when I feel overwhelmed by the amount of new stuff that happens within this field. The very first thing is to make sure that you set up your own set of trusted resources that you go to to find new information. Maybe your Instagram channel or your Instagram feed, or whatever it's called, isn't the best place to get the correct information. Find people within the community who are asking questions and thinking about the ethical considerations and bias discussions and things like that, and thinking very critically about AI, but not someone who's entirely pessimistic; someone who's also thinking about the future of AI and some of the possibilities that could excite us.
So I'd say one thing is to just make sure that you have resources at your fingertips, or a community to belong to where these discussions are happening. A second thing I would say is that you should be using some type of tool, dipping your toes in and figuring it out, because nothing teaches you like experience. If you're just watching someone, that's a vicarious experience, but it's not going to be the best thing. You're going to come up with your own questions. You're going to learn your own things, your own use cases, by playing around and kind of sandboxing AI within a structured tool paradigm.
And one of the great things about that is that you get to level what that experience is. So if you feel like you can move quickly and deeply, both of those things can work together to really push you forward. But if you're wanting to move a bit more slowly, you can do that too. One of the best perspectives to have, I think, when engaging with AI tools is thinking of them as a puzzle that you're trying to figure out.
Whenever a person comes to me and says, "Will you give me an evaluation of this tool that I've created? Can you test it out and see if you think that it has value in a classroom?", I try to break it, right? I mean, not actually break it, but I try to push it beyond what it's designed to do, right? Because, you know, kids are going to do that. Playing around with it, testing its limits, testing what it can do and what its limitations are, is the best way to build familiarity with AI, and it's also fun, right? Take away the idea that failure is going to limit your experience, because we learn a lot from failure.
Winston Benjamin 25:14 Growth mindset. So, going back to a few years ago when AI popped up, it felt like a free-for-all, the wild, wild west; something new was always coming up. Districts were trying to figure it out. Districts were trying to figure out how to make sure students weren't using it to cheat. There were a lot of integrity issues going on, right? Quote, unquote, "cheat". What does that mean if you're engaging with a thinking tool, right? Now you see the pendulum swinging in the opposite direction, where districts are trying to have more constraints and control over what that engagement looks like.
How would you direct districts in creating and supporting full, meaningful AI integration without compromising the core of computer science foundations? How can districts find that balance where they're allowing exploration, but within those parameters? As you continue to remind us, parameters are valuable.
Perry Shank 26:21 That is not an easy question.
Winston Benjamin 26:24 I, I know that's what I'm trying to.
Perry Shank 26:27 It's a good one to talk about, though. Oh, man, let me see. I think that one of the best things a school district and a school can do is actually train their staff, or send their staff to a training where they can learn new things and then bring them back to the school. And it doesn't matter what the topic is. Having members within the school get training that they can then bring back into the school is the best way for them to learn, right? The best way for teachers to learn is from their peers. And I think that prioritizing building their own human capital is the best way to do it, because they know their context. They know what their state prioritizes. They know what kind of pitfalls and snares might be in the way of a smooth pathway. They know what the context of their school is. They know what the priorities of the school are, and they know the other teachers.
And so they can take learning from an organization or a training workshop or something like that, and bring it into the school district. And then there are people on site who have an increased capacity and knowledge that they can apply to help that district move forward. Unfortunately, it's not as quick as flipping on a light switch, right? Teacher training takes time and it takes money. It takes finding the right people to partner with, and then also deciding what that kind of utopian outcome is. If we would imagine a school in the future where AI is embedded, what does that look like? Maybe some schools choose to think about that as AI not really being involved in a lot of places. Maybe in some ways, AI tools support interpreting data, or analyzing data, and working side by side with a group of teachers to decide what kind of changes would make the learning more impactful for students.
Rena Clark 28:34 All right, so we've gotten kind of deep, so here's kind of a fun way to wrap the conversation. I'm just curious, for you, what are you most excited about with AI right now in education?
Perry Shank 28:50 There are so many things. I'm really interested in exploring what it means to be a human in the loop, and to devise a process for that within an AI tooling structure. I think the first time that I heard about human in the loop, I was already doing it. And I think a lot of people who were embedded in the earlier uses of GPT creation realized that you take the output and you revise your prompt, or you change the way that you supported your question with data, and you ask for something new, right? Once again, using your human reasoning to get a different output.
So I'm really excited about thinking about how learners, both educators and students, can approach the use of AI with this human-in-the-loop perspective, so that we don't lose the human qualities that make learning exciting, right, and that make working with each other beneficial. I know I mentioned earlier partnering with an AI for discussion. But if I was having a thought and wanted a discussion, and I had somebody sitting right beside me, and I could choose to either talk to a person or talk to my AI, I would talk to the person, and that would be an exciting conversation to have. And I think, yeah, I think that's my answer.
Rena Clark 30:20 Okay, well, I think I can make a good T-shirt, y'all, like a "human in the loop" shirt. This could be our T-shirt for this episode. There could be a...
Paul Beckermann 30:29 Nice logo with that, yeah. Maybe.
Rena Clark 30:31 An AI created logo.
Paul Beckermann 30:36 And we could put that AI created logo on our toolkit.
Transition Music with Rena's Children 30:40 Check it out. Check it out. Check it out. What's in the toolkit? Check it out.
Paul Beckermann 30:51 All right. Time for the toolkit, everyone. All right, Rena, what's in your toolkit today?
Rena Clark 30:58 There are so many different tools. But I was just thinking, if we're trying to learn more about AI in general, there's the CSTA resource library. You can go in there. And, as we talked about, there's a lot of stuff from Common Sense Media around AI if you're just starting out. So those are some good places to check out if you're just wanting to know more.
Paul Beckermann 31:21 Winston, what's your toolbox looking like today?
Winston Benjamin 31:26 So I've got a specific thing. I've got to look up what the acronym stands for, but it's a RAG setup for AI. I think that's a great idea, where you bring in the actual parameters, because sometimes it's like, "I don't care where you're getting this, that's great, but I need to engage with this specific thing". Like, if I was doing Moby Dick, I'd use CliffsNotes. But how could I utilize AI to help me in this process by limiting the amount of information it uses? I think that's a really good way to help kids make decisions about what is valuable information, like, how do they determine what that is? So I love that idea.
Paul Beckermann 32:06 You know what, Winston, my tool is going to lie right next to yours in the toolbox, because mine is NotebookLM, which is a great tool for doing just that. In NotebookLM, you upload whatever documents, information, websites, or links to YouTube that you want it to draw from, and it restricts its responses pretty much to those resources. So it does exactly what you're talking about. And it's very accessible, very easy for students to use, and for teachers too. NotebookLM.
Rena Clark 32:35 And it makes the podcast, which is great for us audio learners. It does. And they have a beta version, so when you're driving, you can talk to it, and it will talk back to you.
Paul Beckermann 32:47 And you know what? Next week it'll be something else. It just keeps changing. All right, Perry, you get to add to our toolkit as well. Is there anything we haven't talked about today, or that you haven't mentioned, that you'd like to drop in our listeners' toolkit?
Perry Shank 33:01 Oh, wow, so am I allowed to shamelessly plug any of Code VA's products?
Rena Clark 33:08 We were hoping.
Paul Beckermann 33:10 You would be disappointed if you didn't. Yeah.
Perry Shank 33:12 All right, well, I'm gonna go for it, then. We have two things that I'm really, really excited about. One is called Megabytes. It's a student magazine that has a feature in it called "The Artificial Imposter". In every issue, the artificial imposter tries to trick the viewer into thinking that something's real that's not. So in our most recent issue, there were four pieces of artwork. They all looked the same; they were all horticulture themed, but one of them was created by AI, by DALL-E. And so the job of the viewer is to look at it and say, "I'm thinking that this one is not real because of this". It's great for discussion and for sharing with your family. So that's in the Megabytes magazine.
The second thing in my toolkit is something we are currently developing, and it's going to be launching here in the fall. It's called AI Pathways, and it's a set of curriculum that can be used inside of a high school programming course that actually talks about designing AI structures. So it goes beyond just using AI to designing AI and learning deeply about AI through that type of exploration.
Paul Beckermann 34:40 Those are great tools, and those are, you said, at your CodeVA website. What's the web address for that?
Perry Shank 34:46 That's right, it's Code Virginia, spelled out, dot org: codevirginia.org.
Paul Beckermann 34:56 If someone were looking for those resources that you just mentioned, do you have any advice on how to find them on the website?
Perry Shank 35:03 Yeah, so on our website you can find the curriculum library under the Education tab, where you can find all kinds of interesting things that we serve up as learning experiences for teachers and students. The magazine can be found under the publications. So if you look under Innovation and click on publications, you'll be able to get to the Megabytes student magazine. We just started that up this year. We are getting ready to release our second issue, the summer issue, and we're starting on our third one.
Paul Beckermann 35:38 Awesome.
Rena Clark 35:40 Sorry. I got distracted. I was looking at stuff.
Paul Beckermann 35:43 I know I was looking it up too. I was.
Rena Clark 35:47 Like, oh, seven micro bit and seventh grade math. I might have to do some of those lessons. Sorry, I got very excited.
Paul Beckermann 35:56 Have another subscriber, Perry.
Rena Clark 36:01 All right, so we're gonna jump into our "one thing". Time for that one thing.
Rena Clark 36:15 That one thing. There are lots of things to take away from this conversation, but what's maybe that one thing you're still thinking about, or that's still resonating with you? Who'd like to start us off? All right?
Paul Beckermann 36:26 I'm gonna go with "human in the loop," and I hope we can come up with a logo for that and a T-shirt.
Rena Clark 36:30 I know, I want a shirt.
Paul Beckermann 36:33 It's key, though, right? I mean, the humanity of the whole thing. We need to hold on to that and find the most productive, best place for us in that loop. So, yep, I love that.
Winston Benjamin 36:44 The Star Trek kid in me is just like, yo, we just need everybody to start thinking like Data. Find that human balance, team the human and the android. Yo, Data. I'm sorry, is he the guy with the glasses? No, that's Geordi La Forge. Data is the...
Rena Clark 37:01 Paul? You're killing me.
Winston Benjamin 37:02 I was not gonna stab you, Paul.
Rena Clark 37:06 I think.
Paul Beckermann 37:08 I'm the Star Trek human out of the loop. No, okay, oh, now I got it. I remember now, see, I was the original Star Trek.
Winston Benjamin 37:20 Right? Spock in the house. Um, but for me, no, no. But I think for me, the thing that's really valuable is the idea that there is an input, right, that somebody chose information and made something with that information and to be critical to understand, like, what did they leave out? What did they put in? Why did they leave this out? Whose voice is missing? Because I think kids right now in the world that they're growing up in, they're so critical of what information they're getting, so just helping them figure out how to really interact with that AI thing, so that they know that there's somebody who designed that model. Like, what does it mean? Maybe you can become the person to design a model that kind of fix some of the issues that's been going on, or what you see that's missing. So I think it gives students a kind of bit of power when they realize that it's another person who did this, instead of this overarching thing that they can't think of, through or relate to. So I think gives a little bit of a more back to what you were saying, Paul, the humanity in the loop, right? That there is a human, and you can be the human to interact with that loop. So that's really where I'm thinking about and what I'm taking away from our conversation.
Rena Clark 38:35 And I think that directly connects to my one thing, which is "nothing teaches you like experience". You need to actually experience it by trying it out and doing it yourself. And how about you, Perry? One last thing?
Perry Shank 38:52 Yeah, I'm gonna go back to something that was mentioned really early on, which is that what we're talking about is literacy for all students and teachers. So, thinking about how we might make structures and learning opportunities available for all students around developing AI literacy in some way, as a first step, and remembering in that first step that it's not just about using tools, any more than it is about using an iPad or a computer, right? We're talking about engaging with AI in ways that deepen students' critical thinking about the world, that involve designing in ethical ways, and that stay grounded in those human values.
Winston Benjamin 39:41 I appreciate that. So one of the things that we want to say to you all, everyone that's listening: continue to push forward. I know that our world sometimes feels like it's going faster than what we are able to do, but our parents started with Encyclopedia Britannica, then we went to online computers, we got Google, and now it's a little faster. Technology continues to grow, and all it is is a tool. So, as we continue, let's just realize that we don't need to remove the human from the AI; maybe we can mix it together and create our own Data or Geordi La Forge. Star Trek reference, let's go. So thank you all for listening to us today; we really appreciate you.
Rena Clark 40:23 Thanks for listening to Unpacking Education.
Winston Benjamin 40:26 We invite you to visit us at avidopenaccess.org where you can discover resources to support student agency and academic tenacity to create a classroom for future-ready learners.
Paul Beckermann 40:39 We'll be back here next Wednesday for a fresh episode of Unpacking Education.
Rena Clark 40:44 And remember, "Go forth and be awesome".
Winston Benjamin 40:48 Thank you for all you do.
Paul Beckermann 40:50 You make a difference.