Unpacking Education & Tech Talk For Teachers

EDSAFE AI, with Dr. Jordan Mroziak

June 12, 2024 AVID Open Access Season 3 Episode 192

In this episode, Dr. Jordan Mroziak, the AI and Education Project Director at EDSAFE AI and InnovateEDU, joins us to discuss the issues and processes related to safely adopting an AI policy in your school. We discuss relevant topics, including EDSAFE's SAFE Framework, the EDSAFE Resource Library, and the need to have conversations and ask questions relevant to your unique circumstances. Visit AVID Open Access to learn more.




Dr. Jordan Mroziak  0:00  

Every community is different. There is no one-size-fits-all AI policy model that magically makes all problems go away and solves for everything, right? And I think creating spaces and time for this thing are key.


Rena Clark  0:14  

The topic for today's podcast is EDSAFE AI, with Dr. Jordan Mroziak. Unpacking Education is brought to you by avid.org. AVID believes we need to accelerate, not remediate. To learn more about AVID, visit their website at avid.org. Welcome to Unpacking Education, the podcast where we explore current issues and best practices in education. I'm Rena Clark. 


Paul Beckermann  0:44  

I'm Paul Beckermann.


Winston Benjamin  0:45  

And I'm Winston Benjamin. We are educators. 


Paul Beckermann  0:49  

And we're here to share insights and actionable strategies.


Transition Music  0:53  

Education is our passport to the future. 


Rena Clark  0:59  

Today's quote is from "What is the EDSAFE AI SAFE Framework" by EDSAFE AI. "AI can be leveraged to allow teachers more time for direct instruction with their learners using technology as a co-pilot that supports teachers in doing the human, emotional, affective work that is at the heart of the learning process." All right, Paul.


Paul Beckermann  1:26  

There are a lot of ways I could go here, but I feel like what's resonating lately is just the number of teachers that I've talked to who are feeling burnt out. I know it's the end of the school year, and that's especially hard, but their comments seem like more than that. It's like not only is teaching a hard job, but they're feeling the added weight of all these little extra things that have been added onto their plates over the years, and it's sort of a cumulative effect. When it happens, each one seems really small, but then, a small one here and a small one there, and pretty soon it feels like an additional big boulder that's on there. So what I hear in this quote is hope. That there's maybe an out for teachers. Like maybe AI can be that partner to them as they're planning, as they're doing some of their work. And maybe they can offload a little bit of that administrative work, which allows them to get back to the kids. And if that can happen, that is a huge thing.


Rena Clark  2:19  

Okay, it made me think of like the jailbreak, when you put a little dirt in your pants and like shimmy it out.


Winston Benjamin  2:37  

For me, the thing that I'm thinking about is the leverage, is the word leverage, because, like you, Paul, I recognize everybody's doing a lot. But then I think about all the old school cartoons I used to watch to learn about the Roman Empire and how they actually used levers to make things move and shift and make the work easier on them. Yes, there's a lot of things teachers have to do right away, right now. But I think if we figure out how to use the right tools, we'll be able to do a little bit less heavy lifting and more useful things. Like my man, Scrooge McDuck, said, work smarter, not harder. And I always bring that up, ladies and gentlemen. If you haven't figured it out by now, that's an important lesson.


Rena Clark  3:26  

I love that. Well, I am excited for our guest today. So once again, I'd like to welcome Dr. Jordan Mroziak, AI and Education Project Director with EDSAFE AI and InnovateEDU. So hi! Welcome, Jordan.


Dr. Jordan Mroziak  3:44  

Hello, thank you for having me.


Rena Clark  3:47  

Can you just introduce yourself to our listeners and let them know a little bit about you?


Dr. Jordan Mroziak  3:53  

Yeah, of course. So again, pleasure to be with you. Dr. Jordan Mroziak. Just recently, as of probably about six months ago, I had the great fortune of joining the team at InnovateEDU, where I'm the project director for the EDSAFE AI Alliance. I had been doing some work in AI and ethics. In my previous role, I was at Carnegie Mellon University and had the opportunity to use design thinking around what learning might look like through the lens of AI if we included it in the curriculum or otherwise tried to leverage it as a teaching and learning tool. So in the work at EDSAFE, I'm very much interested in how we think deeply around the ethics, centering the human role in teaching and learning, while acknowledging that we have this incredible asset at our disposal around artificial intelligence, and so very much using that SAFE Framework: Safety, Accountability, Fairness, and then Efficacy. I always add the equity piece to it, because that's something that's really significant to the work that I've been doing throughout most of my career. But that acronym is meant to be this sort of linear process, right? So as we think about that, we cannot get to a place where we're thinking about how well it will work if we are not first prioritizing student safety. That's data privacy, right? All of those things that come along with that. So, how do we situate those values as preconditions before we even begin to start thinking about, does this tool work? The work is incredibly interesting, and I find it's incredibly timely at this moment, where we're thinking about the revolutionary possibilities of this technology.


Rena Clark  5:51  

Yeah, I appreciate the timeliness. And for our listeners, what exactly is EDSAFE AI?


Dr. Jordan Mroziak  5:57  

Yeah, so EDSAFE is this sort of robust consortium of individuals that have come together from incredibly diverse backgrounds, but the framework itself is emergent from a meta-analysis of global practices and policies in AI implementations, right? Knowing that many other countries are already doing this work and have been doing this work, how do we learn from some of the best practices in AI policy? And so the EDSAFE Framework is emergent out of that, but EDSAFE itself is a consortium of about 26 or so organizations as part of a steering committee that very much tries to make this work tangible to their own unique constituents. So, you know, that could be the National Parents Union. That might be ISTE, CoSN, Digital Promise. I mean, we have a lot of diverse voices at the table with the knowledge that each one of them brings a very specific set of skills and expertise into the room. And knowing that AI, as a tool, is incredibly pervasive, it's not simply about how teachers will use it, although that is a large part of it. Right? But knowing that it will touch families and caregivers and students and administrators. And so creating this broad cross section of voices in this space to elevate their unique perspectives, but also, again, making sure that we're using this tool and this framework in a way that speaks to their own unique constituencies.


Paul Beckermann  7:39  

You've mentioned the framework a couple times, and I know that EDSAFE AI developed what I believe is called the SAFE Benchmark Framework. Can you give us a little bit of an overview of it, and maybe why it was developed? Why did you choose to go in this direction?


Dr. Jordan Mroziak  7:56  

Yeah, I mean, I think that the Benchmark Framework that we have, which is, you know, at a sort of zoomed-out level, or at that meta-level, right around that acronym framing, again, is much more than that, right? So I think that there's a depth to it that we want to acknowledge, in the ways in which it aligns to safe design principles from SIIA and other organizations, right? It's very much compatible with sort of an existing terrain of recommendations around how AI is designed and implemented. But that Benchmark Framework is meant to set those values as preconditions, not only as words. I am nerdy in many ways, which I think is a good thing. I use that term very affectionately. But one of them is when I get down to definitions, right? And so it's not simply saying "safety." It's not simply saying "accountability." But really, it's trying to provide people the tools to have conversations to make sure that when they say those words, they're meaning the same thing. When we say "accountability," we are sharing a definition around what is accountability? Who is accountable? Under what conditions? When we talk about fairness, we talk about things like making sure that we are mitigating for bias to the best of our ability, right? And so, those benchmarks are meant to be both words, but also the spaces in which we acknowledge that we need to have these generative conversations, and we need to make sure that when you and I say collaboration, we're not just sharing a doc and checking a box. Right? But we are actually engaged in some type of mutual, reciprocal relationship around collaboration. And I think that is very true of this Benchmark Framework, as well.


Paul Beckermann  9:48  

I think that's really great that you're using it as a jumping-off point for those conversations. It's kind of like when I used to teach creative writing. Every word has connotations, and those connotations are different for every person, and unless you're kind of on the same page with that, your definition is going to mean a hundred different things to a hundred different people. So I think that's really wise to go that way.


Winston Benjamin  10:09  

And speaking about definitions meaning 100,000 things for 100,000 different people, I just want to jump in a little bit deeper and try to get people to understand the elements of the framework just a little bit more. As you mentioned, you brilliantly discussed safety and accountability as the first two parts of the framework. What recommendations do you have for people to first understand what those terms mean, and then, how do they enact these conversations?


Dr. Jordan Mroziak  10:39  

Yeah, so, you know, I think that I've been very fortunate, at least in my opinion, to come from a community engagement background. And generally, for me, especially in this work, we're talking about bringing incredibly diverse populations together, not only demographically. I think that also goes for appreciating the fact that someone who sits in an admin suite in a school district is working very differently than a second grader, right? Like, these are diverse populations. They have different expectations. They mean different things, right? We need to scaffold for that understanding. That being said, you know, for me, there is an expression that I'm very fond of that is kind of highly regarded: we move at the speed of trust. And I would say that is not something that is deeply common when we talk about Ed Tech, perhaps, writ large. I don't want to cast aspersions on an industry as a whole. But, you know, when we're talking about this, we have an obligation, I think, to move at a slower pace to make sure that, again, we are sharing definitions. Right? But also, are we building redundancies into the system to be accountable to them? Like, when we say safety, when we talk about safety, what does that mean? Is that just data privacy? And if it's data privacy, who has access to it, right? So I think these questions of process demand a bit of a slower approach, because I don't think that the very normal approach to Ed Tech development, or Silicon Valley's move fast and break things, works well when the things we're moving fast around and breaking are our systems, and the people who are impacted by those systems, right?
So I'm mindful that that SAFE Framework and those words of working through it mean having very meaningful, slow, deliberate conversations to bring people into a space where we are gaining a shared understanding. I use the literacy-to-fluency spectrum: we can get people up to speed with a sense of literacy around what AI is, because it's more than just your ChatGPT use or your generative models. And if we can start to break down those words, develop some core competencies, then we can move into a space where we can be a bit more fluent around the impact of these kinds of things. Where does my data go? Who holds my data in school settings? Does the district retain that data? Does the EdTech developer have access to that data? I don't want to go too far afield from your question, but I'm mindful that these things snowball very quickly, right? And I think that we have a responsibility to at least be accountable to following that as far as we can.


Rena Clark  13:40  

I really appreciate the calibration and deep shared understanding. So, with the Framework, I'm just curious, how might I, as a teacher listening to this, or maybe students use this framework? How could it be beneficial for me as a teacher?


Dr. Jordan Mroziak  13:58  

Yeah, of course. Well, first of all, I would say that on our website, as we've been standing up this work over the past half a year, and as we move forward, we've been very fortunate to be able to grow a bit of a library of resources around policy questions. Maybe many of those are currently geared towards educators or administrators. But we did just very gratefully pull off the first National AI Literacy Day as part of the EDSAFE work, where we teamed up with Common Sense Media, The Tech Interactive, aiEDU, and AI for Education. So we had face-to-face events. That website is also still live at ailiteracyday.org, where you can go for lesson plans and that type of thing. So I want to acknowledge that, you know, we are building out a resource library that hopefully will be of use to people, whether educators or learners. Although that's probably a bad way to put it. Hopefully educators are still learning, right? Educators or students might be a better way to put that. That being said, I think that one of the ways that it can be very tangibly used immediately, that I think is important, is to be deliberate about asking these kinds of questions. When schools are implementing technologies, that is not, for me, simply a top-down model of saying that we're going to do this. I think educators can be very savvy and can be in a place to ask these kinds of very deliberate questions: Who is making this choice? Where's this data going? How are we respecting privacy? Are we adhering to things like COPPA or FERPA with regard to, you know, national policy and federal regulations? I know educators do an incredible amount already. So I don't think we want to load more onto them. But I do think that we are entering a terrain where, for better or worse, the more fluent educators might be in some of these technologies, not just in making them work in a classroom, right? But also in some of the things behind the curtain.
I think systems and education systems will be better off with that learning and that knowledge.


Paul Beckermann  16:26  

I'm kind of curious if you've had any districts engage in this framework that you've kind of had a back-and-forth with, and if so, what kinds of things you're hearing from the end user?


Dr. Jordan Mroziak  16:38  

Yeah, so we currently have just over a handful, with a couple more in the pipeline, of policy labs that we've been working with across the country. And they've been doing incredible work. Actually, just today, I believe it was Santa Ana that announced a publicly available resource that they've been developing. But Santa Ana, El Segundo, Gwinnett County in metro Atlanta, New York City public schools, Cañon City, which is a small rural district in Colorado. We've been working with all these districts on what their problem of practice is with regard to AI. And I think it's important to note that they're all different, as well they should be. Every community is different. There is no one-size-fits-all AI policy model that magically makes all problems go away and solves for everything, right? And so for each one of these areas, you know, sometimes it's been about parent and caregiver education. Sometimes, as in Santa Ana, it was about what a new graduate profile looks like for students entering into a workplace with artificial intelligence so readily available. For New York, I think that it's been a bit multipronged, but definitely an emphasis on pedagogy and teacher input, while being mindful that, again, families have an input, students have an input, admin. So, they've been very receptive to it. And I think that everyone wants help in this moment, right? I don't think anyone is saying, no, keep your resources to yourself, right? We are entering, for better or worse, and I think with all the implications of it, a Wild West territory, where this thing is rapidly evolving. No one has a grasp on it. It is a bit lawless. And it comes with all the biases that were present at that time, too, you know. 
So I'm mindful of the weight of that kind of image in how we are looking at things now, but districts have been really grateful for the opportunity to engage in a conversation with someone, even though we, upfront, are very transparent and say we do not have all the answers. You know, that doesn't exist. And I think that anyone who says they're an AI expert at this time is probably a bit of a liar. Not only because it's moving so fast, but because it's so big, right? There's just so much to it, you know. My specialty is not in the design of these systems; I come from a teaching and learning background. So hopefully a bit of a superpower is being able to break down and demystify it, but I couldn't tell you the nuances of how to construct an LLM, right? So I think that the humility to know that we are all in a space of learning, and to know that people sometimes just want to walk alongside of someone, has been really impactful for these districts and systems that we've been able to, very fortunately, be in community with.


Winston Benjamin  19:54  

I appreciate that you are speaking in the context of being in community because again, I do think it's valuable that we think about the strongest and the slowest members of our community. And I mean speed-wise because I'm definitely not moving very quickly with this AI stuff. But getting back to my question, I just wonder, how can teachers, school districts, parents utilize this information that you're providing to help them think about ways of helping their students who are accessing special needs services, to be able to participate in an AI world? Because, again, they're going to have to compete, as well. So just something that I've been thinking about recently, since many conversations we've been having on our podcast.


Dr. Jordan Mroziak  20:44  

Yeah, I think one of the things that comes up for me is to say that I think every school has a need to create a space for this conversation. And I think creating spaces and time for this thing are key. Now, that is not an easy ask either, because, coming back to what I said earlier, teachers don't have time. They're doing so much already. And so I think that there's an obligation on the part of leadership, on the part of local school boards, on wherever that buck might be stopping at any given time, to make sure that adequate space is given to simply ask questions, right? And that may sound really soft and very much evasive, but I don't think we get anywhere good, or I don't think we get anywhere better than where we are now, which is also a huge thing, if we are not mindful that we need time to ask deep, thorough questions of this thing. I mean, the quote that was read at the top, for me, is really important, because it's mindful that teachers need to be in the loop, right? This thing is not a teacher replacement. It cannot do the core work of being in a classroom with students, because one of the core things that teachers do is build relationships. This thing is not going to build relationships, right? It's not going to build trust. It's not going to build those kinds of deeply human capacities that make good teachers and make good spaces for teaching and learning. So, I think that whenever I hear people talking about the use of AI, I'm mindful that it can be an efficiency, and I hope that that is true, but I think we need to bear in mind the kinds of efficiencies that we are utilizing it for. I would never want an AI to create an IEP for a student. Right? I don't want that to happen. I don't want to input student data into this thing. One, I don't know where it's going. But two, I also want to respect the privacy of that student. Right? 
And so, I think, again, creating these spaces for us to be a bit more mindful, and ask these questions about the technology is really, for me, a precondition before we go anywhere else.


Winston Benjamin  23:03  

No, that answers the question because, again, sometimes we don't think to ask that question. And it's important to take the moment to think about how it benefits not just our Gen Ed students, but all of our students.


Dr. Jordan Mroziak  23:17  

Everyone, right, and if we want this thing to be radically inclusive in a way that is meaningful, we can't just rush into it. I think that we don't get to a place of equity and justice for everyone, unless we designed purposely to get there. Right? Otherwise, we just happen into some lucky circumstances. And so I think that we need to be incredibly cautious around how this system touches all of our learners, not just those who are, you know, your AP coding students who get to have an AI class, but that it is pervasive and everyone should be at the table.


Rena Clark  24:00  

I appreciate that. And I'm just trying to think through it; it's so hard to find time and space. That's so important, and there's just so much that we don't know right now. It's so difficult. Speaking of not knowing, though, I appreciate the resources you're talking about. So, I'm hoping you can share just a little bit more about the resources that are available at EDSAFE. We talked a little bit about how I might use them and how I might access them. So, for our listeners: Why might I go on here? How might I access it? What might I use? 


Dr. Jordan Mroziak  24:37  

Yeah, so there's some really lovely policy documentation. There's a resource library linked at the very top of the page. Going in there, you'll see things like CoSN's checklist for generative AI integration, right? They're a fantastic partner. And again, I'm not trying to say that doing a checklist is the only thing one needs to do. But just as a way of feeding a conversation for a school, a district, a department inside of a school: Are these things in place? Are they readily available? And if they're not, where do we find them? And if they're not readily available, and we don't know where to find them, maybe we take a step back. And that step back is a totally respectable answer. Right? I think that's something that we need to acknowledge. We don't always need to rush forward into this thing simply to keep up. So yeah, there are these kinds of checklists. There's some great documentation from the OET, from the government, around the most recent release on the three kinds of digital divides that I think is really helpful to think about, which, again, are incredibly meaningful to utilizing AI and actualizing it in a way that is equitable. Of course, we have our own sort of resources, as well, that have been published by EDSAFE. We'll be releasing one on use cases, but just generally, sort of a primer on why this thing is necessary and where it comes from. So it very much runs the gamut, depending on the organization. But again, we're incredibly grateful to those groups that are on the steering committee for contributing their wisdom into it and making it so readily available. Because, again, it very much starts with access to these learnings before we get anywhere, or take a step further.


Winston Benjamin  26:38  

I appreciate you discussing those resources in a way that makes me feel like I'm walking away with a tool. It's time for what's in your toolkit! 


Transition Music  26:50  

Check it out, check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? So, what's in the toolkit? Check it out. 


Winston Benjamin  27:01  

It's the next segment in our podcast. So I'm gonna pass it to Rena. What's in your toolkit?


Rena Clark  27:06  

Well, several of the different organizations mentioned in the consortium we happened to have as podcast guests, as well, on Unpacking Education. And some of the topics actually have been around this, so check out some of our other podcasts, and you can hear from them, as well.


Winston Benjamin  27:26  

Paul.


Paul Beckermann  27:27  

I'm going to just give another plug out to the AI Resource Library that you mentioned, Jordan. I popped around in there a little bit on your website, and there's some fantastic stuff in there. You know, everything from how school districts are integrating generative AI into their policies to K-12 generative AI readiness checklists. There's all kinds of good stuff in there. So, you know, if you're looking for resources to drive your own conversations, take a hop in there and maybe save yourself some looking around.


Winston Benjamin  27:54  

And I'm going to throw in, like Rena, a couple of podcasts ago, we had a conversation about assistive technology. And one thing that I really want us to do is check out the Office of Special Education Programs, Myths and Facts Around Assistive Technology. Because again, as we're thinking through how to utilize AI, how do we make sure that all of our students are considered and valued in this conversation? Jordan, would you like to throw something into the toolkit?


Dr. Jordan Mroziak  28:27  

I will throw something into the toolkit. So, I'll precondition by saying, like, I'm not the most fun at parties, because I'm like the AI guy with the wet blanket, right? I'm like, everyone slow down. It also means that generally, by disposition, I'm prone to the pedagogy and the philosophy of things. But if you are not familiar with it, there's some fantastic work: adrienne maree brown and her work in emergent strategy is really beautiful for kind of fostering some of the dispositions around having these conversations. It comes from a bit of the background that I had in community engagement, but I would definitely highly recommend some readings on emergent strategy and holding space. Knowing that these are complex conversations, we are not always going to agree. We don't need to, but we do need to gain a level of comfort knowing that we have to ride in this car together. And we can't always get out, right, because we're on a journey. So I would highly recommend the work of adrienne maree brown and emergent strategy there.


Paul Beckermann  29:36  

All right, it's time for that one thing. 


Transition Music  29:39  

It's time for that one thing. One thing. One thing. Time for that one thing. It's that one thing. 


Paul Beckermann  29:52  

Rena, what do you got?


Rena Clark  29:54  

I feel like a message that has come through very clearly in this conversation is the importance of creating space and time for conversation, and having multiple, different community members and different parties represented at the table, and really digging in deep because this is a complex thing that we haven't really explored before.


Paul Beckermann  30:18  

What do you think, Winston? One thing.


Winston Benjamin  30:22  

It's a statement: be responsive, not reactive. That's one thing, and that's only one. But I think we are moving at the speed of light right now. And I think if we just make more responsive moves, not reactive ones, we might be able to include all students in our conversation.


Paul Beckermann  30:47  

Yeah, mine's kind of a combination of both of yours. You know, we need to take time and ask the questions. There's a great video on YouTube of Ernő Rubik, the inventor of the Rubik's Cube. There's a quote in there. He says, "Sometimes the most important thing is the question. It's not all about the answers. It's the questions." And it's not always going to be the same answer for every school district. So if we ask the right questions, they can lead us to our correct answer. And I think some of the resources that Jordan has pointed to on the website, and some of the things that he's mentioned, can help us move in that direction. To find our own answers, but ask the right questions. Jordan, do you want to add a one thing? What are you thinking about here before we take off today? 


Dr. Jordan Mroziak  31:33  

I think that one observation from this conversation is just, again, not only the importance of remaining curious in this space, but also making sure that we maintain some sense of joy in the conversation, as well, right? This work is not easy. It can be weighty. But I think that it's not my revolution if you can't dance to it, right? Or some paraphrasing of that, right? So find a way to keep the joy in sight.


Rena Clark  32:12  

I appreciate that as joy being one of my core values, and I think it's so important. I know Jordan, we really appreciate you being on the show today talking with us, and really having us thinking about those questions that we can ask. So thank you so much.


Dr. Jordan Mroziak  32:30  

Thank you.


Rena Clark  32:33  

Thanks for listening to Unpacking Education.


Winston Benjamin  32:36  

We invite you to visit us at AvidOpenAccess.org, where you can discover resources to support student agency, equity, and academic tenacity to create a classroom for future-ready learners.


Paul Beckermann  32:51  

We'll be back here next Wednesday for a fresh episode of Unpacking Education. 


Rena Clark  32:55  

And remember, go forth and be awesome.


Winston Benjamin  32:59  

Thank you for all you do. 


Paul Beckermann  33:01  

You make a difference.


Transcribed by https://otter.ai