Unpacking Education & Tech Talk For Teachers

AI and Character

June 11, 2024 AVID Open Access Season 3 Episode 191

In today’s episode, we'll explore ways to ethically and responsibly use AI in the K-12 classroom. Visit AVID Open Access to learn more.




#295 — AI and Character

From AVID Open Access
Podcast: Tech Talk for Teachers
11 Minutes


Paul Beckermann  0:01  

Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann. 

Transition Music  0:06  

Check it out. Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? So, what's in the toolkit? Check it out. 

Paul Beckermann  0:17  

The topic of today's episode is "AI and Character." In the past five episodes, I've been exploring how AI fits into the expanded Six Cs Framework. This includes the original Four Cs of communication, collaboration, critical thinking, and creativity. And it also includes citizenship and character, which were added by Michael Fullan and Geoff Scott in 2014. They call this new model the Six Cs of Deep Learning. Today, I'll be focusing on the last C in the series: Character. 

You know, in many ways, citizenship and character are intertwined. You could argue that it takes someone of high character to be a good citizen. Conversely, a good citizen typically demonstrates the traits of high character. They work hand-in-hand. There are also various definitions describing what people mean by character. Definitions will often contain words like courage, resilience, ethics, leadership, personal and social awareness, responsibility, trustworthiness, fairness, and even citizenship. Most people would agree that all of these descriptors are important. For today's episode, I'm going to focus on a couple of themes that run through these descriptions: ethics and responsibility. I guess you could sum this up by saying, "doing the right thing." As teachers and students use AI in K–12 education spaces, how can they do the right thing? 

Transition Music  1:46  

How do I use it? Integration inspiration. Integration ideas. 

Paul Beckermann  1:57  

Let's start by looking at some considerations for teachers. So what should teachers consider as they plan to use AI for designing student-facing lessons? How can they approach the use of AI both ethically and responsibly? Number one, put student safety first. This means a number of things, including taking time to gain at least a general understanding of the digital tool you'd like to use with students. You don't need to be an expert, but you should have a basic understanding of what the tool can be used for and how it works. In the case of generative AI tools like ChatGPT, that means understanding that responses are being generated from data that's been mined from across the internet. That information may be inaccurate and may contain bias. By recognizing this, you can better assess the limitations, risks, and safety concerns that might arise when using it. For example, if you're using AI as a teacher to plan, this means never entering any personal student data into the system. If you're using it with students, you will want to help them understand that they should evaluate all responses for accuracy and bias before accepting the results as fact. 

Number two, review terms of use. Before having students use any digital tool in your classroom, it's important to review the Terms of Use Agreement and requirements for that piece of software. For example, OpenAI, the company behind ChatGPT, says the following about its own Terms of Use Agreement. ChatGPT is not meant for children under 13. And we require that children ages 13 to 18 obtain parental consent before using ChatGPT. While we have taken measures to limit generations of undesirable content, ChatGPT may produce output that is not appropriate for all audiences or all ages, and educators should be mindful of that while using it with students in classroom contexts. We advise caution with exposure to kids, even those who meet our age requirements. And if you're using ChatGPT in the education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult. So these are important parameters to be aware of, and each tool is slightly different. 

Number three, review school and district policies. Nearly every school has an acceptable use policy that guides what technology can be used with students, as well as any requirements for that use. Be sure to review yours. Also find out if your school has a formal process in place for reviewing new technology. If you're not sure, check with your local technology or building leadership. Number four, respect differing opinions regarding the use of AI. Not everyone feels the same way about using artificial intelligence tools. And that's okay. As you introduce these digital tools into your classroom, you'll need to be both respectful of those differing opinions, and also be prepared to modify student experiences accordingly. This may mean allowing a student to opt out of an AI experience and providing them with an alternative. While this can be inconvenient, it's another part of the character equation, respecting others' points of view. And number five, adopt new technology thoughtfully. Adopting new tech can feel complicated and confusing at times. When in doubt, err on the side of caution and seek out more information. 

All right, next, let's move on to considerations that apply to both teachers and students. So while some considerations are unique to either teachers or students, others apply to all users. One of the broader considerations is understanding the limitations of AI, especially generative AI. By understanding the potential limitations, you and your students can better evaluate when it is appropriate to use artificially intelligent tools and when it is not. Two key factors to be aware of are bias and accuracy in generated information. So let's look at bias. The Oxford English Dictionary defines bias as the tendency to favor or dislike a person or thing, especially as a result of a preconceived opinion, partiality, or prejudice. There's near universal agreement that generative AI can produce biased responses. This makes sense because it's pulling content created by biased human beings. If the source is biased, the outcome may be biased, as well. As we use generative AI, it's important to be aware of this potential bias, and to review any generated outputs with a critical eye. We need to ask ourselves, is this response biased and, if so, what types of bias have crept in? The other limitation that I mentioned is accuracy. It's also important to recognize that generative AI can make mistakes. These mistakes are often called hallucinations. When you think about how generative AI works, this is not surprising. The AI is essentially looking for probable patterns across billions of bits of information. A generative AI chatbot produces its answers by predicting the next most likely word in a series of words and sentences. It's not really thinking. Rather, it's building sentences based on probable patterns. In doing so, it can make mistakes. Therefore, we all need to review the accuracy of the content we receive from an AI query. When appropriate, we should conduct our own follow-up research to verify that the AI-produced content is accurate. 
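The idea of "predicting the next most likely word" can be illustrated with a toy sketch. The bigram model below is a hypothetical, vastly simplified stand-in (real systems like ChatGPT use large neural networks trained on enormous datasets), but it shows the same core mechanic: building a sentence by repeatedly choosing the most probable next word based on observed patterns.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (made up for illustration).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# "Generate" a short sentence by repeatedly predicting the next word.
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Notice that the model never "understands" the sentence; it only reproduces statistical patterns from its source text, which is also why patterns of bias or error in the source can surface in the output.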

Finally, let's look at some considerations for students. You know, perhaps the biggest concern teachers have about generative AI is that students will use it to cheat. In response, some have resorted to using AI detection software to catch students cheating. Honestly, while the use of this type of tool is really tempting, it's seldom effective. Not only are these tools unreliable, but they can return false positive reports that result in a student being accused of cheating when they did not cheat. Rather than focusing on a "gotcha" approach, it's usually better to focus on educating students and setting clear expectations. This is where character education comes into play. By engaging in open, honest dialogue with our students, we can collaboratively agree on how to use AI with academic integrity in our classrooms. When having these conversations with students, here are a few considerations. 

Let's count it. Let's count it. Let's count it down. 

Number one, define academic integrity with your students. Have an honest conversation about what it means to have integrity, in general. And then, specifically, what does it mean to have academic integrity? You might even draft the definition together. Number two, talk about cheating. Cheating isn't a secret. Teachers and students both know it happens. Therefore, facilitate a classroom conversation about why cheating happens, and what can be done to reduce or eliminate that cheating. Students might acknowledge that cheating happens when they're unmotivated, underprepared, or maybe stressed. Identifying the causes can help you discuss with them strategies for addressing the motivations and stressors that lead to cheating. Number three, develop an AI Use Agreement with your students. It can be an incredibly powerful experience to have a classroom of students collaborate on writing guidelines that they will use to regulate their own behavior. What's acceptable? What's not? What happens if somebody violates the classroom expectations? Having these open discussions and coming to a collaborative agreement can lead to student buy-in, better understanding, and more ethical and acceptable use of AI in the classroom.

Number four, use scenarios. As you discuss and work through the nuances of using generative AI in the classroom, consider posing realistic scenarios for your students. Have them work through the nuances of each example and provide a recommended action, as well as a rationale for that plan of action. Not only will you be defining how students would respond in your classroom, but you'll also be setting them up for future successes. In many ways, you're helping them develop their academic character action plan. And number five, model and practice. Once you have developed common expectations and practiced them hypothetically, you can advance this practice to real academic situations. You could choose to first guide the entire class by entering AI prompts for the class to see and respond to together. You could also have groups of students work to critique sample prompts, responses, or practice assignments. The gradual release strategy of "I do, we do, you do" can be a really effective approach. No matter how you structure it, practice will help crystallize expectations and begin to make these behaviors habitual. Overall, discussing character can be complicated. So can beginning the use of generative AI in academic settings. By learning about each, having open conversations, and moving forward with positive intent, you can create a space in your classroom for students to develop strong character while also learning how to work responsibly and ethically in a world of generative AI. 

To learn more about today's topic and explore other free resources, visit AvidOpenAccess.org. Specifically, I encourage you to check out our collection of articles about AI. You can find it by going to AvidOpenAccess.org and searching for "AI in the K–12 Classroom." And of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.