Unpacking Education & Tech Talk For Teachers

Blueprint for an AI Bill of Rights

July 23, 2024 AVID Open Access Season 4 Episode 7

Explore the Blueprint for an AI Bill of Rights, a document created by the White House Office of Science and Technology Policy that can help guide schools and society at large in the safe and equitable adoption and integration of artificial intelligence. Visit AVID Open Access to learn more.


#307 – Blueprint for an AI Bill of Rights

10 min


Paul Beckermann  0:01  

Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann.


Student  0:06  

Check it out. Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? What's in the toolkit? Check it out. 


Paul Beckermann  0:16  

The topic of today's episode is the Blueprint for an AI Bill of Rights. As with any technology, artificial intelligence offers some incredible possibilities, as well as some potential concerns. As educators, it's our responsibility to always work to maximize those positives while minimizing the dangers. We all know this. In the context of technology, this means carefully vetting the technology that makes its way into our schools and our classrooms. It means monitoring terms of use agreements, including how personal data is collected and shared. It means crafting policies that protect students and staff, and it means making sure that these policies and protections apply evenly across our school populations. In an effort to provide some guidance in this area, the White House Office of Science and Technology Policy has created a document called the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. It's not really a formal policy, but rather a thoughtful guiding document that can help schools and society at large in the safe and equitable adoption and integration of artificial intelligence. The authors specifically state that this document is "intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems." The document was created after a year of comprehensive information gathering from a wide range of sources, including experts, leaders, and widespread input from the general public. In the blueprint, you'll find a set of five principles and associated practices to help guide the design, use, and deployment of those automated systems to protect the rights of the American public in the age of artificial intelligence. Here's an overview of the five principles that are outlined in the blueprint. 


Student  2:16  

Let's count it. Let's count it. Let's count it down. 


Paul Beckermann  2:19  

Number one is safe and effective systems. The Blueprint states, "You should be protected from unsafe or ineffective systems." This is essentially reinforcing the idea that we should put safety first. This includes carefully studying and testing new software before we deploy it. After that, if the program meets our expectations, we can adopt it. If not, we should be prepared to move on. We must also remember that software changes over time, so we should continue monitoring even after we've implemented it. 


Number two is algorithmic discrimination protections. An algorithm is the set of step-by-step instructions behind a computer program that directs how it acts. For the second principle, the blueprint reads, "You should not face discrimination by algorithms and systems should be used and designed in an equitable way." In other words, we need to make sure that the automated systems we implement and use don't contribute to any discriminatory practices, whether they're intentional or not. The programs need to be written to promote equity. So this pertains to the system itself and its coding, but it also pertains to how the system is used; even well-designed systems can be misused. 


Number three is data privacy. In the area of data privacy, the blueprint states that users should "be protected from abusive data practices via built-in protections and you should have agency over how data about you is used." That means on the administrative end, we need to pay careful attention to the terms of use and make sure that only necessary data is collected and only necessary permissions are granted. It also means that user behaviors should not be subject to what the document refers to as "unchecked surveillance." This is especially true when systems are automated and students are judged or evaluated based on algorithms alone. An example of this might be software designed to detect cheating and plagiarism. While these tools can be informative and really helpful in teaching students how to be better digital citizens, we need to remember that these programs make mistakes and can return false positive reports. This is especially true of AI detection tools, which seldom keep up with the development of AI in general. They're largely ineffective and can lead to false accusations of cheating. As educators, we need to be very careful about how these tools are used with our students, if they should even be used at all. 


Number four, notice and explanation. This principle essentially says that "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you." The communication of terms of use should be clear and written in plain, understandable language, not dense legalese that only a lawyer can understand. Some questions that users should have answered include: How and when will the system be used? How will it be used to make decisions that impact me? Some language in the guidance attached to this principle explains this further, and I like how it's stated. It reads, "Automated systems now determine opportunities, from employment to credit, and directly shape the American public's experiences, from the courtroom to online classrooms, in ways that profoundly impact people's lives. But this expansive impact is not always visible." That last part is really important, because we don't always know what the algorithms are doing under the hood. The principle is essentially saying that the intent and use of any software, and the data it collects, should be clear and transparent. No hidden agendas should be involved. Nobody should be taken by surprise. 


And number five, human alternatives, consideration, and fallback. The fifth and final principle states, "You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter." Here's a section of the policy that I found particularly helpful in understanding this point. It reads, "There are many reasons people may prefer not to use an automated system. The system can be flawed and can lead to unintended outcomes. It may reinforce bias or be inaccessible. It may simply be inconvenient or unavailable. Or it may replace a paper or manual process to which people have grown accustomed. Yet members of the public are often presented with no alternative or are forced to endure a cumbersome process to reach a human decision maker once they decide that they no longer want to deal exclusively with the automated system or be impacted by its results." In other words, we need to make sure that the technology isn't limiting some people's access and success, but rather improving it. I think of my own kids as they have graduated and learned to navigate adulthood, and I've watched the experiences they've had navigating the adult world, from insurance to getting a driver's license to buying a house. A lot of these processes now require access to the internet or a smartphone or a credit card. Even getting into some concerts requires a smartphone, because the tickets are electronic and printed tickets aren't offered. Are people being limited in what services and opportunities they can access because of technology? Are we setting up any barriers like this in our schools? While it may take some work to set up human redundancies in these situations, they are important to making sure all of our students have an equal opportunity for success. We don't want to add barriers. We want to break them down. 
So those are the five principles, and they're good reminders for us when we implement new technology in our classrooms. In addition to the five principles listed in the Blueprint for an AI Bill of Rights, the authors also included a section titled From Principles to Practice. This includes more context, details, and examples, which can help bring the blueprint to life and make it more understandable. Essentially, it breaks down why each principle is important, what expectations we should be able to have about automated systems, and what the principles look like in practice. 


Student  8:43  

How do I use this? Integration inspiration, integration ideas! 


Paul Beckermann  8:48  

Now, while not designed specifically for education, both the principles and the accompanying documents can help guide us as we integrate more and more technology into our schools and classrooms. It helps to remind me that every student in my classroom should have an equal opportunity for success, technology or not. Bias or lack of accessibility should never hinder one student's opportunity for success over another's. Technology should not provide biased data that can be used to inaccurately shape a student's learning experiences or chances of success. We not only need to keep our students safe, but we need to empower them all and make sure that the systems and the technologies that are in place have a positive impact on all of our students. To take things one level higher, if you're involved in writing school policy or writing your own classroom guidelines, I highly recommend you consider this guidance as well. In fact, if you have time, I encourage you to skim the actual document. It's pretty short, really, and easy to review. You can find it on the official U.S. White House website, and we'll put a link to it in our show notes as well, for easy access. Or you can go online and Google Blueprint for an AI Bill of Rights. Look for the option from the U.S. White House. Just like our constitutional Bill of Rights helps to protect our freedoms as U.S. citizens, this Blueprint for an AI Bill of Rights can protect our students as we wade deeper into a world permeated with artificial intelligence. 


To learn more about today's topic and explore other free resources, visit AVIDopenaccess.org. Specifically, I encourage you to check out the collection called AI in the K–12 Classroom. And of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we are joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care and thanks for all you do. You make a difference.


Transcribed by https://otter.ai