Unpacking Education & Tech Talk For Teachers

Protecting Yourself From Misinformation

AVID Open Access Season 3 Episode 175


In today’s episode, we'll explore six strategies you can use to protect yourself, and your students, from misinformation. Visit AVID Open Access to learn more.


#279 — Protecting Yourself From Misinformation

12 min
AVID Open Access


Speakers

Paul (98%), Transition (2%)



Paul Beckermann  0:01  

Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann. 


Transition Music  0:06  

Check it out. Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? So, what's in the toolkit? Check it out. 


Paul Beckermann  0:17  

The topic of today's episode is Protecting Yourself from Misinformation. In the past couple of episodes, I've shared the basic principles of misinformation and disinformation campaigns. Today, I want to take this topic to the action step level. Essentially, how can we minimize the impact of misinformation and disinformation on our lives? Let's take a look at six specific action steps we can take to protect ourselves from misinformation. 


Transition Music  0:46  

Here are your six, here are your six, here are your six tips. 


Paul Beckermann  0:52  

Number one, understand the basics. It's probably not surprising that awareness is the first step to protecting ourselves. We need to know what both misinformation and disinformation are so we know what we're looking for and can recognize them when we see them. We also need to understand how these falsehoods can become a problem and what threats they may present to us, both individually and collectively as a society. This was the topic of the first episode in the series, and if you missed it, I encourage you to go back and check it out. It's called Misinformation and Disinformation: A Growing Problem. 


Number two, understand disinformation strategies. This action item also involves awareness. If we're aware of how bad actors are trying to draw us in and manipulate us with disinformation, we can be on the lookout for their potential traps. If we don't know what to look for, it's awfully hard to recognize it when we see it, and we're much more likely to get fooled. I addressed this topic in the last episode. Again, if you missed it, I recommend you go back and give it a listen. It's called Identifying Disinformation Strategies. And before we move on to number three, I do want to let you know about a resource that can be really helpful in learning about disinformation strategies. It's a free, interactive learning experience called Bad News from the University of Cambridge Social Decision-Making Lab. This game-based activity encourages you to take on the role of a disinformation agent in order to raise awareness of their tactics. Your challenge is to mislead as many people as possible as you work through six methods of pushing disinformation: impersonation, emotion, polarization, conspiracy, discredit, and trolling. The idea is not to train you to be a disinformation agent; rather, by going through the experience, you'll gain an understanding of what disinformation agents are trying to do to you and become a savvier consumer of information. To show your growth, the Bad News game even gives you a pre- and post-test. Although I went into the game with a good deal of background knowledge already, I found that I got better at identifying misinformation because of the experience. This could be a really good activity for both you and your students, especially those in the upper grades. 


Number three, be aware of what tech companies are doing and not doing. Yes, one more awareness step. It's important to know what the large tech companies are doing to protect you so you don't overestimate the amount of protection they're providing. Since the launch of ChatGPT brought this topic more prominently to the public's attention, and with the presidential elections on the horizon, U.S. government officials have been putting a lot of pressure on major tech and social media companies to address the issue of disinformation. Each company has taken a slightly different approach. Forbes published an article in January of 2024 that outlines a few of them. Here are a few of the highlights. OpenAI, the owner of ChatGPT, is probably doing the most to prevent disinformation. It says it will ban the use of its tools for political campaigning and lobbying and will stop applications that might deter people from voting. They'll also be prohibiting the impersonation of candidates, officials, and governments, and they hope to introduce an authentication tool, though it's unclear exactly what that will look like. Meta, the parent company of Facebook, Instagram, and WhatsApp, is requiring labels on posts from state-controlled media and blocking any ads by those sources that target people in the U.S. They're also barring new political ads in the final week of the American election campaign, and they're requiring disclosure if AI is used to create or alter election-related content. Google is requiring sources to disclose if content has been generated or altered with AI. They're also restricting their AI chatbot, Bard, which is now called Gemini, from answering certain election-related queries. YouTube will require labels on AI-generated content. And X, formerly known as Twitter, is relying on crowdsourcing to fact-check information. Essentially, it's leaving users to police themselves on the platform. While these labeling practices and election-related restrictions will be helpful, they likely won't completely stop those who are serious about spreading disinformation. We shouldn't rely on these companies to fully protect us with their self-policing policies. We need to own this responsibility and consume the content on these sites with caution and a critical eye. 


Number four, prebunking. Prebunking is the act of warning people ahead of time about specific falsehoods they're likely to encounter and explaining why a source might spread that misinformation. Prebunking is based on something called the Inoculation Theory. The idea is that you can inoculate, or protect, people against misinformation by warning them that it's coming. National Public Radio (NPR) explains the Inoculation Theory by saying that if you learn about potential misinformation or an attempt to manipulate you ahead of time, you can develop mental armor, or mental defenses, against that manipulation attempt. The Bad News game I mentioned in point number two was developed as a way of inoculating people against the harms of misinformation. NPR explains that the idea is to show people the tactics and tropes of misleading information before they encounter it in the wild, so they're better equipped to recognize and resist it. Prebunking and inoculation attempt to counter what is called the Misinformation Effect, a theory developed by psychologist and professor Elizabeth Loftus. Her research found that people's memories don't always work perfectly and that they can be reprogrammed with post-event misinformation or disinformation. Her experiments have shown that the introduction of alternative views or fictional details, even after the fact, tends to change post-event memories, often without the person even being aware it's happening. Sometimes this happens innocently through unintentional misinformation, but it can also be done intentionally as targeted disinformation. For example, something might happen that damages a person's status or reputation. In response, that person might start spreading alternate versions of what happened to confuse the truth. As people hear these alternate versions, the truth becomes muddy, and people's memories actually become reprogrammed to the point where they honestly believe the new version instead of their original memory. This is most likely to happen when the new version aligns with their desires or beliefs. Instead of believing a reality that conflicts with their pre-existing beliefs, they end up believing a falsehood because it's more comfortable and fits into how they want to see the world. Loftus suggests that doctored photos, fake news stories, and repeated lies can all confuse a person's memory of what really happened or what was said. Prebunking is an effort to head off this misinformation effect before it begins. Of course, prebunking isn't always easy to do, since it depends on predicting ahead of time what misinformation might be shared. 


Number five, follow a diversity of news sources. This action step is good practice in general, but it can be especially effective in preventing people from falling for misinformation or fake news. It's based on the understanding that the more we know, the less likely we are to be fooled. By reading a variety of news sources, we'll also be more in tune with the diverse opinions that develop around controversial issues. That awareness can make it easier to identify an extreme perspective that's being shared through misinformation. As is often the case with fighting misinformation, knowledge is power. If we don't listen to diverse opinions and get our news from a variety of places, we're likely to find ourselves living in an echo chamber, where we hear the same opinions repeated over and over until we believe they represent the unanimous perspective. This is especially true on social media, where we might only be friends with or followers of those who share similar points of view. And even when we do have friends with differing perspectives, if we don't engage with them frequently, the algorithms used by social media platforms will begin to filter their posts out of our feeds and show us more of the types of posts we have been engaging with. This phenomenon is known as the "filter bubble." We get trapped in a bubble of similar perspectives because opposing viewpoints are filtered out.
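
To make the filter-bubble idea concrete, here is a minimal, hypothetical sketch of engagement-based feed ranking. It is not the actual algorithm of any real platform; the post topics, engagement history, and ranking rule are invented for illustration, but it shows how ranking by past engagement alone gradually pushes unfamiliar viewpoints out of view.

```python
# Hypothetical illustration of a "filter bubble": posts are ranked purely by
# how often the user engaged with that topic before. This is NOT any real
# platform's algorithm; all names and data below are invented for the example.

from collections import defaultdict

def rank_feed(posts, engagement_history):
    """Order posts so that topics the user engaged with most come first."""
    # Count how often the user engaged with each topic in the past.
    topic_weight = defaultdict(int)
    for topic in engagement_history:
        topic_weight[topic] += 1

    # Score each post by the user's past engagement with its topic.
    return sorted(posts, key=lambda post: topic_weight[post["topic"]], reverse=True)

# A user who mostly engaged with "viewpoint_a" sees "viewpoint_b" pushed down;
# if those posts are never engaged with, the gap only widens over time.
history = ["viewpoint_a", "viewpoint_a", "viewpoint_a", "viewpoint_b"]
feed = [
    {"id": 1, "topic": "viewpoint_a"},
    {"id": 2, "topic": "viewpoint_b"},
    {"id": 3, "topic": "viewpoint_a"},
]
for post in rank_feed(feed, history):
    print(post["id"], post["topic"])
```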


And number six, verify questionable content. If something seems a little off, it very well may be. If something seems extreme, you should attempt to verify it before simply believing the message as it is shared. Central Washington University has created an excellent Misinformation and Fake News Guide to help people discern the validity of news sources. It provides a number of case studies as well as some fake news practice exercises. Throughout the resource, it uses a consistent four-point process that you can use to verify content. This process has been borrowed from Michael Caulfield's Web Literacy for Student Fact-Checkers. Here are the four steps. 


Transition Music  10:36  

Let's count it, let's count it, let's count it down. 


Paul Beckermann  10:39  

The first step is to check previous work. See if anybody else has already fact-checked the item in question. Some good options to review are FactCheck.org, PolitiFact, The Washington Post Fact Checker, and Snopes. For images, you might want to conduct a reverse image search. The next step is to go upstream. This means finding the original source of the information. Most web content has been repurposed, and finding the origin of the information may help you discern its validity and the reason it was originally shared. Next, read laterally. This one's really important. Once you find the original source of the information, find out what other people are saying about that source. It could be a publication in general, an organization, or a specific author. Also, see if you can find other trusted resources that are sharing that same content or type of message. Caulfield says, "The truth is in the network." And then finally, circle back. If things get tangled and confused, start the process over using what you've learned along the way. This newly formed perspective may lead you to better success the second time around. 

Spreaders of misinformation will keep getting better, and new technology will come along and help some spread it faster and further. Generative AI tools like ChatGPT are just the latest example. It's up to us to make sure that we are media literate enough to protect ourselves from as much misinformation as possible. It's also up to us to help our students develop these skills. Using these six strategies probably won't protect you from misinformation 100% of the time, but they can go a long way toward helping you identify much of it. 


To learn more about today's topic and explore other free resources, visit AvidOpenAccess.org. And, of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.


Transcribed by https://otter.ai