Unpacking Education & Tech Talk For Teachers

Misinformation and Disinformation: A Growing Problem

April 02, 2024 AVID Open Access Season 3 Episode 171

In today’s episode, we'll explore the basics of misinformation and disinformation and explore how these are both becoming growing problems. Visit AVID Open Access to learn more.




#275 — Misinformation and Disinformation: A Growing Problem

Keywords
misinformation, disinformation, ai, content, spread, people, social media, article, information, share, misleading, true, satire, sites, typically, generated, news, american psychological association, problem, post

Speakers
Paul (97%), Transition (3%)


Paul Beckermann  0:01  

Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann. 

Transition Music  0:05  

Check it out. Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? So, what's in the toolkit? Check it out. 

Paul Beckermann  0:17  

The topic of today's episode is Misinformation and Disinformation: A Growing Problem. The last two weeks on Tech Talk for Teachers, we've been exploring deepfakes and ways that we can develop effective media literacy skills to protect ourselves from being misled. Today, I'm going to continue with the media literacy theme, and dig into the topic of misinformation and disinformation. Let's start with some definitions. Misinformation. Misinformation is information that is factually wrong, but is presented as being true. It's not always intentional, however, and people who share misinformation often don't realize that the information is not true. Still, it's harmful because false information is being spread. And that information can end up misleading people. 

Disinformation. Disinformation is both the same and different. It's the same because it involves the distribution of factually untrue information. However, disinformation is different because it involves the deliberate sharing of knowingly false information with the intent to deceive someone else. Essentially, misinformation becomes disinformation when it's being spread on purpose with the intent to mislead. Both of these problems have gotten a lot worse in recent years because of technology, including things like artificial intelligence, social media, and deepfakes. These tools have made it much easier to create and share fake and misleading content. Fortunately, people are starting to notice, and news outlets are sounding the alarm. NewsGuard, a website that tracks AI-enabled misinformation, reports that as of February 1, 2024, it had identified 676 unreliable AI-generated news websites. These are sites that have little to no human oversight and contain mostly AI-generated content. In other words, there's likely no one monitoring the quality or truthfulness of the content being posted. The goal of these sites is really to generate clickbait, getting people to click on their content in order to earn advertising revenue. As an example of the type of content being created, NewsGuard shared that one of the sites used AI to rewrite a piece of satire about the supposed suicide of Netanyahu's psychiatrist, a person who doesn't actually exist. As satire, the content's not true, and it's not meant to be taken as truth. However, a secondary site took that satirical content and turned it into a news story, presenting it as fact. That story quickly spread as misinformation on an Iranian broadcast channel, and then it was spread further by users of TikTok, Reddit, and Instagram. 

MSNBC published an article in December of 2023 titled AI-Generated Weapons of Mass Misinformation Have Arrived. That title really resonated with me. Weapons of mass misinformation. It really calls attention to the scale of the problem. The article cites a NewsGuard study, which discovered a network of over a dozen TikTok accounts that used AI text-to-speech software to spread political and health misinformation in videos. Those videos have been viewed over 300 million times. The Washington Post also uses cautionary language, calling the use of AI to generate fake news a misinformation superspreader. They point out that this AI-generated fake news is often about elections, wars, and natural disasters, and that the publication of this type of content is on the rise. In fact, they report that websites hosting AI-created false articles increased by more than 1,000% in the last half of 2023. To make matters worse, these sites are typically given generic names that sound legitimate, titles like Daily Time Update or Ireland Top News. Some of these sites intentionally mix in human-generated articles to give them more credibility and make it less obvious that the majority of their content is AI-generated. So why do people spread misinformation and disinformation? Well, let's look at each of these types of false information separately, since they're shared for different reasons. 

Transition Music  4:48  

Here's the, here's the, here's the tool for today. Here's the tool for today. 

Paul Beckermann  4:54  

Let's start with disinformation. The two most common reasons people spread disinformation are politics and money. As for politics, propaganda works. Whether the information being spread is true or not, political operatives have learned that they can influence people's opinions and voting behavior through disinformation campaigns. Political disinformation is all about swaying public opinion, which can translate into votes and, ultimately, power. The second biggest reason for disinformation, money, comes down to clicks. The more clicks a website or social media post can draw to an advertiser's product, the more money that content creator makes. The practice of posting sensational content and eye-catching headlines that attract clicks is called clickbait. Once again, it doesn't matter if the content's accurate and true. All that matters is that it attracts clicks. It's all about the money. Clickbait featuring fake news isn't new, but generative AI tools like ChatGPT have made it infinitely easier to produce large amounts of fictitious clickbait content. What used to take an army of human writers now just takes a well-crafted prompt and an AI chatbot. So let's look at misinformation next. 

As I mentioned in the definitions at the start of the episode, misinformation is a bit different. Misinformation is often spread by the victims rather than the perpetrators. I call them victims because they're being targeted. They're getting lured in and tricked into believing something to be true. And unfortunately, many of them will pass along that unknowingly fallacious content to others. Studies have shown that some people are more prone to this than others. The American Psychological Association, or APA, points out that people are much more likely to share content when they engage with misinformation that aligns with their personal identity, matches their social norms, and elicits a strong emotional response. Specifically, algorithms that track online user behavior have shown that content causing anger or outrage typically gets the strongest reaction. These reactions can quickly translate to impulsive actions, especially on social media, where it's so easy to share misinformation with others by liking or sharing a post. With one click of the share button, misinformation can be spread to all of that person's friends and followers. Then those people might share it with their friends, who again pass it along to even more people. Because sharing is so easy, it doesn't take long for misinformation to go viral. This is especially true in closed online communities, where most of the members of that group share the same ideological viewpoints. This homogeneous environment is sometimes called an echo chamber because everyone in that group holds the same core opinions, which leads to similar ideas being shared over and over again throughout the community, like an echo. The National Institutes of Health adds additional context to this, saying, "Misinformation flourishes when people don't have access to good information." People living in a virtual echo chamber, like a social media friends group, typically don't hear all sides of the story. They just hear different versions of that same story over and over again. And if people only hear one point of view or one side of the story, they will often adopt that point of view as their own, or reinforce the point of view that they already have. It's all they know. 

Confirmation bias also plays a role in the spread of misinformation. The APA Dictionary of Psychology defines confirmation bias as the tendency to gather evidence that confirms pre-existing expectations, typically by emphasizing or pursuing supporting evidence while dismissing or failing to see contradictory evidence. An American Psychological Association article from November 2023 states that we're less likely to question information that aligns with our point of view. In addition to that tendency, if we believe it, and it makes us mad or anxious or scared, we're likely to pass it on. We're also more likely to believe false information that we hear repeatedly, a phenomenon the APA calls the illusory truth effect. The article "How and Why Does Misinformation Spread?" points out that most online misinformation originates from a small minority of superspreaders. These originators typically use social media to motivate people to continue sharing and spreading the inflammatory and false information to others. Traditional media and news outlets like newspapers and television have more stringent review protocols, which reduce the likelihood of falsehoods getting passed off as news stories. However, online spaces mostly lack this type of oversight. And not surprisingly, it's in these social media spaces where misinformation and disinformation are thriving the most.

Amplifying this problem is the fact that more and more people are getting their news online and from social media. A 2023 Pew Research Center fact sheet reports that 86% of adults often or sometimes get their news from a mobile device like a smartphone, with half of Americans at least sometimes getting their news from social media. Since digital spaces are the easiest places to post and share misinformation, these trends are putting more and more people in the path of that fake news and misinformation. Misinformation and disinformation are a complex problem, and we can't possibly cover it all in one episode. So I hope you'll join me next week as I continue this conversation to help you better understand how people try to spread false information. I'll be breaking down the techniques that are being used in online disinformation campaigns. 

To learn more about today's topic, and explore other free resources, visit AvidOpenAccess.org. And, of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.