Unpacking Education & Tech Talk For Teachers
"Cheap Fakes"
In today's episode, you'll discover how four types of cheap fakes are being used to spread misinformation. Visit AVID Open Access to learn more.
#281 — Cheap Fakes
10 min
AVID Open Access
Keywords
fakes, misleading, cheap, misinformation, edited, media, fake, image, text, deep, social media, clip, context, photo, video, created, simple, circulated, explains, real
Speakers
Paul (98%), Transition (2%)
Paul Beckermann 0:01
Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann.
Transition Music 0:05
Check it out. Check it out. Check it out. Check it out. What's in the toolkit? What is in the toolkit? So, what's in the toolkit? Check it out.
Paul Beckermann 0:16
The topic of today's episode is Cheap Fakes. In recent episodes of Tech Talk for Teachers, I've talked about both deep fakes and misinformation. For those of you who missed that conversation, deep fakes are pieces of media that have been generated or manipulated using artificial intelligence to make something look real, even when it's not. Misinformation is information that is incorrect but is perceived to be accurate and true. As deep fakes get more of our attention and the technology to create them gets more sophisticated, it's easy to overlook a simpler type of misinformation—the cheap fake. That's right, we not only need to navigate the world of deep fakes, but cheap fakes as well. Cheap fakes are a lot like deep fakes in that they involve the manipulation of media to make something look real that isn't. However, they're different in that cheap fakes take very little skill to create and can be made with simple and accessible tools. You don't have to be an AI whiz or have a lot of money to make a cheap fake. The term cheap fake was coined by Britt Paris and Joan Donovan. They define a cheap fake as an AV manipulation created with cheaper, more accessible software, or none at all. MIT describes a cheap fake as a piece of media that has been crudely manipulated, edited, mislabeled, or improperly contextualized in order to spread disinformation. And the News Literacy Project describes cheap fakes as the less polished and more believable cousin, which takes real audio, images, and videos and cheaply manipulates or decontextualizes them. Bret Schafer from Michigan Online at the University of Michigan says, "Although deep fakes are more convincing, cheap fakes have fewer barriers to being created and disseminated and still misinform and mislead." To get a better idea of what cheap fakes are, let's take a look at some examples.
Transition Music 2:22
Let's count it, let's count it. Let's count it down.
Paul Beckermann 2:26
Number one, misleading text, headlines, and captions. Associating misleading text with a media clip provides a false context meant to confuse or persuade an audience. This can happen when a picture, video, or audio clip is accompanied by text that encourages the reader to misinterpret the attached media. This might take the form of misleading text in an article, a false heading, or a poorly worded caption. Oftentimes, the misleading text does not authentically go with the photo, video, or soundbite. Even when the media clips are authentic, false text captions and headlines can create a misleading or false narrative. For example, a spreader of misinformation might attach a provocative caption or headline like "Corruption is on the Rise" to an image of an individual. Even if the person in the image is denouncing corruption, the misleading caption can skew public perception and unjustly associate them with the very issue they oppose. This simple tactic can completely change the perceived context without altering the original media. It takes no technical skill, and if done through social media, costs nothing. Another example could include the use of old footage or media from another location to make it look like there's a current problem. Perhaps an out-of-context picture of a homeless encampment is posted on social media along with the text, "Community Danger Rises Around Growing Encampments." Even though it's not a photo of the actual area of concern, its association with the text will make readers and viewers assume that it is. Misleading text, headlines, and captions might be the easiest form of cheap fake to produce: simply post a talking point and attach some form of media that appears to corroborate it, even if it's totally unrelated. If it looks real, it will be perceived that way.
Number two, audio and video manipulation. There's considerable overlap with deep fakes on this one. When AI is used to fabricate or alter the media, they're considered deep fakes. However, when audio or video clips are edited with simple, readily available software programs, they're cheap fakes. Two of the most famous examples of this type of cheap fake involve Nancy Pelosi when she was Speaker of the United States House of Representatives. In one incident, a video of her speaking was slowed down to about 75% of its original speed to make it appear as if she was slurring her words. The video was shared widely on social media in an effort to damage her credibility and question her competence. The Associated Press notes another incident in which Nancy Pelosi was the victim of a cheap fake. This time, it involved an edited video of her at the State of the Union address. The video footage was edited and remixed to make it appear as if she tore up her script while the President was presenting a solemn tribute to a military family. While she did tear up her script, it was not done during that tribute, but rather after the event was finished. The misleading edit was done to portray her as unpatriotic and was again circulated on social media.
Number three, selective cropping. It can be just as misleading to creatively cut out portions of a video, image, or audio clip as it can be to remix those clips. Sometimes this is done by splicing unrelated clips together to give the illusion of a single event. Other times, the removal of a portion of a media clip can cast it in a new light, projecting a misleading meaning. For example, MIT describes a viral video of President-Elect Joe Biden in which he says, "We have put together, I think, the most extensive and inclusive voter fraud organization in the history of American politics." The clip is presented in a way that makes President Biden look like he's admitting to fraud. However, MIT explains that when properly contextualized in terms of the full speech, Biden's comments actually describe a program to protect voters in the event of baseless litigation regarding the election result. The edit turned an authentic piece of media into a cheap fake, easily created with simple video editing software. In another example, a photograph of a political rally was cropped to show only a small group of attendees, suggesting poor turnout. The full image, however, revealed a much larger crowd. This selective cropping was used to mislead the audience about the event's popularity and support. Selective cropping misleads by telling only part of the truth, and telling it in a way that's deceptive.
And number four, edited images. When images are created using generative AI, they're considered deep fakes. However, easily accessible software such as Adobe Photoshop makes it inexpensive and easy to create a cheap fake image. Even if they're crudely produced, these images have the potential to fool people. Seeing is believing, after all, and many consumers don't take the time to study media in any real detail. They often don't have the time or interest in doing so. After Hurricane Harvey in 2017, an image supposedly showed a shark swimming alongside a flooded Houston highway. Although circulated widely on social media, the image was actually a cheap fake that has reappeared several times during various natural disasters around the world. It's been used almost to the point that it's becoming a meme. Still, some people continue to believe that it's real, and in the case of the Houston example, that it's a real consequence of the hurricane's flooding. Similarly, a cheap fake photo of former President Donald Trump being arrested circulated on social media. At first glance, it looked authentic. However, looking more carefully at his neck and hand, it becomes clear that something just doesn't look natural about the photo. Someone pasted the likeness of former President Trump into the scene to make it appear as if he was being arrested. This type of image could be generated using artificial intelligence tools, but it could also be done as a cheap fake with inexpensive photo editing software and fairly little expertise.
Although cheap fakes are less sophisticated than deep fakes, they're still being used to effectively spread misinformation. Even if the images, video, or audio seem too outrageous to be true, people still believe them. The News Literacy Project explains this phenomenon, saying that deeply rooted biases tempt people to quickly jump to conclusions when they see social media content that appears to confirm their preconceived beliefs. This is especially true when it paints political foes in an unflattering light. The project goes on to explain how we might begin to combat this form of misinformation. They write: "Viral social media posts rarely contain enough information to accurately convey complex realities, and are often presented out of context and attached to baseless and sensational assertions. By practicing a little restraint and seeking out additional information, such as standards-based news coverage or a high-quality fact check, people can restore the necessary context to viral outrage posts." The continued use of cheap fakes is one more reason to teach your students to be savvy consumers of information. We need to encourage them to ask probing questions like: Who created this? Why? Could this have been edited? What's the source? With a healthy amount of skepticism, we can encourage them to confirm and verify information rather than being misled by cheap fakes and other forms of misinformation.
To learn more about today's topic and explore other free resources, visit AvidOpenAccess.org. And, of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.
Transcribed by https://otter.ai