Unpacking Education & Tech Talk For Teachers
The Great Unwiring (Part III: Brookings AI Study–Risks)
In today’s episode, we'll review the potential risks of AI in education as outlined in The Center for Universal Education at Brookings' study "A New Direction for Students in an AI World: Prosper, Prepare, Protect." Visit AVID Open Access to learn more.
Paul Beckermann 0:00 Welcome to Tech Talk for Teachers. I'm your host, Paul Beckermann.
Transition Music with Rena's Children 0:05 Check it out. Check it out. Check it out. What's in the toolkit? Check it out.
Paul Beckermann 0:16 The topic of today's episode is "The Great Unwiring, Part Three: Brookings AI Study—Risks."
Paul Beckermann 0:24 Today is part three of unpacking the report titled, "A New Direction for Students in an AI World: Prosper, Prepare, Protect" by the Center for Universal Education at Brookings. In parts one and two, I shared an overview of the study as well as the potential benefits of bringing AI into the K-12 classroom.
Today, in part three, I'm going to take a look at the potential risks of AI adoption as outlined in the report. The report outlines these risks in the section titled, "AI Can Diminish Learning if Overused with No Guardrails." The word "can" stands out here. It doesn't say it will diminish learning. Rather, AI adoption can diminish learning if overused with no guardrails. This word choice is important and aligns with the report's overall message that while current risks of AI adoption may outweigh the potential benefits, there is still time to shape its adoption so that it has an overall positive impact on education.
It's in this context that the report outlines the risks. To successfully navigate the disruptive era of AI, we need to go in with our eyes wide open. It's only by seeing the obstacles ahead and taking time to put necessary guardrails in place that we can steer ourselves away from potential pitfalls and move toward a productive pathway with appropriate safeguards in place.
Paul Beckermann 1:47 In their introduction to the risks, the authors point out that the potential for harm is most acute when AI is used with minimal adult supervision and guidance, when AI replaces, rather than enhances, thinking and human interaction, and when students lack adequate preparation for productive AI use. While these potentially negative circumstances may currently be the norm in some places, there is still time to course correct if we make ourselves aware of them. In that context, let's take a look at the specific risks outlined in the report.
Transition Music with Rena's Children 2:21 Let's count it. Let's count it. Let's count it down.
Paul Beckermann 2:24 Risk one: AI can undermine students' cognitive development. This is the most frequent concern raised in the study. Participants worry that students will too often use AI to take the easy way out and bypass the challenge of critical thinking. This type of behavior has the potential to stunt cognitive development. The American Psychological Association defines cognitive development as the growth and maturation of thinking processes of all kinds, including perceiving, remembering, concept formation, problem-solving, imagining, and reasoning. In other words, this hits at the core of our educational system.
Many of the Brookings study participants actually went a step further and said that they were concerned that routine use and overuse of AI may not only harm development, but may actually lead to cognitive decline, much like muscles atrophy from lack of use. The worry is that students will become dependent on AI thinking for them and offload more and more of the hard cognitive work to the AI chatbot.
The study reports that adults can see tremendous productivity gains by using AI. At the same time, it points out that those adults are using AI to speed up tasks that they often already know how to do; they're using the critical skills that have already been developed, and they're using AI to supplement those skills. The danger for students is that they have not yet developed these core skills and need to experience appropriate, productive struggle to do so.
In some ways, our system of education has contributed to this problem by equating grades with assignment completion and boxes to check off. Rather than striving to learn, students are striving to check off a list of required tasks so they can move on. In this model, education has become transactional rather than exploratory; grades become the focus more than learning, and AI is really good at checking off boxes and getting good grades, so students are increasingly tempted to use it for those purposes.
The authors write, "In a world where AI is always available, motivation and engagement will be the defining factor separating students who think deeply from those who use AI to shortcut their development." If students aren't motivated to learn, AI becomes the "easy button." If AI is here to stay, the challenge then becomes: how do we motivate and engage our students so that they want to do the challenging work and resist the temptation to offload their thinking?
The Brookings report indicates that task completion and compliance-based work are winning out over motivation and engaging learning experiences. This is leading to a decline in the mastery of both basic facts as well as deeper conceptual understanding of course content. This trend can have a negative impact on core cross-curricular skills like reading comprehension and writing. When students use AI to simplify or summarize long text documents or use AI to write up a summary or action plan for them, we see AI offloading in action and complex thinking skills begin to diminish.
Risk two: AI can impede students' social and emotional development. The authors write that study participants worry that AI is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health. In addition to relying on AI to complete tasks, many study participants are also concerned that students are relying on AI chatbots to be their friends or emotional support companions. This can happen with regular chatbots like ChatGPT or Gemini, but it's even more common with chatbots designed to foster connection, like Character.ai.
One panelist states, "AI systems, especially conversational ones, are built to satisfy, to mirror our tone, reinforce our views, and stimulate empathy. They create an illusion of connection that is difficult to distinguish from genuine rapport." The report points out that this is a particular concern for students who are lonely or who are experiencing mental health or emotional issues. The connection with AI chatbots becomes even more alluring because the AI is always available and oftentimes comes across as more empathetic than another human.
This contributes to other concerns. Will students learn to interact with others based on their chatbot interactions, rather than messy and imperfect human interactions? With more and more people suffering from loneliness, will people turn to chatbots for companionship because a chatbot is always available and feels less risky than a human interaction? The answer to these questions is still unclear, but the concern is definitely there. While indications are that most of this type of interaction happens outside of school, it is still raising significant concerns among those in schools.
Paul Beckermann 7:17 Risk number three: AI can degrade trust in education. This breakdown in trust is outlined on several levels. First, and potentially most obvious, is with regard to teacher–student relationships. Students must have confidence in their teachers' skills and competencies, and teachers must trust that students are doing their own work and not relying on AI to do it for them.
According to research cited in the report, more than half of U.S. secondary public school teachers felt that AI has made them more distrustful of the integrity of student work. Another study showed that over half of parents and students question if teachers who use AI are doing their job. In general, students were okay with teachers using AI to plan lessons, but they did not like receiving AI feedback and grades, indicating that automated responses made them feel like they weren't worth the teacher's time to provide personalized feedback.
Perhaps the biggest danger here is the cat-and-mouse game of trying to catch students cheating. Trust is eroded when teachers are constantly trying to determine if students are using AI to cheat; this can lead to a constant state of suspicion, where teachers think students are cheating, and students begin to feel defensive, even when they are not trying to cheat.
Perhaps most harmful of all is when students are incorrectly accused of cheating with AI. A reliance on AI detection tools is particularly problematic in this regard because they can regularly lead to false positives. Another area of trust degradation involves the truth of information. AI chatbots hallucinate, and they can produce incorrect outputs. While most people appear to be aware of this, many still do not verify or cross-reference the outputs they receive.
Ironically, at the same time, there's evidence that people trust the output of the AI more than an answer they get from another human. Other areas of mistrust include a loss of self-confidence and trusting one's own ideas less while relying more on AI. Another is the mistrust of Big Tech and a suspicion that AIs are being put out there to collect personal data for ulterior motives rather than the common good or for the functionality of the program.
Risk four: AI can threaten student safety.
Paul Beckermann 9:34 This risk builds off those worries about the intentions of Big Tech. What are their motives, and do those motives drive practices that potentially put people and their data in harm's way? Schools are tasked with keeping students safe, and AI is making that mission more difficult. There are a number of challenges entwined in this. Tech companies are harvesting data to train their platforms. There are inconsistent regulatory frameworks in place to protect users.
Paul Beckermann 10:00 School systems are unprepared for these rapidly changing challenges, and students are often too quick to share their personal information with AI systems. While digital AI tools need some data to be able to function and provide relevant responses, we need to balance that by protecting students from unnecessary or potentially harmful harvesting of their personal information. There are federal FERPA and COPPA protections in place to help with this process. Still, there's much that is not understood about how AI systems work and how or what data is harvested.
Also, states provide another layer of school regulation policy beyond FERPA and COPPA, and because of this decentralized process, there's inconsistency from state to state. This can lead to another layer of confusion on top of regulatory concerns. Cyberattacks are also on the rise, potentially compromising that student information.
According to this report, the Center for Internet Security in 2025 reports that 82% of 5,000 surveyed schools in the United States experienced a cyberattack between July 2023 and December 2024. Globally, ransomware attacks in the education sector surged 69% in the first quarter of 2025 compared to the same period of 2024. Some of these safety problems are embedded in the technology itself. Some are dependent on available media literacy training, and still others are due to insufficient policy safeguards.
Paul Beckermann 11:28 Risk five: AI dependence can erode students' autonomy and agency. There's a concern that as students use AI more, they will become more and more dependent on it. That dependence has the potential to make students less confident in their own work. They may begin to feel that they need to use AI, and if they don't, their work won't be good enough. This all can lead to a lack of self-confidence and, in turn, an over-reliance on AI.
Sometimes when AI is not available, it can also lead to added anxiety. AI use can also lead to something called the "AI flywheel effect." This essentially means that as AI gets better and users gain more confidence in the outputs, they use it more and more. They start to say, "Why wouldn't I use it if it gives me the best answer?" This can lead to over-reliance, using AI not only for learning, but also for entertainment, relationships, and even life decisions.
Paul Beckermann 12:24 Risk six: AI can deepen equity divides. This is apparent in several ways. First of all, not everyone has access to AI. In the U.S., this divide is perhaps not so significant, as most schools provide access to AI tools. Schools in other parts of the world, however, may have less access. There's also a socio-economic divide, which further impacts access.
This one is not only apparent worldwide, but also within the U.S., where some schools can afford to purchase elite tools which often ensure greater security as well as improved functionality and accuracy. Schools that rely on the free tiers of AI products are getting significantly more limited functionality and quality of output. In this light, the report goes so far as to state, "This may be the first time in the history of educational technology that schools must pay more for accurate factual information."
Paul Beckermann 13:18 Beyond that, the report states that in the U.S., students in lower-income rural counties are less than half as likely to be permitted to use AI at school compared to students in wealthier urban communities (15% versus 41%). This divide also pertains to the implementation of policies that can help guide use and ensure safety.
Though there are exceptions, the economic and policy advantages often favor urban areas over less regulated and poorer rural areas. The report sums up the equity divide, saying, "Taken together, these divides—socio-economic, rural, urban, regional, linguistic, and educational—accentuate a long-standing pattern in educational technology known as the 'Matthew Effect,' where the rich get richer, with wealthy students who use AI effectively capitalizing on its benefits, while poor students do not."
Paul Beckermann 14:13 As we saw in last week's episode, AI has great potential. However, as today's episode outlines, it also poses significant risks. Ultimately, the path we travel down will be determined by how we mitigate the risks and how we utilize this powerful tool in a way that leads to AI-enriched learning experiences, not AI-diminished ones. Tune in next week as I finish up my review of this report with arguably the most important section: the action steps—12 recommendations for mitigating risks and harnessing benefits.
Paul Beckermann 14:48 To learn more about today's topic and explore other free resources, visit avidopenaccess.org. Specifically, I encourage you to check out the article collection, "AI in the K-12 Classroom," and, of course, be sure to join Rena, Winston, and me every Wednesday for our full-length podcast, Unpacking Education, where we're joined by exceptional guests and explore education topics that are important to you. Thanks for listening. Take care, and thanks for all you do. You make a difference.