A troubling new report by Microsoft finds that most teenagers are increasingly unable to identify fake videos, images, and audio created using artificial intelligence, even as their use of AI tools continues to grow.
According to the company’s annual global study, released ahead of International Safer Internet Day on Tuesday, only 25% of teenagers correctly identified deepfakes, a sharp decline from 46% last year. At the same time, 91% of respondents reported concern about harms and risks linked to AI use.
Only 25% of teenagers were able to correctly identify deepfakes
(Photo: Shutterstock)
The findings are based on a survey conducted last summer among 14,797 respondents in 15 countries. Over the past decade, Microsoft has surveyed more than 130,000 people in 37 countries as part of its ongoing research into digital life among both young people and adults.
More connected, less secure
While teenagers reported feeling more connected and productive thanks to technology and AI, they also said they feel less safe in the digital space. Overall exposure to online risks rose significantly, with 64% of teens reporting they experienced at least one online risk in the past year.
The most common threats included hate speech, reported by 35% of respondents, online scams at 29%, and cyberbullying at 23%.
At the same time, the study found signs of increased awareness and response. Among teens who encountered an online risk, 72% said they spoke with someone about it, and reporting rates rose for the second consecutive year. About 75% said they took protective actions such as blocking users or closing accounts.
Weekly use of generative AI jumped sharply, from 13% in 2023 to 38% in 2025. The most common uses were answering questions (42%), planning (41%) and improving work efficiency (37%).
Despite the growing adoption, concerns remain widespread. Among teens surveyed, 91% expressed some level of concern about AI, citing risks such as sexual exploitation or abuse (78%), AI-based scams (77%) and privacy violations (70%).
An overwhelming majority of respondents, about 81%, said they expect technology companies to restrict illegal and harmful content. The most requested protections included filtering sexual content and limiting messaging to known contacts.
Microsoft said it continues to strengthen safety mechanisms to address evolving digital risks. The company recently closed applications for the first cohort of its new AI Futures Youth Council, which will include teenagers from the United States and the European Union and provide direct feedback on emerging technologies.
Microsoft also announced a collaboration with the Cyberlite organization on a new research initiative examining how teens ages 13 to 17 use AI-powered companion-style applications, alongside continued development of safety tools across Windows and Xbox platforms.
“The research data illustrate how vital it is to adopt a systemic approach to digital safety, especially in the age of artificial intelligence,” said Noa Gevaon, Microsoft Israel’s director of government relations. “Time and again, we see that the right combination of policy, technology and education is key to strengthening young people’s digital resilience.”