In response to mounting regulatory pressure worldwide, Meta has implemented enhanced AI-based age verification tools designed to better detect teenagers and users under 13 on its platforms, Instagram and Facebook. These measures come alongside calls for standardized age checks at the app store level and intensified scrutiny of Meta’s child safety practices.

  • AI estimates user age from activity, profiles, and visual cues without facial recognition.
  • Teen accounts gain automatic restrictions on messaging, content exposure, and features.
  • Meta urges app stores to enforce age verification to support child-safe environments.

What happened

Meta Platforms has introduced new AI-powered systems to more accurately identify teenagers and underage users on Facebook and Instagram. These systems analyze a variety of signals, including account activity, social interactions, profile details, and visual indicators in photos and videos, to estimate a user’s age. Notably, to protect user privacy, the technology does not use facial recognition.

Upon detection, accounts suspected to belong to teenagers are assigned enhanced safeguards like restricted messaging from strangers, limited exposure to sensitive content, and certain feature restrictions. Meta is also streamlining the reporting and review process for underage accounts, combining AI with human oversight to improve the speed and accuracy of enforcement.
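At a high level, multi-signal age estimation of the kind described above can be thought of as scoring an account against several weak indicators and applying safeguards once the combined score crosses a threshold. The sketch below is purely illustrative: the feature names, weights, and threshold are hypothetical assumptions, not Meta’s actual signals or methods, which are not public.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account features; Meta's real signals are not public."""
    self_reported_age: int          # age entered at sign-up (can be false)
    follows_teen_creators: float    # 0..1 share of followed accounts popular with teens
    school_terms_in_bio: bool       # profile text mentions grades or school
    daytime_weekday_activity: float # 0..1 share of activity during school hours

def likely_minor_score(s: AccountSignals) -> float:
    """Combine weak signals into a 0..1 score; weights are illustrative only."""
    score = 0.0
    if s.self_reported_age < 18:
        score += 0.4
    score += 0.3 * s.follows_teen_creators
    if s.school_terms_in_bio:
        score += 0.2
    score += 0.1 * (1.0 - s.daytime_weekday_activity)  # weak, noisy signal
    return min(score, 1.0)

def apply_teen_safeguards(s: AccountSignals, threshold: float = 0.6) -> bool:
    """Flag the account for teen protections when the score crosses the threshold."""
    return likely_minor_score(s) >= threshold
```

In a real deployment the scoring would come from trained models rather than hand-set weights, and, as the briefing notes, flagged accounts would go through human review rather than being restricted on the score alone.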


Why it matters

The expansion of AI-based age verification comes amid increasing global focus on protecting minors online. Regulations such as the Digital Services Act in the European Union and legal actions in U.S. states like New Mexico highlight the pressure on tech companies to prevent children under 13 from accessing platforms not designed for them.

Meta’s efforts address both regulatory compliance and parental concerns, reflected in survey data showing strong public support for age verification mandated at the app store level. However, the technology and its enforcement also raise important debates around privacy, surveillance, and the reliability of automated age estimation.

What to watch next

Meta plans to introduce similar Facebook protections in the UK and the EU by June, while expanding Instagram safeguards in the EU and Brazil and Facebook protections in the U.S. The company’s ongoing child safety trial in New Mexico could have far-reaching implications depending on court rulings around underage user restrictions and required platform adjustments.

Moreover, Meta’s call for app stores to enforce age verification standards signals a push for industry-wide solutions, potentially setting new norms for age assurance that balance user privacy with child safety. Stakeholders will be watching how regulators respond to these measures and whether other tech companies follow suit.

Source assisted: This briefing began from a discovered source item from MediaNama.
