The White House is exploring new ways to vet AI models before their public release, including a review role for key intelligence agencies, a move that signals growing government concern over AI safety. The initiative, however, falls short of establishing a truly independent or enforceable safety framework amid concentrated industry power.
- White House weighs pre-release vetting of AI models by intelligence agencies
- Private sector controls most AI development and computing power
- Current oversight plans lack independence and enforcement mechanisms
What happened
The White House is considering an executive order that would establish a working group of government officials and industry leaders to vet new AI models before they become publicly accessible. Agencies and officials such as the NSA, the Office of the National Cyber Director, and the director of national intelligence could play key roles in the review process. Although this marks a departure from earlier administration policies that dismantled AI safety measures, the proposed framework does not guarantee that models deemed unsafe would be blocked from release.
This effort unfolds against a backdrop in which major AI companies control nearly 80 percent of global AI computing resources and employ most of the researchers capable of evaluating advanced systems. The administration's policy is also shaped by concerns about technological competition with China, a rivalry that has accelerated the consolidation of AI capabilities within private firms.
Why it matters
Relying on corporate discretion to assess AI safety has proven problematic. Anthropic's decision to withhold its Mythos model shows that safety judgments hinge on the choices of company leadership rather than on systematic regulation. Unlike sectors such as pharmaceuticals, where independent evaluation by agencies like the FDA is mandatory before a product reaches the public, AI currently lacks equivalent safeguards and transparent testing regimes.
The planned federal review process has also been criticized for insufficient independence, since it is being designed in collaboration with the same tech giants whose models it would evaluate. Without clear enforcement authority, and with most AI research concentrated in private companies, real accountability and risk management remain elusive. Voluntary bodies like the Center for AI Standards and Innovation have limited scope and influence, particularly because they focus on narrow national security risks rather than broad public safety.
What to watch next
Observers will look for concrete outcomes from the proposed working group, including whether it can set enforceable safety standards that are distinct from industry interests. How deeply intelligence agencies become involved, and how they balance national security against broader public concerns about AI harms, could shape regulatory models worldwide.
The evolving dynamics among private AI labs, academia, and the federal government will also be critical. Efforts to secure independent third-party auditing and greater transparency in AI model testing will indicate whether real progress toward credible safety oversight is achievable within the current ecosystem.