Campbell Brown, formerly Facebook’s news chief, is tackling misinformation in AI outputs with Forum AI, a startup that benchmarks large language models against expert consensus on sensitive topics such as geopolitics and mental health.
- Forum AI sets expert-driven benchmarks for LLMs on complex topics
- Enterprise demand for AI accuracy may drive better standards
- Current AI compliance audits are inadequate, says Brown
What happened
Campbell Brown, formerly Meta’s news chief and a veteran journalist, founded Forum AI to critically assess how foundation models handle ambiguous, high-stakes subjects such as geopolitics, mental health, and hiring. At a recent TechCrunch event, she described the firm’s approach: recruiting leading experts to build benchmark tests, then training AI judges to reliably evaluate model outputs against that expert consensus.
Brown says that when she saw the first public release of ChatGPT, she realized AI would become the primary channel through which people get information. Because current language models often return inaccurate or biased answers, she launched Forum AI to close that gap. The company has attracted notable figures from politics and cybersecurity to help set standards, aiming for roughly 90% agreement between its AI judges and domain experts.
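Forum AI has not published its evaluation methodology, but the core idea Brown describes, scoring an AI judge by how often its verdicts match an expert panel's consensus, can be sketched in a few lines. The labels, function names, and sample data below are hypothetical, not Forum AI's actual benchmark.

```python
# Illustrative sketch only: Forum AI's real pipeline is not public.
# It shows the general idea of measuring how often an "AI judge"
# agrees with the consensus of a panel of human domain experts.

from collections import Counter
from typing import Sequence


def expert_consensus(panel_labels: Sequence[str]) -> str:
    """Return the majority label from a panel of expert ratings."""
    return Counter(panel_labels).most_common(1)[0][0]


def judge_agreement_rate(judge_verdicts: Sequence[str],
                         expert_panels: Sequence[Sequence[str]]) -> float:
    """Fraction of items where the AI judge matches the expert consensus."""
    matches = sum(
        verdict == expert_consensus(panel)
        for verdict, panel in zip(judge_verdicts, expert_panels)
    )
    return matches / len(judge_verdicts)


# Hypothetical example: three model answers rated "accurate" or "misleading".
judge = ["accurate", "misleading", "accurate"]
experts = [
    ["accurate", "accurate", "misleading"],
    ["misleading", "misleading", "misleading"],
    ["misleading", "accurate", "misleading"],
]
print(f"Judge-expert agreement: {judge_agreement_rate(judge, experts):.0%}")  # 67%
```

Against a metric like this, the roughly 90% figure Brown cites would simply be a threshold the AI judge must clear before it is trusted to stand in for the expert panel at scale.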
Why it matters
Brown highlights a fundamental tension: the companies building language models prioritize coding and math performance over the accuracy and nuance needed for reliable news and information. The result has been political bias and reliance on questionable sources in AI outputs, undermining public trust and understanding.
The stakes are high: Brown warns that without intervention, the public, including younger generations, may be left with misinformation or an oversimplified understanding of complex issues. Moreover, current AI audit practices, particularly those tied to hiring-bias laws, fail to detect many compliance violations, showing that superficial evaluations are not enough for accountability.
What to watch next
Brown hopes that enterprise AI users, who face regulatory and liability risks in credit, insurance, and hiring decisions, will drive demand for more robust accuracy and fairness standards. Forum AI plans to capitalize on that market, though turning compliance into consistent revenue remains difficult given the entrenched reliance on check-the-box audits.
How AI companies respond to calls for truthfulness and nuance in their models will be critical. Brown sees two possible outcomes: AI could reinforce misinformation, or it could become a tool that gives users honest, well-contextualized information, depending on whether the industry embraces or resists rigorous evaluation frameworks. She remains cautiously optimistic.