A federal judge has ruled that the Department of Government Efficiency’s method of canceling more than $100 million in National Endowment for the Humanities grants through ChatGPT-assisted screening was unlawful and unconstitutional.
- DOGE used ChatGPT to screen grants for DEI-related content
- Over $100 million in NEH grants were canceled based on AI-generated criteria
- Federal court found the process violated constitutional rights
What happened
The Department of Government Efficiency (DOGE) canceled over 1,400 grants valued at more than $100 million that were administered by the National Endowment for the Humanities (NEH). This action was based on an AI-assisted screening process where staff used ChatGPT to determine whether grant applications related to diversity, equity, and inclusion (DEI). The process involved submitting brief grant descriptions to ChatGPT with instructions to identify DEI-relevant content, which then became the basis for revoking funding.
Testimony revealed that DOGE employees used specific keywords tied to protected traits—such as race, ethnicity, sexuality, and national origin—to systematically disqualify grants. Despite relying on AI, the staff neither defined how the tool should interpret DEI nor conducted meaningful independent reviews, effectively blacklisting grants tied to protected characteristics. This led to a large-scale denial of grants supporting projects related to civil rights, the Holocaust, Indigenous culture, and more.
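To make the mechanism concrete: the process described in testimony amounts to simple keyword matching against grant descriptions. The sketch below is purely illustrative, assuming a hypothetical term list and matching rule; it is not DOGE's actual prompt, keyword set, or code, and it shows why such screening flags projects about protected characteristics regardless of merit.

```python
# Hypothetical sketch of keyword-based grant screening, as the testimony
# describes it. The term list and matching logic are assumptions for
# illustration only, not the actual criteria DOGE used.
FLAG_TERMS = {"race", "ethnicity", "sexuality", "national origin",
              "diversity", "equity", "inclusion"}

def flag_description(description: str) -> bool:
    """Return True if the description mentions any flagged term."""
    text = description.lower()
    return any(term in text for term in FLAG_TERMS)

# Example: a civil-rights history project is flagged; an unrelated one is not.
grants = [
    "An oral history of race relations in the rural South",
    "Digitizing 18th-century shipping ledgers",
]
flagged = [g for g in grants if flag_description(g)]
```

The point the court stressed follows directly from this structure: because the match keys on the topic itself, any grant touching race, ethnicity, or similar traits is disqualified categorically, with no human judgment about the project's actual content or value.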
Why it matters
The federal judge ruled DOGE’s use of ChatGPT violated constitutional protections, including the First Amendment and the equal protection component of the Fifth Amendment’s due process clause. The court emphasized that using AI to target grants based on protected characteristics is inherently discriminatory and unlawful, particularly as these topics align directly with Congress’s intended mission for NEH funding.
Furthermore, the ruling underscores the responsibility of government agencies to ensure AI tools are used within legal limits and do not substitute for proper human judgment. The decision clarifies that AI cannot be used as a shield to justify discriminatory government actions and reaffirms that agencies remain accountable for how automated tools impact constitutional rights.
What to watch next
With the court invalidating DOGE’s grant cancellations, the affected funds are expected to be restored to the NEH projects that had been labeled wasteful. Observers will be watching how DOGE and other government bodies revise their grant evaluation protocols to comply with constitutional standards while incorporating emerging AI technologies.
This ruling may also prompt broader scrutiny of AI use in federal decision-making where issues of equity, fairness, and protected characteristics arise. Legislative and judicial responses could follow to set clearer boundaries and best practices for AI deployment in government operations, ensuring transparency and constitutional compliance in automated evaluations.