Why the US Government Can’t Be Trusted with AI Surveillance
Mike Masnick, founder and editor of Techdirt, has spent over two decades dissecting the intersection of technology, law, and civil liberties. His work has exposed how governments and corporations weaponise legal ambiguity to erode privacy, from the NSA's post-9/11 mass surveillance programs to today's battles over AI ethics. In a recent episode of Decoder with Nilay Patel, Masnick minced no words: the Pentagon's clash with Anthropic isn't just another contract dispute. It's a textbook example of how the U.S. government, time and again, proves it cannot be trusted with unchecked surveillance powers, especially when advanced AI is involved.
Anthropic's Battle: A Recap
Anthropic, the AI lab behind Claude, is now in open conflict with the US Department of War. The company refused to remove its red lines banning mass surveillance of Americans and lethal autonomous weapons - positions that led Defense Secretary Pete Hegseth to blacklist Anthropic as a "supply chain risk to national security" and threaten to invoke the Defense Production Act.
As Masnick put it:
"It's not that anyone is worried about the NSA looking through your Claude usage. Its about them going out and getting third-party data from Amazon or more likely the sort of sneaky, hidden data brokers that serve ads on your phones and know your location and your interests and things like that. And then feeding that into a system that Claude would then work on. Thats what Anthropic really didn't want to be a part of."
The Pentagon's demand was clear: permission to feed Claude third-party data (location histories, browsing records, credit card transactions) collected by shadowy data brokers, all under the guise of "lawful" use.
The Government's Playbook: "We'll Define 'Target' Later"
Masnick's core argument is that the U.S. government has a long, bipartisan history of redefining words to justify surveillance overreach. The Snowden revelations showed how the NSA's legalistic wordplay, such as reinterpreting "target" or "collection", enabled mass data harvesting. The Trump administration, he notes, has dropped even the pretense of subtlety:
"There's just too much history of government lawyers twisting the interpretations of simple words like target to expand surveillance in complicated ways... But theres nothing subtle or sophisticated about policymaking in the Trump era. With Anthropic, were having a very loud, very public debate about technology and surveillance in real time."
The Pentagon's negotiations with Anthropic followed this script. Officials pushed for access to unclassified commercial data, arguing it would only be used for "lawful" purposes. But as Masnick highlights:
"The history of the NSA and mass surveillance in America proves we cant trust the Pentagon to follow the law when it comes to private data collection."
Anthropic's counteroffer, which would have restricted use to classified intelligence under FISA, was rejected. The Pentagon wanted more.
OpenAI's Caveat: "Any Lawful Use"
While Anthropic held firm, OpenAI folded. Its deal with the Pentagon includes the phrase "any lawful use", a clause Masnick and other critics argue is a Trojan horse for surveillance. As he explained:
"The alternative theory... is that [OpenAIs lawyers] knew this, but thought that they could play the same game that the NSA played for a few decades: as long as they say these things and then they say the words, but they dont reveal the actual interpretations, that they could get away with it too."
OpenAI's Sam Altman framed the agreement as aligned with Anthropic's red lines. Masnick isn't buying it:
"The law doesn't say what Sam Altman claims it does."
Why This Matters
Anthropic's legal battle isn't just about one company's principles. It's a test of whether AI developers can resist government pressure to normalize mass surveillance. For Masnick, the stakes are clear:
"Recharging a contract of 200 million dollars with the major institutional buyer in the world is not something any startup can do. But the case shows that even the best-capitalized actors face this tension: comply and enable surveillance, or resist and risk being crushed."
For anyone who values privacy, the lesson is stark: when it comes to surveillance, the U.S. government's track record demands skepticism. Anthropic's stand is a rare example of pushback. Whether it survives the legal onslaught remains to be seen.
Why Educators Should Care
When AI companies cave to government surveillance demands, they set a dangerous precedent for how student data could be used. Schools and universities already rely on tools from OpenAI, Google, and others to manage learning, assess work, and even monitor behavior. If these platforms are built on partnerships with military surveillance, what protections exist for students’ private thoughts, locations, or biometric data? The Anthropic-Pentagon standoff isn’t just about abstract ethics—it’s about whether we trust corporations to safeguard the next generation’s privacy.
For teachers, the message is clear: demand transparency from edtech providers, or risk complicity in systems that treat young people as data points to be mined. The classroom should be a space for critical thinking, not a training ground for unchecked surveillance.
Further Reading
- The Verge: Anthropic doesn't trust the Pentagon, and neither should you
- The Decoder: Inside the Anthropic-Pentagon breakdown