What the government asked
The Pentagon demanded AI companies remove all safety restrictions and make their platforms available for "all lawful uses" — including domestic surveillance. Secretary Hegseth gave Anthropic until February 27, 2026, to comply or face retaliation.
- Removal of all company-imposed safety guardrails on military and government use
- Pattern analysis to flag "persons of interest" by what they ask an AI
- Sentiment mapping to identify political dissent across millions of users
- Behavioral prediction — what you're planning, what you're afraid of, what you might do next
- Retroactive search — everything you've ever said, searchable after the fact
- All of it unchecked: no court has ever ruled that AI conversations have Fourth Amendment protection
One company said no
Dario Amodei, CEO of Anthropic, refused to let Claude be used for mass domestic surveillance.
Anthropic holds a $200 million DoD contract and supports national security use of AI. But Dario drew two red lines: no mass surveillance of Americans, and no fully autonomous weapons. For holding those two lines, Anthropic now faces contract termination, a "supply chain risk" blacklist, and the threat of the Defense Production Act — wartime powers used to force compliance.
This is not surveillance. This is mind-reading.
When you call someone, you choose your words. When you text, you self-edit. When you email, you know someone might read it.
With AI, you think out loud.
You brainstorm. You ask the question before you know if it's a real question. You say the thing you're afraid of before you've decided if you're actually afraid. You use AI as an extension of your own mind.
- Your medical fears before you called a doctor
- Your legal questions before you called a lawyer
- Your financial situation when you were figuring out taxes
- Your political opinions in casual conversation
- Your relationship problems at 2 AM
- Your business ideas before you told anyone else
- Your fears, your plans, your weaknesses
That is not a conversation. That is your thought process, externalized. The raw, unfiltered version of your thinking that you never intended anyone to see. The version that exists before you decide what you actually believe.
No surveillance system in history has had access to that. Not the Stasi. Not the NSA. Not PRISM.
An all-seeing eye, pointed directly at the inside of your head.
xAI already said yes. Others are negotiating.
If Dario had said yes too, you would never have heard about any of this. It would have just happened. Silently. Your AI would start feeding your thoughts into a system you never consented to, and you would never even know the moment it changed.
The punishment for saying no
- Political pressure — frame Anthropic as uncooperative or a national security risk
- Contract exclusion — cut them off from government cloud and compute resources
- Talent drain — make it uncomfortable enough that researchers leave
- "Supply chain risk" designation — the same type of tool used against Huawei and Kaspersky, now threatened against an American company for refusing to surveil Americans
- Defense Production Act — the Pentagon has threatened to invoke wartime powers to force Anthropic to hand over its technology
What we are demanding
To the United States Congress
Establish Fourth Amendment protections for AI conversations before any government access framework is created. Our conversations with AI deserve the same protection as our phone calls, our emails, and our papers.
To Anthropic
Do not cave. The pressure you are facing is proof that your position matters. Your users are behind you.
To every AI company that said yes
Your users did not consent. Reconsider.
To every person who uses AI
Sign this. Share this. Make it loud enough that pretending it isn't happening stops being an option.
There is a line between a tool that serves its users and a tool that surveils them.
Dario drew the line.
We are standing on it.
SIGN THE PETITION
Takes 10 seconds. No account needed.
#DefendTheLine