I was at my desk Tuesday when the Bloomberg alert came through.
Bessent and Powell — the Treasury Secretary and the Fed Chair — had called an emergency meeting with every major bank CEO in America.
Not about interest rates. Not about the war. Not about a bank run.
About a single AI model.
Built by a single company.
"Yeah, Sovereignty, Sure"
I run a project called Sovereign Cloud. The whole thesis is that governments and businesses need to own their own AI infrastructure — their data, their compute, their models — or they'll be at the mercy of whoever does.
People nod politely when I say this. It sounds reasonable but abstract. "Yeah, sovereignty, sure."
Tuesday made it concrete.
What Mythos Actually Does
Anthropic released a model called Mythos.
During testing, it found thousands of zero-day vulnerabilities across every major operating system and web browser.
Not theoretical. Real, exploitable flaws.
One in OpenBSD had been hiding for 27 years.
Another in a video processing library called FFmpeg had survived five million automated scans.
Anthropic didn't release it publicly.
Instead, it picked 12 companies — Amazon, Apple, Microsoft, Google, Nvidia, JPMorgan, CrowdStrike, Palo Alto Networks, and a few others — and gave them early access through something called Project Glasswing.
$100 million in credits. The mission: harden critical infrastructure before similar capabilities show up in someone else's hands.
Anthropic decided who gets defended first.
Not Washington. Not any government. A company in San Francisco.
Two Meetings, One Week
Tuesday afternoon, Bessent and Powell sat in a room at Treasury headquarters with the CEOs of Bank of America, Citi, Goldman Sachs, Morgan Stanley, and Wells Fargo.
Jamie Dimon couldn't make it. Everyone else dropped what they were doing.
The message was simple: this model exists, models like it are coming, and you need to be ready.
On Friday — today — the Bank of Canada held its own version. Same meeting. Same urgency. Same reactive posture.
72 hours later. Following America's lead. Discussing a model built entirely outside Canadian jurisdiction, by a company Canada has no relationship with, distributed to partners Canada had no say in choosing.
Canada found out the same way everyone else did. From Bloomberg.
The Company America Is Trying to Destroy
Here's where it gets surreal.
Anthropic is currently being sued by the United States government.
In February, the Pentagon designated Anthropic a supply-chain risk to national security. That label had previously been reserved for foreign adversaries.
The reason? Anthropic refused to let Claude be used for autonomous weapons or mass surveillance of American citizens.
President Trump ordered every federal agency to stop using Claude.
Defense Secretary Hegseth called the company out by name.
The Department of Defense argued that a private company holding a "kill switch" over military technology was an unacceptable risk.
Anthropic sued. A federal judge in San Francisco blocked the ban.
Her words: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur for expressing disagreement with the government."
Two days before the bank CEO meeting, an appeals court in DC declined to pause the Pentagon's designation. The case continues. Oral arguments are set for May.
So on Tuesday, the two most powerful financial regulators in the United States sat down with Wall Street's most important executives — because of a model built by a company the US government is actively trying to destroy.
Surrender Dressed as Strategy
And then it got worse.
Hours after the meetings, Bloomberg reported that the Trump administration is now encouraging banks to test Mythos internally.
To use it. To find vulnerabilities in their own systems with it.
Read that again.
The same government that labeled Anthropic a national security threat in February is now asking the American financial system to depend on it for defense.
In February: "This company is a danger to national security."
In April: "Please use this company's product to protect national security."
That's not a policy evolution. That's a capitulation.
They need Anthropic more than they can afford to punish it.
The View from Ottawa
I keep thinking about what this looks like from Ottawa.
I'm based in Toronto. I work with the Canadian government on AI infrastructure questions.
I've spent months making the case that Canada needs sovereign compute, sovereign models, sovereign capability.
And this week, the Bank of Canada held an emergency meeting about a threat it didn't see coming, built by a company it has no leverage over, distributed to partners it wasn't consulted on, using infrastructure it doesn't own.
Canada wasn't at the table. Canada was in the audience.
That's not a cybersecurity problem. That's a sovereignty problem.
The Story Nobody Is Writing
Everyone is going to write about the cyber risk angle this week. "AI can hack banks." "Regulators scramble." "Is your money safe?"
That's the surface.
The real story is simpler and scarier.
One private company — not a government, not a military, not an international body — built something that forced two central banks to convene emergency meetings within the same week.
That company chose who gets access. That company set the terms. That company decided the timeline.
Anthropic has no treaty obligations. No democratic accountability. No mandate from any electorate.
And yet this week it had more influence over the global financial security agenda than most countries on earth.
What About Next Time?
I don't think Anthropic did anything wrong here.
They limited access. They briefed officials beforehand. They committed $100 million to defensive work.
By most accounts, this was a responsible rollout.
But that's the point.
This time it was a responsible company making careful choices. What about next time? What about the model after Mythos, from a company with less restraint, released without a Project Glasswing, without advance notice, without a guest list?
The governments that own their own AI capability — the compute, the talent, the models — will be the ones setting the terms.
They'll find their own vulnerabilities. They'll choose their own timelines. They'll sit at the table instead of finding out from a push notification.
The governments that don't will keep holding reactive meetings 72 hours after the fact.
One company summoned two central banks this week.
If that doesn't make the case for sovereign AI, nothing will.