
The Most Important Documents Nobody Reads

3 min read
George Pu



Everyone's watching the stock market.

S&P down 3%. Nvidia earnings. DeepSeek wiped $1 trillion in a day. Software stocks lost $2 trillion in a week. Fed minutes. CPI print. Jobs report.

That's what makes the news. That's what people react to. That's where the attention goes.

Meanwhile, the documents that actually tell you what's coming sit on corporate websites with zero engagement.

Nobody reads them.

I do.

What the Builders Are Saying

Three weeks ago, Anthropic — the company that builds Claude, the AI I use to run a company with 5 people — published their Frontier Safety Roadmap.

It's buried on their website under "Research." No press tour. No viral thread. Just a quietly published document laying out what they think is coming and what they're doing about it.

Here's the line that should have made headlines:

"We believe it is plausible, as soon as early 2027, that our AI systems could fully automate, or otherwise dramatically accelerate, the work of large, top-tier teams of human researchers."

Read that again.

The company building the AI is saying, publicly and on the record, that full automation of elite research teams is plausibly about a year away.

Not a prediction from a Twitter account with a galaxy brain emoji. Not a think tank white paper. The actual lab. The people training the models. They put it in writing.

And almost nobody noticed.

It Gets Wilder

Same document. Different section.

Anthropic describes what amounts to a compute sovereignty hierarchy. They talk about protecting model weights from "the world's best-resourced attackers." They talk about security against nation-state threats. They talk about models that could accelerate weapons development, energy research, and robotics.

They're describing a world where AI models are strategic national assets. Where the security around a language model needs to rival the security around nuclear technology.

That's not a metaphor. That's their actual operational framework. Published February 2026. On their website. For free.

Nobody tweeted about it.

And it's not just Anthropic. Google DeepMind publishes technical papers describing capabilities most people won't encounter for 2-3 years. OpenAI's system cards contain more honest information about AI limitations than any news article ever written about them. Meta's research papers lay out the architecture of models that will reshape entire industries.

All public. All free. All written in a style specifically designed to be ignored by normal people.

Dense. Technical. Corporate. Buried under 40 pages of caveats and legal language.

That's not conspiracy. It's incentives. These companies publish because they have to — for regulators, researchers, their own accountability. They don't publish to go viral.

But the information is there. If you read it.

The Rearview Mirror Problem

Now look at what the rest of the world is paying attention to.

A CEO's earnings call quote taken out of context. Which company laid off how many people this week. Whether the Fed raises rates by 25 or 50 basis points.

These are lagging indicators. They tell you what already happened. They're the rearview mirror.

The roadmaps, the research papers, the technical benchmarks, the safety assessments — those are the windshield. They tell you what's coming.

And almost everyone is driving by looking in the mirror.

The old measurement system — quarterly earnings, stock indices, unemployment rates, GDP growth — was built for an economy that changes slowly. Incrementally. Predictably.

We're not in that economy anymore.

The changes are structural. They're discontinuous. And they're being announced in advance, in documents that nobody reads, by the people who are building them.

What I Actually Do

Every week, I spend a few hours reading things that aren't designed to be read by people like me.

Research papers. Safety reports. Government policy drafts. Patent filings. Technical benchmarks.

I'm not a researcher. But I can read. And reading the primary sources puts you years ahead of reading the coverage of the primary sources — because the coverage strips out the parts that matter most and replaces them with whatever generates clicks.

Then I write about it. In plain language. In short paragraphs. Without the jargon.

Because the gap between "what the labs know" and "what normal people understand" is growing wider every month. And in that gap is where the real risk lives — and the real opportunity.

The Question Nobody's Asking

The most important document published this month wasn't an earnings report.

It was a safety roadmap from a company most people have never heard of, saying that full automation of elite research teams is plausible within a year.

And the most important question isn't "what did the stock market do today?"

It's: what did the people building the future publish today that I haven't read yet?

Almost nobody is asking that question.

That's the edge.