One Rack Is a Cloud

Most AI founders don't know there's a third option.

They know AWS.

They know "rent a server from some cheap European hosting company."

They've never heard the word colocation, and the people selling it haven't bothered to tell them.

Here's the thing.

It's the same silicon.

In the same kind of building.

Sometimes literally the same building Amazon rents from.

Different logo on the cage.

A bill that's roughly half the size at the spend levels where it matters.

I've spent the last month figuring out how to put a GPU rack in a data center.

I'm based in Toronto, so my examples are Canadian — but the math, the buyer playbook, and most of the trade-offs apply almost identically whether you're in Austin, Brooklyn, or London.

I'll flag the parts where geography actually matters.

I'm not running my rack yet. Site visits happen in Q3.

Hardware goes online late 2026 or early 2027.

The receipts post comes later. This piece is the map.

If you've ever stared at a five-figure AWS bill and wondered if there's another way, this is for you.

What colocation actually is

Forget the jargon for a second.

A data center is just a big building optimized for two things:

Keeping computers running, and keeping them cool.

That's it.

It's a warehouse with very expensive air conditioning and very loud fans.

Companies like Amazon, Google, and Microsoft own a bunch of these buildings.

They fill them with their own hardware, then rent out slices of that hardware to you by the hour.

That's "the cloud."

You're a tenant on someone else's computer.

Colocation is the third option most people miss.

You walk into the same kind of building.

You rent a tall metal cabinet — they call it a "rack," it's about the height of a fridge.

You buy your own computers.

You install them in the cabinet.

You manage them yourself.

The building gives you four things: floor space, electricity, cooling, and a network port to plug into.

That's the whole product. There's no magic.

The bill is split between the hardware (yours, you bought it) and the building (theirs, you rent the space).

Add the two together and that's your monthly cost.

When founders ask me "wait, does that mean I'm just buying my own servers?" — yes. That's exactly what it means.

The thing that sounds old-fashioned is the thing that's now cheaper than the cloud at the right scale.

The math

Let me show you with real numbers.

A small AI shop's rack today, with 4 NVIDIA L40S GPUs (these are the workhorse cards for inference and small-to-medium training):

  • 4x L40S GPUs: ~$40K
  • A server beefy enough to run them, with 256GB of RAM: ~$15K
  • 50TB of fast storage: ~$8K
  • The networking gear to plug it into the internet: ~$5K

Total hardware cost: roughly $65-70K.

You amortize that over four years (because that's how long the hardware lasts before it's obsolete), and you're at about $1,400/month.

Now add the building cost.

The colo facility charges you for floor space, electricity, a wire to the internet, and bandwidth.

In a decent facility — Canadian, US, doesn't really matter for the order of magnitude — that runs about $1,000-2,000/month for a single GPU rack.

Your all-in cost: $2,500-3,500 a month.

Same workload on AWS, on the closest equivalent instance, in any North American region?

About $4,500/month if you commit for a year. About $7,660/month if you don't.

That's roughly a third cheaper than AWS reserved, 60% cheaper than AWS on-demand.

Same silicon. Same kind of building.

Sometimes literally the same building, with a different logo on the cage.

The break-even sits at roughly $5,000-10,000/month of cloud spend.

Below that, AWS's margin is fair pay for not thinking about hardware.

Above that, you're paying a margin larger than the cost of doing it yourself.

That's the whole pitch. Everything else is implementation detail.
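If you'd rather check the arithmetic than take my word for it, here's the whole comparison as a short Python sketch. Every figure is the rough estimate from above, not a quote from any vendor or from AWS's calculator:

```python
# Colo-vs-cloud math for a single 4x L40S rack.
# All dollar figures are the ballpark estimates from this section, not quotes.

hardware = {
    "4x L40S GPUs": 40_000,
    "server with 256GB RAM": 15_000,
    "50TB fast storage": 8_000,
    "networking gear": 5_000,
}
hardware_total = sum(hardware.values())           # ~$68K
amortization_months = 4 * 12                      # assume a 4-year useful life
hardware_monthly = hardware_total / amortization_months

colo_monthly_range = (1_000, 2_000)               # space, power, cooling, bandwidth
all_in = [hardware_monthly + c for c in colo_monthly_range]

aws_reserved, aws_on_demand = 4_500, 7_660        # closest-equivalent instance, per month

print(f"hardware: ${hardware_total:,} up front, ~${hardware_monthly:,.0f}/month amortized")
print(f"all-in colo: ${all_in[0]:,.0f}-{all_in[1]:,.0f}/month")
print(f"vs reserved:  {1 - all_in[1] / aws_reserved:.0%} to {1 - all_in[0] / aws_reserved:.0%} cheaper")
print(f"vs on-demand: {1 - all_in[1] / aws_on_demand:.0%} to {1 - all_in[0] / aws_on_demand:.0%} cheaper")
```

Change the amortization window or the colo quote and the savings move, but they don't disappear until your cloud bill gets small.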

Power is the real product

Here's something most founders learn the hard way.

A normal web hosting rack — like the kind that runs a Shopify store — pulls about 4-7 kilowatts. Roughly the same as a few electric heaters running at once.

A modern GPU rack pulls 10-30 kW. Sometimes more.

Big H100 deployments can hit 50+ kW.

Most colocation contracts assume the lower number.

The advertised rate quietly assumes you're a normal customer running normal hardware.

The moment you say "I want to run GPUs," everything in the quote shifts.

This is the thing nobody mentions until you've already signed.

Power is what a data center actually sells.

Floor space is almost free — there's lots of it.

The expensive parts are the electricity itself, and the cooling that has to work harder when you're pulling 30 kW out of one cabinet instead of 5.

So when you're shopping for colo, the real question isn't "how much does a rack cost?"

It's "how many kilowatts can this facility actually deliver to my rack, and what does it charge per kilowatt-hour?"

A case study: the Quebec story just changed

Every region has its own version of this story.

Here's the one I'm watching most closely, because it's where I'm building. The lesson generalizes.

The Canadian colo pitch has always been:

Quebec has the cheapest power in North America.

99% of it comes from hydro dams.

There's tons of capacity. Industrial customers pay around $0.05-0.06/kWh, a fraction of what they'd pay in Virginia or Texas.

That's the structural arbitrage that built OVH's Beauharnois campus, AWS's Montreal region, and a long list of others.

But in February 2026, Hydro-Québec proposed a brand new rate, aimed specifically at data centers.

Any facility consuming more than 5 megawatts (we'll come back to what that means in a second) gets charged about 13¢/kWh.

Roughly double the old rate. Crypto operations got hit even harder, at 19.5¢.

The new rates are expected to take effect in the second half of 2026, pending regulatory approval.

The thinking, as I read it:

Quebec has decided that hosting US hyperscalers at industrial rates is no longer a fair trade for a finite, low-carbon resource. So they're repricing.

Now here's the part that matters for founders.

5 megawatts is enormous. A single founder rack pulls 5-30 kilowatts.

Even at the high end, that's well under one percent of the threshold.

The new tariff doesn't apply to you.

It applies to multi-million-dollar deployments operated by companies with more lawyers than engineers.
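Here's the scale gap, sketched with the same rough numbers (the facility figure is illustrative, not any specific operator's bill):

```python
# Scale check: the proposed tariff targets whole facilities, not founder racks.
THRESHOLD_KW = 5_000                          # 5 MW, expressed in kilowatts
rack_kw = 30                                  # top of the single-rack range
print(f"one rack is {rack_kw / THRESHOLD_KW:.1%} of the threshold")

# What the repricing means for a facility that does cross it (illustrative only).
hours = 24 * 30
old_rate, new_rate = 0.06, 0.13
extra = THRESHOLD_KW * hours * (new_rate - old_rate)
print(f"a facility right at 5 MW pays roughly ${extra:,.0f} more per month")
```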

So the practical effect is this: Quebec just made the cost of hyperscale operation in the province meaningfully worse, while leaving the small operator's economics basically intact.

That's not a deliberate founder-friendly policy. It's a side effect.

But it's a side effect worth noticing. The big guys just got a worse deal. You didn't.

If you're in the US, watch your own state. Virginia, Texas, Washington, Oregon — every jurisdiction with cheap-power data center clusters is having some version of this conversation.

The political weather is changing. The carve-outs at the bottom of the market are usually where founders quietly win.

Who actually needs this

Three buckets. Honestly.

Under $3,000/month on cloud compute?

Stay where you are.

The math doesn't work.

The ops burden isn't worth it.

The cloud's flexibility is a real product, and you're correctly buying it.

Between $3,000 and $10,000/month?

Get a quote, but don't necessarily move.

The math is borderline.

If your workload is predictable — same model running 24/7, steady inference traffic — colo starts looking attractive.

If it's spiky — training runs that finish in days, then idle for two weeks — public cloud is still the right call.

Utilization decides, not the size of your bill.
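Here's a rough sketch of why utilization is the deciding variable. Colo is mostly a fixed cost whether the GPUs are busy or idle; on-demand cloud only bills for the hours you actually run. The figures reuse the ballpark estimates from the math section:

```python
# Effective cost per busy hour, colo vs on-demand cloud, at different duty cycles.
HOURS_PER_MONTH = 720
colo_monthly = 3_000                          # all-in colo estimate, fixed
cloud_hourly = 7_660 / HOURS_PER_MONTH        # on-demand equivalent, ~$10.6/hour

for utilization in (0.25, 0.50, 1.00):
    busy_hours = HOURS_PER_MONTH * utilization
    colo_per_busy_hour = colo_monthly / busy_hours
    winner = "cloud" if cloud_hourly < colo_per_busy_hour else "colo"
    print(f"{utilization:.0%} utilized: colo ${colo_per_busy_hour:.2f}/hr "
          f"vs cloud ${cloud_hourly:.2f}/hr  -> {winner} wins")
```

A rack that's busy a quarter of the time costs more per useful hour than on-demand. A rack that's busy around the clock costs less than half.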


$10,000+/month on predictable workloads?

You're paying margin, not buying compute.

At this point, not getting a quote is almost professional negligence.

Whether you actually move depends on your ops capacity and risk tolerance.

But the conversation has to happen.

The fourth bucket: founders who want to own the stack

There's a fourth bucket the spreadsheet doesn't capture.

Some founders specifically want to own their infrastructure.

Not because the math works. Because the product changes.

Sovereignty. Customer trust. Regulatory positioning.

The ability to tell an enterprise buyer "your data sits on hardware I own, in a building I can walk to, under [Canadian / US / EU] jurisdiction" — instead of "we're a tenant in someone else's cloud, and that cloud answers to a foreign government."

For this bucket, the math doesn't have to work cleanly.

Owning the metal is the product.

The colo bill is cost-of-goods, not overhead.

A 30% saving over AWS is a nice bonus, not the reason for the move.

The reason is structural. Every layer of someone else's stack is a layer of someone else's leverage over your business.

That's where my play sits.

It's also where, I think, more AI founders should be looking than currently are.

The customers who care about sovereign infrastructure aren't theoretical.

Public sector buyers care. Regulated industries care.

And an increasing number of enterprises who've watched the geopolitics of the last 18 months don't want their workloads sitting on whichever cloud their adversary's government can pressure.

That's a real buyer pool. And it's growing in every market I can see.

Why most who could, won't

The math works. Most companies still don't move.

The reasons are real, and you should know them.

Ops burden.

Someone is on-call when a hard drive fails at 3 AM.

Someone has to physically reseat a cable when the network port flakes out.

The facility provides "remote hands" services for an extra fee — meaning a technician who'll plug in cables for you — but that's not the same as having someone on your team who actually understands your stack.

(Caveat: if your rack is at a downtown carrier hotel and you live or work in that downtown, that emergency drive is a fifteen-minute walk.

The ops objection partially evaporates for the specific founders who can walk to their own hardware. More on that below.)

Accountability.

When AWS goes down, it's AWS's problem.

When your rack goes down, it's your rack.

That's part of what cloud margin buys — the ability to point at someone else when something breaks. Most founders correctly value this.

Capital structure.

Cloud is operating expense — show up monthly on the credit card.

Colo is capital expense (the hardware) plus operating expense (the colo bill).

Shifting from a $5K/month AWS bill to a $70K hardware purchase plus an ongoing colo lease changes the conversation with investors, even if it saves money over three years.
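The same trade in spreadsheet form, using the numbers from earlier (all of them rough):

```python
# Three-year total cost, capex-vs-opex framing. Same ballpark assumptions as above.
MONTHS = 36
cloud_total = 5_000 * MONTHS                   # steady $5K/month cloud bill
hardware_capex = 70_000                        # paid up front, on day one
colo_total = hardware_capex + 1_500 * MONTHS   # plus ~$1.5K/month colo lease
print(f"cloud over 3 years: ${cloud_total:,}")
print(f"colo over 3 years:  ${colo_total:,} (${hardware_capex:,} of it up front)")
```

Cheaper over the period, but the shape of the spend is completely different, and that shape is what the investor conversation is really about.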

Forecasting risk.

You buy hardware for today's workload. Your needs change.

Now you're stuck with the wrong configuration.

The cloud lets you reshape your stack in an afternoon. Colo locks you into the depreciation cycle.

None of these are theoretical. They're why a 30-50% cost saving doesn't move most operators.

It's a real trade between margin and flexibility. Most founders are correctly trading flexibility for margin until their workload matures.

The map (with my Canadian bias)

Colo splits into two physical realities.

They're for different buyers, and the pattern repeats almost exactly in every market.

Downtown carrier hotels.

"Carrier hotel" is just industry slang for a building where most of the country's internet connections meet.

Every major market has one or two.

In NYC it's 60 Hudson Street. In LA it's One Wilshire. In Seattle it's the Westin Building. In Chicago it's 350 East Cermak. In Frankfurt it's Interxion FRA. In London it's Telehouse North.

In Canada, the big one is 151 Front Street West in Toronto — a few blocks from the TD Centre, walking distance from most downtown offices.

It's where Cologix runs TOR1, where Equinix runs TR1 across multiple floors, where Digital Realty has YYZ12, and where a long list of smaller providers operate.

Cologix's TOR2 and TOR3 sit a short walk away at 905 King Street West.

Equinix TR4 at 100 Wellington Street West sits inside the financial district proper.

Montreal has the same pattern at 1250 René-Lévesque West, where Cologix MTL3 hosts the Montreal Internet Exchange.

The thing nobody mentions: these are office buildings.

With locked doors, badge readers, and very loud HVAC. They're walkable from most founders' offices.

If something breaks, you can be on-site before the Uber clears the block.

The mental model of "data center in a field outside Quincy, Washington" is wrong for these.

Hyperscale corridors.

This is where the cliché image of a data center comes from.

Northern Virginia (Ashburn / Loudoun County) is the global capital — something like 70% of internet traffic touches a building in that one county.

Beauharnois, just outside Montreal, is where OVH runs its BHS campus — the building sits 300 meters from a Hydro-Québec dam, and the dam supplies the power directly.

Quincy in Washington State, central Oregon, the Texas Triangle, Iowa cornfields.

Cheaper power per kilowatt, more space, less network density.

You're not driving there.

This is where the new 13¢ Quebec tariff will hit hardest, since this is where the multi-megawatt deployments live — and the equivalent fights are starting to play out in Virginia and Texas right now.

If I were starting today, my default for a first rack is downtown.

In Toronto: Cologix TOR1 at 151 Front, or one of the Equinix Toronto sites.

In NYC: 60 Hudson. In Seattle: the Westin Building.

The premium is worth it while you're still learning what you actually need.

The network density at a carrier hotel is genuinely useful if you care about peering with other networks.

The corridor play comes later, when you're scaling and the power bill starts to dwarf everything else.

A note on the buying process.

None of these facilities advertise to founders.

They sell to enterprise procurement teams. You'll have to email sales.

The first quote will assume you're a much bigger buyer than you are.

That's a feature of the market, not a problem with you.

Push back, ask for the actual rack-and-cabinet pricing, and the second quote will be more honest.

The thing nobody tells you

Owning your infrastructure isn't an industrial-scale activity anymore.

It's a downtown-office, walk-to-the-rack, founder-grade activity.

The image of "owning the metal" as something only hyperscalers do — that's the precise mental block keeping founders renting forever.

You're fifteen minutes from your servers.

That's not a hyperscaler. That's a founder with a key card.

This is what infrastructure sovereignty actually looks like at the founder level. Not a manifesto. A rack.

If you've never heard of colocation, the goal is that you now know it exists.

If you're spending real money on cloud and didn't know there was a third option — now you do.

The cloud is a building with hardware in it and someone else's logo on the invoice. Past a certain spend, that logo is no longer the cheapest thing you're paying for.
