Big Tech Owns Your Compute. Here’s Who’s Taking It Back.
While Big Tech races to build ever-larger data centers, 80% of existing GPU capacity sits idle. io.net is betting that the future of AI compute looks nothing like the past.
Big Tech Is Building Fast. Still Not Fast Enough.
The numbers coming out of the hyperscalers are staggering. An estimated $650 billion is being spent on AI data center infrastructure in 2026 alone, with Amazon, Microsoft Azure, and Google Cloud racing to stake out compute real estate across the United States and beyond. Headlines about planned campuses have become routine. So have the headlines about delays.
Grid constraints, community opposition, soaring construction costs, and permitting backlogs have pushed back roughly half of planned US data center openings. The irony is sharp: the industry most loudly declaring a compute shortage is struggling to build its way out of one.
But there is a more uncomfortable truth underneath the construction race. The data centers that already exist are chronically underused. Industry estimates suggest that around 80% of global GPU capacity goes unutilized at any given time. Compute workloads are spiky by nature. A company trains a model, then the chips sit. Inference traffic surges and then falls quiet. The infrastructure built for peak demand idles through the troughs.
“Instead of having to build lots of data centers all over the world constantly, we should be juicing the data centers we have more effectively.”
Jack Collier, CMO, io.net

It is this inefficiency, not just the cost, that io.net was built to address. The company aggregates spare GPU capacity from secondary data centers, mining operations, and consumer-grade hardware, pooling it into a single marketplace that anyone can access. Three providers — AWS, Azure, and Google Cloud — control roughly 70% of global compute. The remaining 30% is fragmented across thousands of secondary operators and consumer hardware. io.net connects that fragmented supply into a single, accessible network.
Under $2 an Hour for an H200. That Is Not a Typo.
The flagship claim io.net makes is cost. H200 GPUs, among the most powerful chips available for AI workloads, are listed on the io.net platform today for under $2 per hour. The same hardware on AWS or Google Cloud runs $25 to $30 per hour. For a startup burning 40 to 60 percent of its operating budget on compute, that difference is not marginal. It is existential.
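At those listed rates the gap compounds quickly. A rough back-of-the-envelope sketch (the cluster size and the ~$27.50 hyperscaler midpoint are illustrative assumptions, not io.net figures):

```python
HOURS_PER_MONTH = 24 * 30  # 720 hours

def monthly_cost(gpus: int, rate_per_hour: float, utilization: float = 1.0) -> float:
    """Monthly spend for a cluster billed by the GPU-hour."""
    return gpus * rate_per_hour * HOURS_PER_MONTH * utilization

# Assumed example: an 8x H200 cluster running around the clock.
io_net = monthly_cost(8, 2.00)        # sub-$2/hr listing
hyperscaler = monthly_cost(8, 27.50)  # midpoint of the quoted $25-30/hr range

print(f"io.net:      ${io_net:,.0f}/mo")       # $11,520/mo
print(f"hyperscaler: ${hyperscaler:,.0f}/mo")  # $158,400/mo
print(f"ratio:       {hyperscaler / io_net:.2f}x")
```

At these assumed rates the same cluster costs nearly 14x more on a hyperscaler, which is the scale of gap that makes compute an existential budget line for a startup.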
Leonardo.ai, the AI imaging company recently acquired by Canva, is perhaps io.net’s most prominent case study. The team uses io.net for inference workloads and has credited the cost savings with giving them room to innovate faster. That kind of reference point matters when trying to convince web2 companies that decentralized infrastructure is not an experiment.
And that, according to io.net CMO Jack Collier, is where most of the company’s revenue actually comes from today. “Most of our revenue comes from web2,” he noted, “people who don’t even know that they’re building on crypto rails.” The blockchain layer, in other words, is infrastructure, not identity.
Lock-in is real. Once a business has built its stack on AWS or Azure, the connective tissue runs deep through every service, billing integration, and workflow. Extraction is costly and disruptive. Add to that the narrative pressure from hyperscalers themselves, who have significant marketing budgets dedicated to reinforcing fears of GPU shortages, and the inertia becomes easier to understand. io.net’s answer is to let the price differential speak for itself and build the track record one customer at a time.
When AWS Goes Down, Everything Goes Down. That Is the Problem.
Centralized infrastructure carries a centralized failure mode. When a major cloud provider experiences an outage, the cascade is immediate and broad. Thousands of services, often unrelated to one another, go dark simultaneously because they all share the same dependency.
Decentralized compute inverts this logic. io.net customers can distribute their workloads across GPU clusters in four or five countries simultaneously. If one node fails, traffic reroutes. For global products, this also enables something else: local inference. A company serving customers in Japan can run its models from Japan. Customers in South Africa get inference from South Africa. Latency drops. Performance improves. The infrastructure adapts to geography rather than forcing geography to adapt to infrastructure.
This geographic flexibility, available today across more than 138 countries, is one of io.net’s less-discussed advantages. It quietly solves a problem that hyperscalers solve only expensively and slowly, by building new regional data centers.
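The failover-plus-locality behavior described above reduces to a simple routing policy: prefer the lowest-latency healthy cluster, and let a failure fall through to the next-best region. Everything in this sketch (the `Cluster` type, the regions, the latency figures) is a hypothetical illustration, not io.net's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    region: str
    latency_ms: float  # measured from the end user
    healthy: bool

def route(clusters: list[Cluster]) -> Cluster:
    """Send inference to the nearest healthy cluster; if a node fails,
    the next request simply lands on the next-best region."""
    live = [c for c in clusters if c.healthy]
    if not live:
        raise RuntimeError("no healthy clusters available")
    return min(live, key=lambda c: c.latency_ms)

clusters = [
    Cluster("tokyo", 12.0, True),         # local inference for Japanese users
    Cluster("johannesburg", 180.0, True),
    Cluster("frankfurt", 210.0, True),
]
print(route(clusters).region)  # tokyo

clusters[0].healthy = False    # Tokyo node fails
print(route(clusters).region)  # johannesburg
```

The point of the sketch is the inversion the article describes: the routing decision adapts to where the user is, rather than the user adapting to where the single data center happens to be.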
The Incentive Dynamic Engine: From Inflation to Utility
Most decentralized physical infrastructure networks, DePIN projects in crypto parlance, share a structural problem. They incentivize suppliers by minting new tokens and distributing them as rewards. When token prices rise, suppliers flood in. When prices fall, they leave. The network’s supply is held hostage to speculation rather than anchored to real demand.
io.net has responded with what it calls the Incentive Dynamic Engine, or IDE, scheduled for full implementation in Q2 2026. The shift is fundamental: instead of receiving a fixed amount of IO tokens each month, suppliers are compensated in proportion to actual demand on the network. Payments are denominated in the USDC-equivalent value of IO, meaning suppliers receive stable dollar-value compensation regardless of token price fluctuations.
Revenue above what is needed to pay suppliers flows into a reserve vault. That vault absorbs volatility. In price downturns it subsidizes supplier rewards. In stronger markets, excess emissions from that vault are burned. io.net has committed to burning at least 50 percent of those excess emissions permanently, meaning the total IO supply contracts over time as the network grows.
| IDE Change | Detail | Status |
|---|---|---|
| Network model | Supply-driven → demand-driven | Q2 2026 |
| Supplier payments | USDC-equivalent IO (stable dollar value) | Q2 2026 |
| Emissions burn | 50% minimum of vault excess | Ongoing |
| Network direction | Inflationary → deflationary over time | By design |
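The vault mechanics described above can be sketched as a single settlement period. This is a simplified illustration of the model as the article describes it, assuming a flat 50% burn of each period's surplus; the function name and all figures are invented, not io.net's implementation:

```python
def settle_epoch(revenue_usd: float, obligation_usd: float,
                 vault_usd: float, burn_share: float = 0.5):
    """One settlement period of the reserve-vault model.

    Suppliers are owed a stable dollar value (obligation_usd).
    Surplus revenue is split between a permanent burn and the vault;
    in a downturn, the vault subsidizes supplier rewards.
    Returns (new_vault_balance, amount_burned)."""
    surplus = revenue_usd - obligation_usd
    burned = 0.0
    if surplus > 0:
        burned = surplus * burn_share       # >= 50% burned: supply contracts
        vault_usd += surplus - burned       # remainder buffers future downturns
    else:
        # Downturn: vault tops suppliers up (to the extent it can).
        vault_usd = max(0.0, vault_usd + surplus)
    return vault_usd, burned

# Strong market: $1.0M demand revenue vs $0.6M owed to suppliers.
vault, burned = settle_epoch(1_000_000, 600_000, vault_usd=0.0)
# burned == 200,000; vault == 200,000

# Downturn: $0.5M revenue vs $0.6M owed; the vault covers the gap.
vault, burned = settle_epoch(500_000, 600_000, vault_usd=vault)
# vault drops to 100,000; nothing is burned
```

The structural consequence is the one the table summarizes: in every period where demand outruns supplier obligations, total IO supply shrinks, tying the token to network utility rather than emission schedules.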
“Tokens aren’t just there as an investment vehicle. They’re there to power a trustless network.”
Jack Collier, CMO, io.net

The result is a tokenomic model where the value of IO is tied directly to the utility of the network it powers, not to sentiment cycles. For anyone evaluating whether a blockchain project is serious, that kind of alignment is among the clearest signals available.
AI Agents That Buy Their Own Compute
One of io.net’s more forward-looking product moves is Agent Cloud, launched in March 2026. The premise is simple and slightly startling: AI agents, which already automate enormous swaths of software work, can now autonomously purchase the compute power they need to run. No human in the loop. No approval workflow.
Agent Cloud is built on a Model Context Protocol library created by io.net. An agent with access to a wallet can query the io.net marketplace, identify the GPU configuration it needs, and complete the purchase automatically. Guard rails prevent runaway spending, with limits on how many devices can be acquired and for how long.
The concept points toward something larger. If AI agents are going to be first-class economic participants, they need infrastructure that is programmatically accessible. Centralized cloud providers require account creation, billing agreements, and human oversight at the procurement layer. A permissionless marketplace, accessible via API and payable in crypto or fiat, removes those friction points entirely.
“Our CEO talks quite passionately about a world where AI agents are being spun up themselves and are able to purchase their own compute power and run entirely autonomously,” Collier said. It is a vision of compute as a commodity that intelligent systems consume on demand, the same way applications consume electricity or bandwidth.
The Demand Curve Only Runs One Direction
The case for decentralized compute rests on a straightforward projection: AI demand will grow faster than centralized infrastructure can be built, and the inefficiency of today’s capacity utilization leaves enormous room for networks that can aggregate and reallocate idle supply. io.net is not alone in making this argument, but it is among the furthest along in proving it with revenue.
io.net has gone from zero to $25 million in annualized revenue in roughly a year of serious commercial operation. Against a global data center market measured in the hundreds of billions, that leaves a long road ahead. But the trajectory is real, the product is live, and the customers are increasingly the kind of companies that do not think of themselves as crypto users at all.
That quiet expansion — blockchain as invisible infrastructure rather than explicit identity — may be the most durable growth story in the space. Spin up a cluster at io.net in two minutes. No waitlist. No KYC labyrinth. Just compute, available to whoever needs it.
