Big Tech's $1.1 Trillion Cloud Computing Backlog
An anonymous reader shares a report: Amazon, Google, and Microsoft each reported hundreds of billions in RPO (remaining performance obligations) -- signed contracts for cloud computing services that can't yet be filled and haven't yet hit the books. Collectively, the big three cloud providers reported a $1.1 trillion backlog of revenue.
neo-mining (Score: 5, Informative)
by noshellswill ( 598066 ) on Friday February 06, 2026 @01:33PM (#65973236)
Cloud/*.ai over-reach in media/data/IP/energy really strikes home to me. The more cloud/*.ai tends to consume ALL resources, the more "successful" it becomes. I grew up in Scranton, PA, where coal-mining dominated local culture and consumed/destroyed most other forms of employment. Not just during 1860-1960, but long after, when underground coal fires polluted the ground surface and air. Scranton became ... and remains an economic/cultural shell (it does manufacture 155 mm shells for Ukraine). I see the grasp of cloud/*.ai following the same meme, but on a nationwide scale. Coal mining raped a gorgeous Lackawanna Valley and tried swallowing the Susquehanna River, leaving nothing behind but a few wealthy families and toxic waste. Now is the time to ensure cloud/*.ai doesn't do the same nationally.
Backlog (Score: 5, Informative)
by lazarus ( 2879 ) on Friday February 06, 2026 @01:59PM (#65973290)
I can't speak to all of the areas that contribute to AI backlog (like capital allocation, systems integration, networking availability, etc.). But from a data centre standpoint it is a real struggle. The general timeline to get from requirements to signature on a data centre lease is about three months if all goes well (assuming you are not self-performing). Once that is signed, a DC project takes about two years from conception to RFS (ready-for-service), when you can start rolling in racks. A LOT goes into that timeframe: ordering LLE (Long-Lead Equipment; generators have a 24+ month lead time right now), securing permits, securing power, securing an EOR (Engineer of Record) and getting the design done, land preparation, construction crews and materials, etc.
And that is all just to get a "powered shell": what we call a building with at least two diverse sources of utility power, gens, transformers, power switchgear, static UPSs (if they are being used), chillers, pumps and piping, etc. You still can't roll in racks until you have a fit-out design (PDUs, RPPs, whips, busway, FWUs (Fan Wall Units) or CRAHs (Computer Room Air Handlers), temp and humidity sensors, etc.). It takes about a YEAR from the time you get the fit-out design to the time when you can roll in racks (but it is possible to line up fit-out work with RFS if you get the design early enough).
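Put together, the durations above make for a rough back-of-envelope schedule. A minimal sketch in Python, using only the approximate figures from this comment (the phase names and the overlap assumption are mine, not industry constants):

```python
# Rough data centre deployment timeline, per the figures above.
LEASE_MONTHS = 3     # requirements -> signed lease (best case)
SHELL_MONTHS = 24    # signed lease -> RFS ("powered shell")
FITOUT_MONTHS = 12   # fit-out design in hand -> racks rolling in

# Worst case: fit-out design only lands at RFS, so phases stack.
sequential = LEASE_MONTHS + SHELL_MONTHS + FITOUT_MONTHS

# Best case: fit-out design arrives early enough that fit-out work
# runs in parallel with the shell build and finishes by RFS.
overlapped = LEASE_MONTHS + max(SHELL_MONTHS, FITOUT_MONTHS)

print(f"sequential: {sequential} months (~{sequential / 12:.1f} years)")
print(f"overlapped: {overlapped} months (~{overlapped / 12:.1f} years)")
```

So even with everything going right and the fit-out fully overlapped, you're looking at well over two years from requirements to racks; let the fit-out design slip to RFS and it's more than three.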
Everybody is looking for creative solutions to speed things up, but data centre deployments are complex (I deal with well over 1000 individual requirements per project) and there are a lot of supply chain constraints. Most of the hyperscalers are juggling 20+ simultaneous projects; it's burning people out, and there is a lot of churn in the business (which makes it worse).