Elon Musk has made a bold claim about computing in orbit.
Speaking on a podcast with Dwarkesh Patel and John Collison, he said, “Mark my words… In 36 months, probably closer to 30 months, the most economically compelling place to put AI will be space.” Soon after, SpaceX, now merged with xAI, filed an application with the US Federal Communications Commission for an orbital data centre constellation of up to one million satellites between 310 and 1,200 miles above Earth.
The idea is based on access to constant solar energy and freedom from land and grid limits. Musk has said space-based facilities could surpass those on Earth as the most cost-effective way to power AI within three years. Sundar Pichai of Google has said orbital data centres could become reality within a decade. Sam Altman has taken a different line, saying, “We’re simply not there yet.”
Rebekah Reed, former NASA associate director and now at Harvard University’s Program on Emerging Technology, Scientific Advancement, and Global Policy, has questioned near-term expectations. Writing in the Financial Times, she said, “Treating orbit as a workaround for AI’s current energy-hungry training needs is, as OpenAI co-founder Sam Altman recently put it, ‘ridiculous.’ Orbital data centres are many years, perhaps decades, away.”
What Are The Technical Barriers To Space Data Centres?
Cost is the biggest barrier. Reed wrote that launch costs would need to fall below $200 per kilogram to make orbital data centres economically viable. She added, “That threshold isn’t expected until the mid-2030s.” SpaceX’s Falcon 9 has already brought launch costs down from about $11,500 per kilogram to around $1,500 per kilogram, according to the Breakthrough Institute. Starship is projected to reach $100 to $200 per kilogram, though that figure is described as optimistic.
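The gap between those price points can be made concrete with some back-of-envelope arithmetic using the figures above. The per-satellite mass here (2,000 kg) is an illustrative assumption, not a number from any filing.

```python
# Back-of-envelope launch economics using the per-kilogram figures
# cited in the article. The satellite mass is a hypothetical value
# chosen purely for illustration.

COST_PER_KG = {
    "Falcon 9 (current)": 1_500,    # USD/kg, per the Breakthrough Institute
    "Starship (projected)": 150,    # midpoint of the $100-200/kg projection
    "Viability threshold": 200,     # Reed's cited break-even level
}

SAT_MASS_KG = 2_000  # assumed mass of one data-centre satellite

for label, usd_per_kg in COST_PER_KG.items():
    launch_cost = usd_per_kg * SAT_MASS_KG
    print(f"{label}: ${launch_cost:,.0f} to orbit one satellite")
```

On these assumptions, launch alone costs roughly ten times more today than at the projected Starship price, which is why the threshold date matters so much.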
Then there’s the issue of radiation. On Earth, the atmosphere shields chips from high-energy particles. In orbit, radiation can cause “bit flips” or permanent circuit damage. The Breakthrough Institute reports that Meta’s training of its Llama 3 model on NVIDIA H100 chips saw 419 unexpected interruptions in 54 days. That happened on Earth. In space, operators would face radiation-induced faults on top of normal hardware failures.
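To put that statistic in perspective, the interruption rate works out as follows (pure arithmetic on the two figures cited above):

```python
# Failure-rate arithmetic for the Llama 3 training run cited above:
# 419 unexpected interruptions over 54 days, measured on Earth,
# before any radiation-induced faults are added on top.

interruptions = 419
days = 54

per_day = interruptions / days
hours_between = 24 / per_day

print(f"{per_day:.1f} interruptions per day")          # ~7.8 per day
print(f"one roughly every {hours_between:.1f} hours")  # ~3.1 hours
```

Roughly one interruption every three hours, under benign terrestrial conditions, is the baseline an orbital cluster would start from.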
The Breakthrough Institute states, “We simply don’t have methods of protecting chips from radiation exposure, maintaining acceptable computing uptimes, and resupplying a facility with new components that are remotely realistic for a large-scale, commercial computing enterprise.” Radiation-hardened chips exist, but they lag behind the latest AI chips in performance.
A November 2025 Google publication claimed its Trillium chips could operate for five years in orbit. The Breakthrough Institute said this conclusion came from an accelerated test using protons at a single energy level, far narrower than the radiation environment in space. The first test of an AI-grade chip in orbit began in November 2025, when Starcloud launched an NVIDIA H100 into space. Results will take years.
What About Maintenance And Debris?
On Earth, companies replace chips every 2-3 years. In orbit, that would require launching an entirely new constellation. Old satellites could be deorbited towards Point Nemo in the Pacific Ocean, burning up on re-entry. Even controlled re-entries carry a very low, but non-zero, chance of debris reaching land.
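Combining that refresh cycle with the one-million-satellite figure from the FCC filing gives a sense of the ongoing launch burden. The per-satellite mass here is again an illustrative assumption.

```python
# Replacement-cadence arithmetic: the one-million-satellite constellation
# from the FCC filing, refreshed on the 2-3 year hardware cycle described
# above. The satellite mass is a hypothetical value for illustration.

constellation = 1_000_000
refresh_years = 2.5           # midpoint of the 2-3 year refresh cycle
sat_mass_kg = 2_000           # assumed mass per satellite

replacements_per_year = constellation / refresh_years
mass_per_year_kg = replacements_per_year * sat_mass_kg

print(f"{replacements_per_year:,.0f} replacement satellites per year")
print(f"{mass_per_year_kg / 1_000:,.0f} tonnes to orbit per year")
```

Even with generous assumptions, sustaining the constellation means launching hundreds of thousands of satellites every year just to stand still.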
There is also the threat of Kessler syndrome, a chain reaction in which debris from one collision triggers further collisions. Reed wrote that scaling orbital data centres would exacerbate orbital debris and degrade views of the night sky.
Environmental costs could be higher, too. Research from Saarland University in Germany found that the carbon footprint of space-based data centres, covering manufacturing, launch and disposal, could exceed that of terrestrial facilities. The study said, “Results show that, even under optimistic assumptions, in-orbit systems incur significantly higher carbon costs, up to an order of magnitude more than terrestrial equivalents, primarily due to embodied emissions from launch and re-entry.”
Is There Any Case For Computing In Space?
The Breakthrough Institute makes a distinction between large AI training clusters in orbit and smaller scale satellite edge computing. Satellites collect terabytes of raw data but can only transmit a few gigabytes per pass to ground stations. Processing data in orbit and sending down results could make sense.
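The bottleneck the Institute describes can be sketched numerically. The specific figures below (terabytes collected per day, gigabytes per ground-station pass, passes per day) are illustrative assumptions consistent with the article's "terabytes collected, gigabytes downlinked" framing, not reported values.

```python
# Sketch of the downlink bottleneck behind satellite edge computing.
# All three figures are illustrative assumptions, not reported values.

collected_gb = 5_000      # assumed raw data gathered per day (5 TB)
per_pass_gb = 8           # assumed downlink capacity per ground-station pass
passes_per_day = 10       # assumed ground-station passes per day

downlinked_gb = per_pass_gb * passes_per_day
fraction = downlinked_gb / collected_gb

print(f"Downlinked: {downlinked_gb} GB of {collected_gb} GB collected "
      f"({fraction:.0%} of raw data reaches the ground)")
```

If only a few percent of raw data can reach the ground, processing in orbit and transmitting results rather than raw data is the obvious response, which is the narrow case the Institute endorses.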
The Institute ends by saying, “The economics, technological maturity, and tangible market demand for satellite edge computing actually make sense.” That is a narrower vision than Musk’s proposal.
The barriers, then, are economic, technical and environmental at once. Musk himself admitted on the podcast that “those who have lived in software land don’t realise that they’re about to have a hard lesson in hardware.” That lesson may come sooner on Earth than in space.