“Mark my words: in 36 months, probably closer to 30 months, the most economically compelling place to put AI will be space.”
Elon Musk said that on a podcast last week. He meant it literally.
His $1.25 trillion plan to fold xAI into SpaceX rests on a single audacious claim: that fleets of solar-powered satellites, cooled by the vacuum of space, will soon beat every data centre on Earth for cost.
The vision is coherent. The timeline is not.
Musk’s pitch: Compute leaves Earth
Musk has put the concept of ‘space GPUs’ at the centre of the merger story.
He says the economics will flip fast once you combine three things that SpaceX already has: cheap lift to orbit, a global satellite network, and a new AI model stack built around xAI’s Grok.
In public markets, that kind of claim matters because it reframes the SpaceX investment case. It is no longer just rockets, launches and Starlink subscriptions.
It is a bid to become the infrastructure landlord for the AI era, with Starlink carrying the data and orbital platforms doing at least some of the inference, the step where an AI system answers a user’s question.
The idea is not exclusive to Musk. Google is working with satellite operator Planet on a pilot it calls Project Suncatcher, aimed at putting its tensor processing unit chips in orbit.
Start-ups such as Starcloud and Aetherflux say they can push compute into space faster than terrestrial data centres can be permitted, connected and built.
Invezz spoke with Dr. Ezra Feilden, Starcloud’s co-founder and CTO, who said that the attraction is not only the headline vision but the balance-sheet logic of owning the machinery.
“Vertical integration is a big advantage in this industry. You can optimise your whole stack for your application,” he said.
“Also, you don’t need to pay the profit margins of multiple layers in a supply chain,” Dr. Feilden added.
For Musk, that is the heart of the bet: fewer middlemen, fewer bottlenecks and, eventually, lower all-in cost per unit of AI compute.
The hard part: Heat, mass and radiation
Policy and engineering specialists say the concept is plausible, but the medium-term threat narrative is overcooked.
Jermaine Francisco Gutierrez, a research fellow at the European Space Policy Institute, argues that the “cloud moves to orbit” framing is misleading.
“From a policy and feasibility standpoint, space-based data centres do not look like a credible medium-term threat to terrestrial cloud providers such as AWS and Azure,” Gutierrez told Invezz.
“The more realistic story is not displacement of the hyperscalers but the emergence of a ‘space edge’ tier that complements terrestrial cloud and, in some niche cases, competes with it for specialized workloads,” he added.
His central objection is not ideology. It is physics.
“A modern data centre is fundamentally a heat management and power delivery system as much as it is a compute system,” Gutierrez said.
“In orbit, you can generate compute, but you cannot rely on convection to remove waste heat, and radiating large amounts of heat requires substantial surface area and careful thermal engineering,” the ESPI research fellow added.
The vacuum point matters. On Earth, you can blow air across hot components and use water or refrigerants to carry heat away.
In orbit, you must dump heat by radiating it off surfaces, which pushes engineers toward large radiator panels and careful thermal engineering.
That raises mass, which raises launch costs, and it makes scaling ‘unforgiving’, as Gutierrez puts it.
“This becomes increasingly unforgiving as you move from small experimental payloads to anything resembling data centre scale (1GW-scale). The concept is not impossible, but scaling it is hard in a way that tends to stretch timelines,” Gutierrez said.
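The radiator problem can be sized with a back-of-envelope Stefan-Boltzmann calculation. The inputs below are illustrative assumptions, not figures from Gutierrez or SpaceX: panels running at 300 K with emissivity 0.9, radiating from both faces, and ignoring absorbed sunlight.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumptions (illustrative): 300 K panel temperature, emissivity 0.9,
# radiation from both faces, no absorbed sunlight.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(waste_heat_w, temp_k=300.0, emissivity=0.9, faces=2):
    """Panel area needed to radiate a given waste-heat load to space."""
    flux = emissivity * SIGMA * temp_k**4  # W per m^2 per face
    return waste_heat_w / (flux * faces)

# A 1 GW data centre, where essentially all input power ends up as heat:
area = radiator_area_m2(1e9)
print(f"{area / 1e6:.1f} km^2 of radiator panels")  # ~1.2 km^2
```

Roughly a square kilometre of radiator for a single gigawatt-scale facility, before structure, solar arrays or margin, which is the sense in which scaling is "unforgiving".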
Then there is radiation. Chips in orbit face energetic particles that can flip bits and degrade components. Some hardware can be shielded, but shielding adds weight.
Even if performance holds, the supply chain becomes more specialised: you need space-qualified systems, long-life components and a servicing plan for failures that would be routine to fix in a warehouse full of technicians.
The economics: What would ‘competitive’ take?
If the physics is difficult, the economics is the real gatekeeper.
Terrestrial hyperscalers have spent decades squeezing costs out of power delivery, cooling, maintenance and software orchestration.
They also benefit from scale: tens of billions of dollars of existing data centre footprint, supplier relationships and operational playbooks.
Gutierrez argues that space compute must clear several hurdles at once: launch costs must fall sharply, spacecraft manufacturing must resemble industrial production, and equipment must survive longer and perform reliably in a harsher environment.
“We estimate that launch costs must fall to below $400/kg to approach competitiveness with TDCs [terrestrial data centres],” he said.
Google’s own research reaches a similar conclusion about cost curves, projecting that launch to low-Earth orbit may reach about $200 per kilogram by the mid-2030s if learning-curve effects and high launch cadence hold.
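To see where those per-kilogram thresholds bite, one can amortize launch cost over the energy a slice of orbital compute delivers in its lifetime. The mass-per-kilowatt figure and five-year life below are hypothetical placeholders, not numbers from the article or either company.

```python
# Amortized launch-cost contribution per kWh of compute power in orbit.
# The mass-per-kW and lifetime figures are illustrative assumptions.
def launch_cost_per_kwh(cost_per_kg, kg_per_kw, life_years, utilization=1.0):
    """Spread $/kg launch cost over the energy one kW delivers in its life."""
    hours = life_years * 8760 * utilization
    return cost_per_kg * kg_per_kw / hours

# Hypothetical 10 kg of spacecraft mass per kW of compute, 5-year life,
# at the two $/kg levels cited in the article:
for cost in (400, 200):
    print(f"${cost}/kg -> ${launch_cost_per_kwh(cost, 10, 5):.3f} per kWh")
# $400/kg -> $0.091 per kWh; $200/kg -> $0.046 per kWh
```

Under those assumptions, launch alone adds several cents per kilowatt-hour-equivalent, which is why the business case hinges on the cost curves falling as projected.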
That is where Musk’s timeline collides with the spreadsheets.
A three-year horizon implies that the cost and reliability problems are mostly solved, and that enough hardware can be launched fast enough to matter.
It also implies that demand for AI compute will keep growing at a pace that justifies building a new industrial base in orbit.
Investors should treat those as separate bets. The first is a launch-cost bet on full reusability and very high flight rates.
The second is a manufacturing bet: can satellites carrying serious compute be built like consumer electronics, not like bespoke spacecraft?
The third is a durability bet: can AI accelerators survive radiation and temperature swings without expensive, weight-heavy hardening?
The real market: Edge computing, not cloud replacement
Even if cost parity is far away, smaller, targeted uses may arrive sooner.
Gutierrez says the near-term market is about working closer to the data source, not copying AWS in orbit.
“The near and medium-term market is narrower and more specific than ‘the cloud moves to orbit’,” he said.
“The most plausible medium-term use cases are those where doing compute closer to the data source creates unique value. Earth observation is a classic example: satellites generate enormous amounts of raw imagery and sensor data, and downlink bandwidth is limited and expensive,” the researcher added.
If a satellite can process imagery in orbit and send down only alerts or compressed outputs, it reduces the bottleneck of transmitting huge raw datasets back to Earth.
That matters for disaster monitoring, crop analysis, maritime surveillance and defence intelligence, where minutes can be valuable.
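The bandwidth arithmetic behind that argument is easy to sketch. The data rates below are hypothetical, chosen only to illustrate the bottleneck, not drawn from any real mission.

```python
# Why in-orbit processing relieves the downlink bottleneck.
# All figures are illustrative assumptions, not mission data.
DOWNLINK_BPS = 500e6  # assumed average downlink rate: 500 Mbit/s

def hours_to_downlink(bytes_per_day):
    """Hours of continuous downlink needed to move one day's data."""
    return bytes_per_day * 8 / DOWNLINK_BPS / 3600

raw_imagery = 10e12  # assumed 10 TB/day of raw sensor data
alerts_only = 1e9    # assumed 1 GB/day of processed alerts

print(f"raw:    {hours_to_downlink(raw_imagery):.1f} h/day")      # ~44.4 h: can't keep up
print(f"alerts: {hours_to_downlink(alerts_only) * 3600:.0f} s/day")  # ~16 s
```

Under these assumptions the raw feed would need more hours of downlink than there are in a day, while the processed alerts fit in seconds, which is the value of doing the inference on board.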
Gutierrez also highlights autonomy and resilience.
“Another medium-term use case is resilience and autonomy for space systems themselves, where onboard or near-orbit compute helps satellites operate with less continuous ground intervention,” he said.
Those are niches, but niches can be lucrative and strategically sensitive.
Gutierrez says the more credible pressure on hyperscalers is not mass displacement, but high-value workloads that develop outside the default ‘hyperscaler gravity’, particularly where sovereignty narratives or mission requirements push buyers toward alternative stacks.
Space-based compute could also become part of a broader hybrid model in which cloud providers orchestrate workloads across terrestrial and non-terrestrial networks.
Gutierrez calls it the emergence of a single ‘control plane’, meaning a software layer that can manage data, automation and inference across different infrastructure.
The IPO lens: One platform, higher expectations
That brings the story back to money.
Musk’s decision to fold xAI into SpaceX will be interpreted as a move to turn a vision into a single investable platform, rather than loosely linked bets, according to Kate Leaman, chief market analyst at AvaTrade.
“For growth-minded investors, the appeal is obvious: the merger shows that AI is not just about smart models but about owning the full stack, from rockets and Starlink through to orbital data centres and leading models like Grok,” Leaman told Invezz.
She argues that the framing “taps directly into the two biggest themes in markets today, AI and space, and bundles them into one name ahead of a landmark IPO.”
For investors hunting the next compounding story, it is the kind of narrative that can “pull in capital even at very high multiples,” she said.
But Leaman warns that the story also raises the execution bar and sharpens the valuation debate.
“Folding an early-stage, capital-hungry AI lab into an already richly valued space business means investors are being asked to pay today for a decade of unproven cash flows,” she said.
That is why the most useful way to read Musk’s claim is neither as science fiction nor as imminent disruption.
It is a founder-led attempt to define the next layer of infrastructure, with a credible niche pathway and a highly ambitious schedule.
