Key Points
- Recent demonstration: On Nov 2, 2025 SpaceX (太空探索技术公司) launched a satellite carrying a server with NVIDIA H100 GPUs (英伟达) to run on‑orbit inference and real‑time Earth‑observation analysis.
- Starcloud roadmap: the startup targets about 40 megawatts of orbital capacity by the early 2030s, with individual platforms potentially weighing ~100 metric tons (≈100,000 kg).
- Why orbit: proponents cite abundant solar power, natural heat rejection, and resilience to terrestrial hazards as advantages for high‑volume, latency‑tolerant workloads like remote sensing.
- Key risks & economics: technical and policy hurdles include radiation and space debris, latency/bandwidth limits, and launch‑cost sensitivity — Google (谷歌) flags below $200 USD/kg (≈¥1,460 RMB/kg) as a pivotal cost threshold.

Orbital data centers are moving from sci‑fi to tests in low Earth orbit.
Quick summary
As AI training and inference push power needs higher on Earth, major companies are testing compute in orbit.
Companies involved include NVIDIA (Yīngwěidá 英伟达), SpaceX (Tàikōng Tànsuǒ Jìshù Gōngsī 太空探索技术公司), Google (Gǔgē 谷歌) and startups such as Starcloud.
Proponents pitch orbital compute for abundant solar energy, easier heat rejection, and as a way to sidestep terrestrial cooling and infrastructure limits.

What just happened: Starcloud, NVIDIA and SpaceX — orbital compute in action
On November 2, 2025, SpaceX launched a satellite carrying a server equipped with NVIDIA H100 GPUs.
The rocket was a Falcon 9; the payload was a joint test between NVIDIA (Yīngwěidá 英伟达) and Starcloud.
While in orbit, the server will run tests such as real‑time analysis of Earth‑observation data and on‑orbit AI model inference.
Starcloud plans a second‑generation test launch in 2026 and envisions a phased build‑out of an orbital data‑center fleet reaching about 40 megawatts in total capacity by the early 2030s.
Starcloud’s roadmap estimates individual orbital platforms could eventually weigh on the order of 100 metric tons (≈100,000 kg).

Why go to space? The core advantages of orbital compute
Engineers and executives backing on‑orbit compute emphasize a few clear benefits.
- Abundant solar power. In orbit, platforms can harvest long stretches of sunlight with large solar arrays and reduce reliance on grid electricity.
- Natural heat rejection. Excess heat can be radiated into deep space as infrared, avoiding large terrestrial cooling systems and water use.
- Resilience to local hazards. Satellites aren’t affected by floods, earthquakes, or local grid outages that can cripple ground data centers.
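To put the fleet-scale power target in perspective, Starcloud's 40‑megawatt figure can be sized against the solar constant (about 1,361 W/m² above the atmosphere). The panel efficiency and packing factor below are illustrative assumptions, not figures from any of the companies involved:

```python
# Rough solar-array sizing for a 40 MW orbital data-center fleet.
# Efficiency and packing factor are illustrative assumptions.
SOLAR_CONSTANT_W_PER_M2 = 1361  # mean solar irradiance in Earth orbit

def array_area_m2(power_w, efficiency=0.20, packing=0.85):
    """Solar array area needed to supply `power_w` of electrical power."""
    return power_w / (SOLAR_CONSTANT_W_PER_M2 * efficiency * packing)

area = array_area_m2(40e6)  # 40 MW target
print(f"~{area / 1e4:.0f} hectares of array")  # on the order of 17 hectares
```

Even under these generous assumptions, 40 MW implies well over 100,000 m² of deployed array, which is why platform mass estimates run to ~100 metric tons.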

SpaceX’s take: extreme energy density
SpaceX CEO Elon Musk (Mǎsīkè 马斯克) has framed orbit as the path to vastly greater energy density for compute.
He argues that accessing “millions of times” the energy available per unit area on Earth requires going to space.
He expects that within four to five years solar‑powered AI satellites could be the cheapest option for some large‑scale compute workloads.

Google’s “Suncatcher” plan — distributed TPUs in orbit
Google (Gǔgē 谷歌) has announced a concept reportedly called the “Suncatcher” plan to prototype AI compute platforms in orbit.
Google’s proposal envisions constellations of small satellites equipped with TPU processors and high‑throughput optical communications so compute tasks can be spread across orbital nodes.
Google says the first experimental payloads could fly as early as 2027.
Google forecasts that by the mid‑2030s launch costs might fall below $200 USD per kilogram, which is approximately ¥1,460 RMB per kg using a conversion of 1 USD ≈ ¥7.30 RMB.
Under optimistic scale and logistics assumptions, that cost level could make the total expense of sending and operating orbital platforms comparable to the energy‑driven operating costs of an equivalently sized ground data center.
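A back-of-envelope calculation shows why the $200/kg threshold matters. The platform mass is Starcloud's ~100‑metric‑ton estimate; the higher price points in the loop are illustrative, not quoted launch prices:

```python
# Launch cost for a ~100-metric-ton orbital platform at several $/kg price points.
# The $1,500 and $500 price points are illustrative; $200/kg is Google's forecast threshold.
PLATFORM_MASS_KG = 100_000  # Starcloud's ~100 t platform estimate
USD_TO_RMB = 7.30           # conversion used in this article

for usd_per_kg in (1_500, 500, 200):
    total_usd = PLATFORM_MASS_KG * usd_per_kg
    print(f"${usd_per_kg}/kg -> ${total_usd / 1e6:.0f}M "
          f"(~¥{total_usd * USD_TO_RMB / 1e6:.0f}M RMB)")
```

At $200/kg a 100‑ton platform costs about $20M to launch; at several times that price per kilogram, launch alone dominates the budget, which is why the threshold is pivotal.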

What proponents promise: use cases and architecture
Supporters describe several practical roles for orbital compute.
- Lower long‑term operating costs for workloads that tolerate higher latency or are naturally distributed, such as Earth observation and scientific processing.
- New architectures: satellites packed with GPUs and TPUs plus optical inter‑satellite links could form high‑bandwidth orbital clusters for distributed inference.
- Reduced terrestrial resource use: lower pressure on local power grids, less freshwater for cooling, and fewer land‑use constraints.
- On‑orbit preprocessing to shrink downlink bandwidth needs by turning raw imagery into actionable outputs before sending data to ground.
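The downlink-saving argument in the last point is easy to quantify. The raw-imagery and product volumes below are hypothetical example numbers, used only to show the shape of the calculation:

```python
# Illustrative downlink savings from on-orbit preprocessing.
# Both daily volumes are hypothetical example numbers.
raw_gb_per_day = 5_000   # raw Earth-observation imagery captured per day
product_gb_per_day = 50  # actionable outputs (detections, alerts) after on-orbit inference

reduction = 1 - product_gb_per_day / raw_gb_per_day
print(f"Downlink volume reduced by {reduction:.0%}")  # 99%
```

If inference on orbit turns terabytes of imagery into megabytes of detections, the satellite needs far less ground-station time, which is a recurring operating cost.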

Risks, trade‑offs and open questions — what could slow orbital compute down
Experts point to several hard technical and policy challenges that need solving.
- Radiation and environment. Space increases radiation exposure, which raises error rates and hardware failures; shielding reduces that risk but increases platform mass and launch cost.
- Space debris. A large number of heavy platforms without good debris mitigation increases collision risks that can cascade and threaten existing services.
- Solar activity. Solar flares and space weather can disrupt communications and damage electronics on orbit.
- Latency and bandwidth constraints. Many real‑time and interactive AI applications still need edge or terrestrial compute because orbital round‑trip delay is higher than local alternatives.
- Regulation and spectrum. Large deployments need spectrum rights, overflight coordination, export control compliance for advanced chips, and international space‑traffic management.

How this likely unfolds: staged testing, niche pilots, then scale if economics align
Expect a stepwise path rather than an overnight shift.
Phase one will be demonstration missions: single satellites or small constellations validating power, cooling, radiation‑tolerant compute and optical links.
Phase two will be limited commercial pilots solving real pain points, for example processing Earth‑observation imagery on orbit to reduce downlink volumes.
Phase three—only if economics, shielding strategies and debris management improve—could be gradual scaling to larger orbital platforms and multi‑megawatt clusters.
Even if orbital compute never replaces terrestrial hyperscale facilities, it can become a complementary tier in a multi‑domain compute fabric.
That hybrid fabric would place latency‑sensitive services on edge and ground clouds, and put sunlight‑friendly, bandwidth‑sensitive workloads on orbit.
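A hybrid placement policy of that kind can be sketched as a simple routing rule. The latency threshold and tier names below are assumptions for illustration, not any vendor's actual scheduler:

```python
# Minimal sketch of routing workloads across a hybrid ground/orbit compute fabric.
# The 50 ms threshold and tier names are illustrative assumptions.

def place_workload(latency_budget_ms: float, data_source_in_orbit: bool) -> str:
    """Pick a compute tier for a workload based on latency budget and data origin."""
    if latency_budget_ms < 50:
        return "edge/ground"  # interactive services stay on terrestrial compute
    if data_source_in_orbit:
        return "orbital"      # process sensor data where it is captured
    return "ground-cloud"     # default: conventional hyperscale facility

print(place_workload(10, False))     # edge/ground
print(place_workload(10_000, True))  # orbital
```

Real schedulers would also weigh bandwidth, cost, and data-sovereignty rules, but the core idea is the same: latency-sensitive work stays down, sunlight-friendly batch work goes up.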
Practical trade calculations to watch (qualitative)
- Mass vs. shielding trade‑off. More radiation shielding increases survivability but raises launch mass and costs.
- Launch cost sensitivity. Google’s $200 USD/kg (≈ ¥1,460 RMB/kg) target matters because below this threshold the economics of orbital compute look far more plausible.
- Workload fit. The best early customers will be those with high data volumes (remote sensing), tolerant latency, or expensive downlink costs.
- Operational logistics. Regular maintenance, end‑of‑life disposal, and debris mitigation strategies will materially affect lifetime costs and regulatory acceptance.
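The first two trade-offs interact directly: every kilogram of shielding is bought at the launch price. The shielding mass fraction below is a hypothetical example value; the platform mass and $/kg figure come from this article:

```python
# Illustrative cost of radiation shielding at the $200/kg launch price.
# The 15% shielding mass fraction is a hypothetical example value.
base_mass_kg = 100_000     # article's ~100 t platform figure
shielding_fraction = 0.15  # assume shielding adds 15% to platform mass
usd_per_kg = 200           # Google's forecast launch-cost threshold

extra_cost_usd = base_mass_kg * shielding_fraction * usd_per_kg
print(f"Shielding adds ${extra_cost_usd / 1e6:.1f}M per platform")  # $3.0M
```

At higher launch prices the same shielding mass costs proportionally more, which is why shielding strategy and launch economics have to be solved together.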

Bottom line
Major cloud, chip and launch vendors are actively testing the technical feasibility and economics of moving compute to orbit.
The idea is no longer purely speculative; recent launches and corporate plans make orbit an active frontier for AI infrastructure.
Success depends on engineering advances, falling launch costs, radiation resilience, and robust space‑traffic and regulatory frameworks.
If those problems are managed, orbital compute could become one more flexible, solar‑rich option in the evolving AI stack.
Orbital data centers are emerging as a pragmatic complement to ground clouds rather than a wholesale replacement.
