Key Points
- Milestone launch: Nvidia (英伟达) and Starcloud sent the first H100‑equipped server into orbit on Nov 2, 2025 with SpaceX (太空探索技术公司); a second test satellite is planned for 2026.
- Ambitious scale & stats: Starcloud targets a 40‑megawatt space data center by the early 2030s at ~100 metric tons; Google (谷歌) expects launch costs could fall to $200/kg (~¥1,460) by the mid‑2030s.
- Operational advantages: Radiative cooling, near‑continuous solar power, and the potential for higher compute density make orbit attractive for large‑scale training, imagery processing, and bandwidth‑heavy workloads.
- Risks & investor implications: Radiation, debris, latency, lifecycle and regulatory hurdles create engineering and policy challenges, but open new investable verticals (launch, thermal systems, robotic servicing) where integrated players may gain outsized advantages.

Space data centers are a fast‑moving trend reshaping how AI compute scales beyond Earth.
AI’s power appetite pushes computing off the planet
AI’s power appetite is pushing compute infrastructure into new territory: orbit.
As AI training and inference demand surges, power grids in multiple regions have started to show signs of strain.
Major technology companies including Nvidia (英伟达) and Google (谷歌) are shifting the next generation of compute infrastructure toward Earth orbit, betting that space‑based systems offer a scalable path to massive, low‑cost AI compute.

Nvidia and Starcloud launch the first H100‑equipped server into orbit
On November 2, 2025, Nvidia (英伟达) and the Nvidia‑backed startup Starcloud worked with Space Exploration Technologies Corp. (SpaceX; 太空探索技术公司) to put the first satellite carrying an H100 GPU into orbit aboard a Falcon 9 rocket.
The in‑orbit server will be used to test complex tasks such as real‑time analysis of Earth observation data and running AI models under operational conditions.
Starcloud plans to launch a second, test‑generation satellite in 2026 and aims to build a 40‑megawatt space data center by the early 2030s, with a total mass projected at roughly 100 metric tons.
Quick technical snapshot:
- Payload: H100 GPU in orbit for the first practical testing of in‑space AI inference and workloads.
- Timeline: Second test satellite in 2026; ambitious 40‑megawatt goal by early 2030s.
- Mass target: ~100 metric tons for a full space data center platform.
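The snapshot's two headline figures imply a power‑to‑mass ratio worth noting. A minimal sketch, using only the 40‑megawatt and ~100‑metric‑ton targets from the article:

```python
# Back-of-envelope: implied power-to-mass ratio of Starcloud's target platform.
# The 40 MW and ~100 t figures come from the article; the ratio is arithmetic.

target_power_w = 40e6   # 40 megawatts
target_mass_kg = 100e3  # ~100 metric tons

specific_power = target_power_w / target_mass_kg  # watts per kilogram
print(f"Implied power density: {specific_power:.0f} W/kg")  # prints 400 W/kg
```

That 400 W/kg figure bundles solar collection, power conversion, compute, and thermal hardware into one budget, which is the core engineering challenge behind the mass target.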
Why this launch matters to investors and founders
It’s the first public, hardware‑level test of modern datacenter GPUs operating in orbit.
That changes the conversation from theoretical feasibility to operational validation.
For investors and founders, this means a new category of capital needs — satellites plus datacenter engineering — and fresh markets for launch, power management, and in‑orbit maintenance services.

Why move data centers to space?
One key driver is cooling and energy.
On Earth, large data centers require massive power and cooling systems — often relying on evaporative cooling that consumes large quantities of fresh water.
Starcloud’s co‑founder Philip Johnston said space deployments would relieve terrestrial resource demands: cooling in orbit can be done by radiating infrared heat into deep space rather than evaporating freshwater.
Another driver is energy density.
Space offers access to near‑continuous solar energy and far lower constraints on heat rejection, enabling potentially denser compute per unit mass than is practical on the ground.
Key operational advantages to watch
- Radiative cooling: Reject heat by radiating into deep space instead of consuming water or chilled air.
- Near‑continuous solar power: Higher available energy per mass compared with most terrestrial sites.
- Higher compute density: Fewer ground constraints could mean more compute per kilogram, assuming thermal and radiation shielding are solved.
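The radiative‑cooling advantage can be sized roughly with the Stefan–Boltzmann law. This is a simplified sketch: the single‑sided radiator, 300 K surface temperature, and 0.9 emissivity are illustrative assumptions (not figures from the article), and absorbed sunlight and Earth albedo are ignored.

```python
# Rough radiator sizing for a 40 MW orbital data center via the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4 (one-sided,
# radiating to deep space). Temperature and emissivity are assumptions.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
emissivity = 0.9     # assumed radiator emissivity
temp_k = 300.0       # assumed radiator surface temperature, kelvin
heat_load_w = 40e6   # 40 MW of heat to reject

flux_w_m2 = emissivity * SIGMA * temp_k**4   # W rejected per m^2 of radiator
area_m2 = heat_load_w / flux_w_m2            # required radiator area

print(f"Radiated flux: {flux_w_m2:.0f} W/m^2")        # ~413 W/m^2
print(f"Radiator area: {area_m2:,.0f} m^2")           # ~97,000 m^2
```

A double‑sided radiator roughly halves the area, while absorbed solar flux raises it; the point of the sketch is scale, since rejecting tens of megawatts implies radiators measured in hectares.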

SpaceX and Elon Musk: solar‑powered AI satellites
SpaceX (Space Exploration Technologies Corp.; Tàikōng Tànsuǒ Jìshù Gōngsī 太空探索技术公司) — the Falcon 9 launch provider — is positioning its Starlink platform as a foundation for orbital compute.
CEO Elon Musk (马斯克) has said that if you want access to energy levels millions of times greater than what’s available on Earth, you have to go to space.
Musk estimated that within four to five years the cheapest way to run large‑scale AI computing could be via solar‑powered AI satellites.
Practical implication: Integrating launch, communications (Starlink), and in‑orbit power could unlock vertically integrated cost advantages for companies that control multiple pieces of the stack.

Google’s “Sun Catcher” plan for orbital TPUs
Google (谷歌) announced its own program — Project Suncatcher, rendered as the “Sun Catcher” plan in Chinese coverage — to move AI data center capacity into orbit.
The project envisions satellite constellations forming an orbital compute platform equipped with Tensor Processing Units (TPUs) and optical inter‑satellite communications.
Google expects initial test payloads to launch around early 2027 and projects that, by the mid‑2030s, launch costs could fall to under $200 USD per kilogram (approximately ¥1,460 RMB), making the launch and operating costs of a space data center comparable to the energy costs of an equivalent ground facility.
Why Google’s announcement matters for cloud and enterprise customers
- TPUs in orbit signal a push beyond GPUs into custom silicon for in‑space inference and training.
- Optical inter‑satellite links aim to increase bandwidth between nodes without relying on ground relays.
- Cost tipping point: Google explicitly connects launch costs to operational parity — if launch drops, the business case tightens.
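Google's tipping‑point claim can be sanity‑checked with a hedged back‑of‑envelope comparison. The $200/kg projection and the ~100 t platform mass come from the article; the industrial electricity price is an illustrative assumption of mine, not a figure from the source.

```python
# Hedged cost-parity sketch: one-time launch cost of a ~100 t platform at
# $200/kg versus one year of grid energy for an equivalent 40 MW ground
# facility. The electricity price is an assumed illustrative rate.

launch_cost_per_kg = 200.0   # USD/kg, Google's projected mid-2030s figure
platform_mass_kg = 100e3     # ~100 metric tons (Starcloud-scale platform)
facility_power_kw = 40e3     # 40 MW
grid_price_per_kwh = 0.08    # USD/kWh, assumed industrial rate
HOURS_PER_YEAR = 8760

launch_cost = launch_cost_per_kg * platform_mass_kg
annual_energy_cost = facility_power_kw * HOURS_PER_YEAR * grid_price_per_kwh

print(f"One-time launch cost: ${launch_cost/1e6:.0f}M")        # $20M
print(f"Annual ground energy: ${annual_energy_cost/1e6:.1f}M") # ~$28M
```

Under these assumptions the one‑time launch bill lands in the same range as roughly a year of ground electricity, which is the shape of the parity argument — though it omits hardware, operations, and replacement costs.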

Technical and operational challenges for orbital compute
Despite the promise, analysts warn of multiple technical, operational and environmental challenges:
- Space environment risks — radiation, solar flares and charged particles can damage electronics or interrupt communications.
- Space debris and collision risk — adding large numbers of massive satellites without a robust debris‑mitigation and removal plan could raise collision probabilities and create cascading risks for space infrastructure and services that modern life depends on.
- Latency and bandwidth tradeoffs — while on‑orbit compute can process Earth observation and certain AI workloads locally, many real‑time applications still rely on terrestrial networks; optical inter‑satellite links reduce but do not eliminate connectivity challenges.
- Lifecycle, maintenance and end‑of‑life planning — servicing or safely deorbiting 100‑ton platforms raises cost and engineering questions that differ from ground data centers.
Operational mitigations to monitor (non‑exhaustive)
- Radiation hardening and redundancy: design electronics for degraded environments and plan for graceful degradation of services.
- Debris strategy: invest in active debris removal partnerships, collision‑avoidance protocols, and standardized end‑of‑life deorbit plans.
- Network architecture: mix on‑orbit processing for heavy tasks with terrestrial edge networks for low‑latency services.
- Maintenance models: explore robotic servicing, modular replaceable units, or secure human servicing agreements where feasible.

What this means for AI, infrastructure, and investors
Space data centers are not a universal solution, but they represent a strategic lever for companies seeking to scale AI compute beyond terrestrial energy and cooling limits.
If launch costs continue to fall and in‑space power/cooling architectures prove reliable, orbiting compute may become cost‑competitive for certain classes of AI workloads — especially large‑scale training, satellite imagery processing, and bandwidth‑heavy remote sensing tasks.
However, the move will necessitate new norms and regulations for orbital traffic management, debris mitigation, and cross‑border data governance.
The success of space compute depends not only on engineering and economics, but on international coordination to keep low Earth orbit safe and sustainable.
Investor takeaways
- Space compute opens new investable verticals: launch services, thermal/radiative systems, in‑orbit power management, robotic servicing, and secure orbital networking.
- Early adopters that can combine launch access, satellite comms, and compute hardware may capture outsized advantages.
- Regulatory and sustainability risks are material; plan for long timelines and policy engagement as part of any go‑to‑market strategy.

Use cases that are a natural fit for orbital compute
- Satellite imagery processing: process massive image datasets where the data is collected, reducing downlink bandwidth needs.
- Large‑scale model training: leverage higher power density and continuous solar energy for compute‑heavy training runs.
- Remote sensing and bandwidth‑heavy science: perform on‑site analytics for maritime, climate, and defense sensors.
Final thought:
This is a classic technology stack shift: when core constraints (power, cooling, launch cost) change, architectures and business models follow.
The companies that win will be the ones that align hardware, launch logistics, comms, and long‑term sustainability practices into coherent, investable platforms.

References
- 科技巨头抢滩太空算力:太空数据中心点燃AI新战场 (Tech giants race for space compute: space data centers ignite a new AI battlefield) – CCTV财经
- NVIDIA Newsroom (space compute and H100 launches) – Nvidia
- SpaceX (company information and Falcon 9 launches) – SpaceX
- Google Research (orbital compute / TPU research) – Google
