Key Points
- Energy driver: Orbital compute leverages continuous solar energy to address rising power needs — FTI Consulting warns U.S. data‑center energy demand could nearly double by 2027, and projects envision designs up to a 5‑gigawatt orbital data center.
- Near‑term tests: Major players are pursuing demos. Google (Gǔgē 谷歌) plans to test TPUs on two Project Suncatcher satellites launching in early 2027, while Starcloud plans to fly an Nvidia H100 on its roughly 60 kg Starcloud‑1 satellite this November.
- Economics hinge on launch costs: Models show orbital systems could be competitive if launch prices fall to about $200 USD/kg (≈¥1,440 RMB/kg).
- Technical limits and target workloads: Remaining barriers include radiation, thermal management, on‑orbit maintenance and latency; likely early use cases are large‑scale offline AI training and batch workloads.

Orbital compute is moving from sci-fi pitch decks into real hardware tests this decade.
Power shortages for AI data centers have pushed major tech companies to explore a radical idea: build compute facilities in orbit where sunlight—and therefore energy—is abundant.
Companies including Google (Gǔgē 谷歌) and Nvidia (Yīngwěidá 英伟达) are publicly exploring space-based machine-learning compute, and new startups say advanced GPUs will reach orbit soon.
Why move data centers into space?
Consulting firm FTI Consulting has warned that U.S. data-center energy demand could nearly double by 2027, straining utilities and grid capacity.
For large AI training clusters—whose power needs are enormous—access to cheap, continuous renewable energy has become a key constraint.
Putting compute into orbit solves one central problem: energy availability.
In the right orbit, solar arrays can generate many times the per-square-meter energy available on the ground and do so continuously without night, clouds or rain.
That promise is motivating both hyperscalers and private space firms to evaluate orbital compute architectures.
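As a rough illustration of that energy advantage, here is a minimal back-of-envelope sketch in Python; the irradiance, capacity-factor and efficiency figures are generic assumptions for illustration, not numbers from Google, Starcloud or any specific project.

```python
# Back-of-envelope comparison of orbital vs. terrestrial solar energy yield.
# All inputs are rough, generic assumptions, not figures from any specific project.

SOLAR_CONSTANT_W_PER_M2 = 1361   # solar irradiance above the atmosphere
GROUND_PEAK_W_PER_M2 = 1000      # typical clear-sky peak irradiance at the surface
GROUND_CAPACITY_FACTOR = 0.20    # assumed yearly average for a good terrestrial solar site
ORBIT_DUTY_CYCLE = 1.0           # a dawn-dusk sun-synchronous orbit stays lit almost continuously
PANEL_EFFICIENCY = 0.30          # assumed cell efficiency, same panels in both cases
HOURS_PER_YEAR = 8760

def annual_yield_kwh_per_m2(irradiance_w, duty_cycle, efficiency):
    """Annual electrical energy per square meter of panel, in kWh."""
    return irradiance_w * duty_cycle * efficiency * HOURS_PER_YEAR / 1000

orbital = annual_yield_kwh_per_m2(SOLAR_CONSTANT_W_PER_M2, ORBIT_DUTY_CYCLE, PANEL_EFFICIENCY)
ground = annual_yield_kwh_per_m2(GROUND_PEAK_W_PER_M2, GROUND_CAPACITY_FACTOR, PANEL_EFFICIENCY)

print(f"Orbital yield: {orbital:,.0f} kWh per m^2 per year")
print(f"Ground yield : {ground:,.0f} kWh per m^2 per year")
print(f"Ratio        : {orbital / ground:.1f}x")
```

With these assumptions the orbital panel delivers several times the annual energy of an equivalent ground panel, which is the gap driving the interest described above.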

Who’s working on orbital compute? — companies, satellites, GPUs
- Google (Gǔgē 谷歌): CEO Sundar Pichai (Sāngdá’ěr Píchāyī 桑达尔·皮查伊) announced a project called “Suncatcher” to explore scalable machine-learning compute in space and said Google plans to test TPUs (Tensor Processing Units, 张量处理器) in orbit.
- Google and satellite-imagery firm Planet Labs (Xíngxīng Shíyànshì 行星实验室) plan to launch two test satellites in early 2027 to validate models and TPU hardware.
- SpaceX (Tàikōng Tànsuǒ Jìshù Gōngsī 太空探索技术公司): CEO Elon Musk (Àilún Mǎsīkè 埃隆·马斯克) has said SpaceX will scale up its Starlink V3 satellites, which carry high-speed laser links, to host data-center capabilities in space.
- Jeff Bezos (Jiéfū Bèisuǒsī 杰夫·贝索斯): Bezos has said that in the next 10–20 years humans should be able to build gigawatt-scale data centers in space, powered by continuous solar energy.
- Startups / space-compute firms: Starcloud plans to launch a satellite carrying an Nvidia H100 GPU this November.
- The company says the Starcloud-1 satellite weighs about 60 kg (roughly the size of a small refrigerator) and will deliver GPU performance vastly higher than previous space compute platforms.
- Starcloud’s long-range plan includes a 5-gigawatt orbital data center with multi‑kilometer solar and cooling arrays; a rough sizing estimate appears below.
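To put the 5-gigawatt ambition in perspective, the quick sketch below estimates the solar-array area such a station would need; the efficiency and loss figures are illustrative assumptions, not Starcloud's published design numbers.

```python
# Rough solar-array sizing for a hypothetical 5 GW orbital data center.
# All inputs are illustrative assumptions, not Starcloud's published figures.

TARGET_POWER_W = 5e9              # 5 gigawatts of delivered electrical power
SOLAR_CONSTANT_W_PER_M2 = 1361    # solar irradiance in Earth orbit
PANEL_EFFICIENCY = 0.30           # assumed cell efficiency
SYSTEM_EFFICIENCY = 0.85          # assumed conversion and distribution efficiency

area_m2 = TARGET_POWER_W / (SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY * SYSTEM_EFFICIENCY)
side_km = area_m2 ** 0.5 / 1000   # edge length if the array were one square panel

print(f"Required array area: {area_m2 / 1e6:.1f} km^2")
print(f"Roughly {side_km:.1f} km on a side if laid out as a single square")
```

Under these assumptions the array works out to kilometers on a side, consistent with the multi-kilometer structures described in the roadmap.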

What are the technical and economic trade-offs?
Benefits
- Energy: Space-based solar panels facing the sun continuously can produce far more power per unit area than ground panels and eliminate most weather-related downtime.
- Cooling: Orbiting systems largely avoid the water-based cooling constraints that affect terrestrial data centers, although waste heat must instead be radiated to space (see the radiator sketch after this list).
- Carbon: Proponents argue lifecycle CO₂ emissions could be several times lower than at some ground-based sites, as Google and others estimate, because orbital systems can run on continuous solar power without building additional terrestrial generation or grid upgrades.
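In vacuum, all of that waste heat has to leave by thermal radiation rather than by air or water. The sketch below sizes a radiator with the Stefan-Boltzmann law; the heat load, temperatures and emissivity are illustrative assumptions, not values from any announced design.

```python
# Rough radiator sizing for rejecting server waste heat in vacuum by thermal radiation.
# All inputs are illustrative assumptions; real designs depend on orbit, geometry and materials.

STEFAN_BOLTZMANN = 5.670e-8   # W / (m^2 * K^4)
HEAT_LOAD_W = 1.0e6           # assume 1 MW of IT waste heat to reject
RADIATOR_TEMP_K = 320         # assumed radiator surface temperature (about 47 C)
SINK_TEMP_K = 255             # assumed effective temperature of the surroundings
EMISSIVITY = 0.90             # assumed surface emissivity
RADIATING_SIDES = 2           # a flat deployable panel radiates from both faces

def radiator_area_m2(heat_w):
    """Panel area needed so the net radiated power equals the heat load."""
    net_flux_per_side = EMISSIVITY * STEFAN_BOLTZMANN * (RADIATOR_TEMP_K**4 - SINK_TEMP_K**4)
    return heat_w / (net_flux_per_side * RADIATING_SIDES)

print(f"Radiator area for 1 MW of waste heat: ~{radiator_area_m2(HEAT_LOAD_W):,.0f} m^2")
```

Even with generous assumptions, each megawatt of compute implies on the order of a thousand square meters of radiator, which is why the roadmaps above pair large solar arrays with equally large cooling arrays.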
Challenges and costs
- Launch costs: High launch prices have been the main barrier.
- Google’s paper estimates that if launch costs fall to about $200 USD per kilogram (≈¥1,440 RMB per kg), the total cost of deploying and operating a large orbital compute system could become comparable to the energy costs of an equivalently sized terrestrial data center.
- That $200 USD/kg (≈¥1,440 RMB/kg) figure is a scenario planners use to model long-term feasibility, not a current price guarantee; a back-of-envelope version of the comparison follows this list.
- Radiation and reliability: Hardware in low Earth orbit faces higher radiation levels.
- Early tests show next‑generation TPUs can survive simulations of near‑Earth radiation, but real orbital testing is required to validate long-term reliability and thermal management strategies.
- Thermal management, on‑orbit maintenance and decommissioning: Heat rejection in vacuum, robotic servicing, replacement strategies, and orbital debris mitigation are complex engineering and regulatory problems.
- Latency and connectivity: For many applications the increased latency to orbit makes space compute unsuitable.
- Use cases will likely focus first on large-scale offline AI training and batch workloads where latency is less critical.
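The back-of-envelope comparison below shows why the $200/kg scenario matters; every mass, power and price figure in it is an assumption chosen for illustration, not a number taken from Google's paper or any vendor.

```python
# Back-of-envelope launch-cost vs. ground-energy comparison for the $200/kg scenario.
# All masses, power levels and prices are illustrative assumptions,
# not figures from Google's paper or any vendor.

LAUNCH_COST_USD_PER_KG = 200       # the long-term launch price scenario discussed above
SATELLITE_MASS_KG = 2_000          # assumed mass of one compute satellite
IT_POWER_KW = 100                  # assumed continuous IT load the satellite supports
LIFETIME_YEARS = 5                 # assumed useful life on orbit
GROUND_PRICE_USD_PER_KWH = 0.08    # assumed industrial electricity price on the ground
HOURS_PER_YEAR = 8760

launch_cost = LAUNCH_COST_USD_PER_KG * SATELLITE_MASS_KG
ground_energy_cost = IT_POWER_KW * HOURS_PER_YEAR * LIFETIME_YEARS * GROUND_PRICE_USD_PER_KWH

print(f"Launch cost for one satellite          : ${launch_cost:,.0f}")
print(f"Ground electricity for the same IT load: ${ground_energy_cost:,.0f}")
```

With these particular assumptions the two figures land in the same range, which is the kind of rough parity the $200/kg scenario describes; at today's much higher launch prices the comparison looks very different.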

Short-term milestones and tests — what to watch next
- Google and Planet Labs plan to launch two test satellites in early 2027 to assess TPU performance in orbit and to refine design, control and communications approaches.
- Starcloud aims to put an Nvidia H100 GPU into orbit this November (Starcloud-1) as a demonstration of advanced GPU compute in space.
- These tests will target three core questions:
  - Can modern accelerators like TPUs and H100s survive real orbital radiation and thermal cycling?
  - Can laser or RF links provide sufficient throughput and acceptable latency for the intended workloads? (A simple propagation-delay estimate follows this list.)
  - Do launch-cost trajectories and on-orbit servicing strategies make the economics competitive with ground-based expansion?
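On the latency question, physics sets a floor: the sketch below estimates the best-case round-trip propagation delay to satellites at representative low-Earth-orbit altitudes (the altitudes are assumptions, and real systems add ground-station routing, scheduling and queuing delays on top).

```python
# Minimal estimate of the extra round-trip delay to a satellite in low Earth orbit.
# Best case only: straight-line path to a satellite directly overhead, no processing
# or routing delays. Altitudes are representative assumptions, not mission parameters.

SPEED_OF_LIGHT_KM_PER_S = 299_792
ALTITUDES_KM = [550, 1200]   # assumed representative LEO shells

for altitude_km in ALTITUDES_KM:
    round_trip_ms = 2 * altitude_km / SPEED_OF_LIGHT_KM_PER_S * 1000
    print(f"{altitude_km} km altitude: ~{round_trip_ms:.1f} ms added round trip, best case")
```

The propagation delay itself is only a few milliseconds; the bigger practical constraints are link throughput, visibility windows and routing through ground stations, which is why batch training workloads are the likely starting point.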

Outlook — who wins if orbital compute scales?
Industry leaders and startups frame orbital compute as a long-term, complementary expansion of global compute capacity—especially if launch prices continue to fall and on-orbit engineering issues can be solved.
Some executives predict that within a decade space will be a mainstream option for expanding AI compute.
Others caution the idea remains exploratory and expensive today.
Whether orbital compute becomes a major trend depends on several factors:
- Sustained decline in launch costs.
- Robust radiation‑hardened hardware.
- Reliable on‑orbit maintenance and servicing chains.
- A clear set of workloads that favor power‑dense, always‑sunlit environments over the lower-latency and lower‑risk terrestrial cloud.
Quick takeaways for investors, founders, techies and marketers
- Investors: Look for startups focusing on launch cost arbitrage, on‑orbit cooling and radiation-hardened accelerators.
- Founders: Consider product-market fit for batch AI training, data-intensive offline workloads, and satellite-friendly software stacks that tolerate higher latency.
- Tech leads: Track TPU/H100 orbital test results closely; those will be the clearest signal for hardware reliability in LEO.
- Marketers: Position space compute as complementary to edge and ground clouds, not a wholesale replacement.
Why this matters now
Data-center power demand is rising fast.
Hyperscalers and startups are already investing in test missions that will prove or disprove the major technical assumptions.
If launch costs dip toward the modeled $200 USD/kg (≈¥1,440 RMB/kg) range and TPUs/H100s prove robust, orbital compute could unlock a new class of power-dense, solar‑native AI infrastructure.
If those conditions don’t arrive, orbital compute will remain an exploratory niche with heavy engineering tradeoffs.
Related keywords and concepts to explore
- Space-based compute
- Space data centers
- AI training in orbit
- Launch cost per kilogram
- Radiation-hardened hardware
- On-orbit maintenance
- Low Earth orbit (LEO)
- Solar arrays and power density
Final note: keep watching hardware tests like Google’s Suncatcher satellites and Starcloud-1, because their results will decide whether orbital compute stays experimental or becomes a core part of AI infrastructure.

References
- Not enough electricity on Earth? Google and Nvidia begin moving compute into space (地球电力不够用?谷歌、英伟达开始将算力运上太空) – 第一财经 (Yicai)
- Suncatcher: Google’s space compute exploration (Suncatcher research overview) – Google
- Starcloud plans H100 GPU satellite launch and orbital data-center roadmap – Starcloud





