Orbital Data Centers: Earth’s Power Isn’t Enough? Google and NVIDIA Are Moving Compute to Space

Orbital data centers are moving from sci‑fi thought experiment toward incremental tests and real satellite launches.

Shortages in electricity supply have emerged as a key bottleneck for building AI data centers.

The consultancy FTI Consulting (Fùshìgāo Zīxún 富事高咨询) predicts that energy demand from U.S. data centers could nearly double by 2027, and requests to connect large new facilities are already straining utilities and grid capacity.

Why the cloud is looking up—literally: space-based compute and the energy problem

Because power on Earth is limited, several Silicon Valley tech companies are exploring putting large-scale compute into orbit.

“Our TPUs are going to space!” Sundar Pichai (Sāngdá’ěr Píchāyī 桑达尔·皮查伊), CEO of Google (Gǔgē 谷歌), wrote recently on social media, announcing a new initiative called “Suncatcher” to study scalable machine‑learning compute systems in space.

It’s not just Google (Gǔgē 谷歌) talking about this future.

Elon Musk (Āilóng Mǎsīkè 埃隆·马斯克), CEO of SpaceX (Tàikōng Tànsuǒ Jìshù Gōngsī 太空探索技术公司), said SpaceX will build data centers in orbit by scaling up its Starlink V3 satellites and equipping them with high‑speed laser links.

Jeff Bezos (Jiéfū Bèisuǒsī 杰夫·贝索斯), founder of Amazon (Yàmǎxùn 亚马逊), said that within 10–20 years humans will be able to build gigawatt‑scale data centers in space.

Who’s shipping chips to orbit? The first compute missions

AI chips will be among the first pieces of hardware sent up.

Google has partnered with satellite‑imagery company Planet Labs to launch two test satellites in early 2027 to explore whether clustered, large‑scale orbital compute makes sense.

Space computing company Starcloud plans to launch a satellite carrying an NVIDIA (Yīngwěidá 英伟达) H100 GPU in November.

The Starcloud‑1 satellite will weigh about 60 kg (roughly the size of a small household refrigerator) and, the company says, deliver up to 100× the GPU performance of earlier space compute platforms.

Starcloud has even described plans for a future 5 GW orbital data center whose solar arrays and radiator panels would span roughly 4 kilometers on a side.

Energy advantage: why space makes sense for AI compute

The biggest advantage of orbital data centers is energy.

In orbit, solar arrays can receive far more continuous sunlight than ground‑based panels (no clouds, no night cycles in certain orbits), reducing dependence on batteries and backup generation.

Starcloud says orbital facilities wouldn’t need water‑based cooling and, aside from launch emissions, could cut lifecycle CO2 emissions by an order of magnitude compared with equivalent ground facilities.

Google’s blog post on the idea notes that the Sun is the most powerful energy source in the solar system, radiating vastly more power than humanity’s total electricity demand, and that in the right orbit solar panels can generate roughly eight times as much power per unit area as typical terrestrial installations, and generate it almost continuously.
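
As a rough sanity check (the numbers below are illustrative assumptions, not figures from Google’s post), the "roughly eight times" claim can be reproduced from capacity factors: an array in a dawn‑dusk sun‑synchronous orbit sits in nearly constant, unfiltered sunlight, while a good terrestrial site averages only a fraction of its rated output once night, weather, and the atmosphere are accounted for.

```python
# Back-of-envelope check of the "roughly 8x" orbital solar claim.
# Assumptions (not from the article): a dawn-dusk sun-synchronous orbit keeps
# the array illuminated ~99% of the time at full space irradiance, while a
# typical utility-scale terrestrial site nets ~20% of rated output on average.

SOLAR_CONSTANT_W_M2 = 1361        # irradiance above the atmosphere
GROUND_PEAK_W_M2 = 1000           # standard test-condition irradiance on Earth

orbital_illuminated_fraction = 0.99   # assumed: near-continuous sunlight
terrestrial_capacity_factor = 0.20    # assumed: typical utility-scale site

orbital_avg_w_m2 = SOLAR_CONSTANT_W_M2 * orbital_illuminated_fraction
terrestrial_avg_w_m2 = GROUND_PEAK_W_M2 * terrestrial_capacity_factor

print(f"orbital average:     {orbital_avg_w_m2:.0f} W/m^2")
print(f"terrestrial average: {terrestrial_avg_w_m2:.0f} W/m^2")
print(f"ratio:               {orbital_avg_w_m2 / terrestrial_avg_w_m2:.1f}x")
# -> about 6.7x with these inputs; closer to 8x for sites with lower capacity factors
```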

Costs, technical hurdles, and the economic case for space-based compute

High launch costs have been the main practical barrier to large‑scale space systems.

Google’s analysis suggests that, based on historical data and projected launch prices, launch costs could fall below $200 per kilogram (about ¥1,440 at an exchange rate of ¥7.20 per US dollar) by the mid‑2030s. At that point, the combined cost of deploying and operating an orbital data center could be comparable to the energy costs of an equivalent ground‑based facility.
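
To see why a roughly $200/kg launch price changes the economics, here is a minimal back‑of‑envelope sketch; the specific power, satellite lifetime, and electricity price are all assumed for illustration rather than taken from Google’s analysis.

```python
# Hedged back-of-envelope: when does launch cost per kW-year of on-orbit power
# become comparable to what a ground data center pays for electricity?
# All inputs below are illustrative assumptions, not figures from Google.

launch_cost_usd_per_kg = 200       # the mid-2030s price point cited in the article
specific_power_w_per_kg = 100      # assumed: delivered watts per kg of launched mass
satellite_lifetime_years = 5       # assumed: service life before replacement

electricity_usd_per_kwh = 0.08     # assumed: industrial power price on the ground
hours_per_year = 8760

# Launch cost amortized per kilowatt of on-orbit power, per year of service
launch_usd_per_kw_year = (launch_cost_usd_per_kg
                          / (specific_power_w_per_kg / 1000)
                          / satellite_lifetime_years)

# What one kilowatt of continuous ground power costs per year
ground_usd_per_kw_year = electricity_usd_per_kwh * hours_per_year

print(f"amortized launch cost: ${launch_usd_per_kw_year:,.0f} per kW-year")
print(f"ground electricity:    ${ground_usd_per_kw_year:,.0f} per kW-year")
# -> roughly $400 vs $700 per kW-year with these assumptions, i.e. the same
#    order of magnitude, which is the shape of the argument in Google's post
```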

That said, major engineering challenges remain: thermal management in vacuum, long‑term on‑orbit system reliability, and radiation hardening.
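
For a sense of scale on the thermal problem, here is a rough sketch with an assumed radiator temperature and emissivity (not figures from any of the companies involved): in vacuum, waste heat can only leave by radiation, so the required radiator area follows from the Stefan‑Boltzmann law.

```python
# Rough sizing of the radiator area needed to reject data-center waste heat in vacuum.
# Radiation is the only heat path, so area follows from the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * T^4   (ignoring absorbed sunlight and Earth IR)

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)

it_load_w = 1_000_000     # assumed: 1 MW of compute to cool
emissivity = 0.9          # assumed: high-emissivity radiator coating
radiator_temp_k = 300     # assumed: ~27 C radiator surface temperature

area_m2 = it_load_w / (emissivity * SIGMA * radiator_temp_k**4)
print(f"radiator area for 1 MW: ~{area_m2:,.0f} m^2 (one-sided, idealized)")
# -> roughly 2,000-3,000 m^2 per megawatt with these assumptions, which is
#    why radiator panels dominate the concept renderings
```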

Google says that in early tests, a next‑generation TPU exposed to simulated near‑Earth orbital radiation in a particle accelerator kept working, but issues such as heat rejection, on‑orbit redundancy, and maintenance still need to be solved.

The two satellites planned for launch in 2027 are intended to test models and TPU hardware under real orbital conditions.

Industry expectations and the road from demos to hyperscale

Tech leaders see space as a potential growth frontier for AI compute.

Some company executives predict that within a decade orbital compute will be a mainstream option for scaling AI training clusters.

Others believe costs will steadily fall over several decades, making larger and larger space‑based facilities economically feasible.

But even supporters acknowledge that the path from demonstration satellites to multi‑GW orbital cloud farms will require breakthroughs in launch economics, on‑orbit manufacturing or assembly, and space operations.

For now the focus is on incremental testing: radiation resilience, communications links, thermal control, and the business case for shipping racks—or individual accelerators—into orbit.

What founders, investors, and operators should watch next

  • Launch price curves: Track announcements from launch providers and market price trends toward the mid‑2030s price thresholds mentioned by Google.
  • Satellite compute demos: Watch the 2027 Planet Labs + Google test satellites and Starcloud‑1 with an NVIDIA (Yīngwěidá 英伟达) H100 GPU for real‑world performance and resilience signals.
  • Power-to-mass economics: Study the ratio of delivered solar power to payload mass for candidate orbits; this is the core energy argument for orbital compute (see the sketch after this list).
  • Thermal and reliability R&D: Evaluate progress on heat rejection designs, redundancy strategies, and radiation‑hardened component roadmaps.
  • Regulatory & ops playbooks: Follow how governments and regulators treat on‑orbit infrastructure, deorbiting responsibilities, and spectrum/laser link rules.
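
To make the power‑to‑mass point concrete, the sketch below (all figures assumed for illustration) compares the effective watts delivered per kilogram of power hardware in an ISS‑like low‑Earth orbit, which passes through Earth’s shadow every orbit and therefore needs batteries, against a dawn‑dusk sun‑synchronous orbit that stays in sunlight almost continuously.

```python
# Screening sketch: effective delivered watts per kilogram of power hardware
# for two candidate orbits. All numbers are illustrative assumptions.

def effective_w_per_kg(array_w_per_kg: float,
                       illuminated_fraction: float,
                       battery_wh_per_kg: float,
                       eclipse_hours_per_orbit: float) -> float:
    """Average delivered watts per kg of (solar array + battery) mass."""
    # Average power over the orbit from 1 kg of solar array
    avg_power_w = array_w_per_kg * illuminated_fraction
    # Battery mass needed to carry that average load through eclipse
    eclipse_energy_wh = avg_power_w * eclipse_hours_per_orbit
    battery_kg = eclipse_energy_wh / battery_wh_per_kg
    return avg_power_w / (1.0 + battery_kg)

# Assumed hardware figures: ~150 W/kg arrays, ~150 Wh/kg batteries.
iss_like  = effective_w_per_kg(150, 0.60, 150, eclipse_hours_per_orbit=0.6)
dawn_dusk = effective_w_per_kg(150, 0.99, 150, eclipse_hours_per_orbit=0.0)

print(f"ISS-like LEO:       {iss_like:.0f} W per kg of power hardware")
print(f"dawn-dusk sun-sync: {dawn_dusk:.0f} W per kg of power hardware")
# -> with these assumptions the eclipse-free orbit delivers more than twice
#    the power per launched kilogram, which is why it keeps coming up in
#    orbital-compute proposals
```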

Quick takeaways for investors and builders

Energy scarcity on Earth is now a strategic constraint for AI scale.

Orbital compute offers a unique value proposition: abundant, continuous solar power and the potential to slash lifecycle CO2 emissions, at least according to early company claims.

Initial launches in 2027 and missions like Starcloud‑1 will be the first hard signals investors and operators can use to judge technical feasibility and cost trajectories.

Even if large orbital cloud farms take decades to reach scale, the near‑term research, IP, and supply‑chain plays (satellite power, radiation shielding, laser comms, thermal radiators) are where smart founders and VCs will find early opportunities.

Final thought

The push toward orbital data centers is a pragmatic response to terrestrial energy limits and an experiment in rethinking infrastructure for AI at scale.
