Google’s Project Suncatcher: Why the Future of AI Might Be in Space

AI keeps hitting the same bottleneck: power. Data centers could draw up to 9% of U.S. electricity by 2035, and new capacity is getting harder to find. Google’s response is bold. “Our TPUs are headed to space,” Sundar Pichai said, introducing Project Suncatcher, a plan to build AI data centers in orbit.

Why ship compute off-planet? Earth-based facilities run into grid limits, land use, and water for cooling. Space offers uninterrupted sunlight, no weather, and a shot at scaling without straining local infrastructure.

Why Orbit Helps

Suncatcher bundles three pieces: sunlight, chips, and lasers.

Orbit choice
A dawn-dusk sun-synchronous orbit stays in sunlight most of the time. Panels there do not deal with clouds or night, so power output is far steadier than on the ground.

Compute
Satellites carry Google’s Trillium v6e TPUs. The idea is to train and run models in space, right next to the energy source.

Networking
Free-space optical links stitch satellites into a single logical data center, moving data at very high rates with tight formation flying.

How The Solar Part Actually Works

Here is a clearer picture of the power chain.

More sunlight, more often
Above the atmosphere, solar panels receive higher-intensity light with no weather losses. In the chosen orbit, the panels see near-continuous illumination as the spacecraft skims the day-night boundary. That raises average output and cuts the need for large batteries.
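As a rough sanity check, here is a back-of-envelope comparison of average panel flux in a dawn-dusk orbit versus on the ground. The solar constant is a real figure; the illumination fraction, clear-sky irradiance, and ground capacity factor are illustrative assumptions, not mission numbers.

```python
# Back-of-envelope: average panel flux in a dawn-dusk orbit vs. on the ground.
# The solar constant is real; the other inputs are illustrative assumptions.
SOLAR_CONSTANT = 1361.0       # W/m^2 above the atmosphere

space_illumination = 0.99     # near-continuous sunlight in dawn-dusk SSO (assumed)
ground_irradiance = 1000.0    # clear-sky peak at the surface, W/m^2 (assumed)
ground_capacity = 0.25        # ground capacity factor with night and clouds (assumed)

space_avg = SOLAR_CONSTANT * space_illumination
ground_avg = ground_irradiance * ground_capacity
ratio = space_avg / ground_avg
print(f"space vs. ground average flux: {ratio:.1f}x")
```

Under these assumptions the orbital panel averages several times the flux of the same panel on the ground, which is where the steadier output and smaller battery claims come from.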

Direct solar-to-compute path
Each spacecraft uses maximum power point trackers to pull the best possible wattage from its arrays. Power conditioning converts the panel output to a stable DC bus that feeds onboard systems and TPUs. Small batteries or supercapacitors smooth brief eclipse periods and load spikes, rather than the full nighttime storage that ground systems often carry.
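The tracking step can be sketched with the classic perturb-and-observe algorithm: nudge the operating voltage, watch whether power rises or falls, and reverse direction when it falls. The panel model and all parameters below are hypothetical; this illustrates the technique, not Google’s flight software.

```python
# Illustrative perturb-and-observe MPPT loop (toy panel model, assumed numbers).

def panel_power(v, v_oc=100.0, i_sc=8.0):
    """Toy PV curve: current collapses sharply near open-circuit voltage."""
    if v <= 0 or v >= v_oc:
        return 0.0
    i = i_sc * (1 - (v / v_oc) ** 12)   # crude single-diode-like shape
    return v * i

def mppt_perturb_observe(v=50.0, step=0.5, iters=500):
    """Walk the operating voltage toward the maximum power point."""
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mp, p_mp = mppt_perturb_observe()
```

After convergence the tracker oscillates within one step of the true maximum power point, which is why a small fixed step size trades tracking speed against steady-state ripple.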

Thermal balance
Panels convert sunlight to electricity and heat. Radiators dump the extra heat to space. Steady sunlight plus predictable thermal loads make temperature control easier to model than a ground site that swings with the weather.
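A quick Stefan-Boltzmann estimate shows how radiator sizing falls out of the heat load. The constant is real; the heat load, emissivity, and temperatures below are assumed for illustration only.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All mission-specific inputs are illustrative assumptions.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/m^2/K^4
heat_load_w = 20_000.0  # assumed waste heat from TPUs and avionics, W
emissivity = 0.9        # typical for radiator coatings (assumed)
temp_k = 300.0          # assumed radiator surface temperature
env_k = 4.0             # deep-space sink temperature, ignoring Earth/albedo loads

# A = Q / (eps * sigma * (T^4 - T_env^4))
area_m2 = heat_load_w / (emissivity * SIGMA * (temp_k**4 - env_k**4))
print(f"radiator area: {area_m2:.1f} m^2")
```

The fourth-power dependence on temperature is why running radiators hotter shrinks them dramatically, and why steady sunlight makes the thermal design point easy to fix in advance.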

Scaling by tiling
You do not build one giant array. You tile many identical solar-compute nodes and link them optically. Adding satellites raises both power and compute in lockstep.

The punchline is stability. Continuous generation lets the fleet behave like a power-plant-plus-data-center in one package, without the big batteries and peaky grid draws that complicate renewable-heavy facilities on Earth.

The Hard Parts Google Had To Tackle

High-speed links
Training large models needs bandwidth on the order of terabits per second between nodes. With satellites flying only 100 to 200 meters apart, laser links can hit very high rates. A lab setup already showed 1.6 Tbps with a single transceiver pair.
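The distance matters because received optical power falls off roughly with the square of link range. A rough calculation, with illustrative distances, shows how much link margin close formation flying buys over a conventional inter-satellite link:

```python
import math

# Received optical power scales roughly as 1/d^2, so shrinking the link
# distance from a typical inter-satellite range down to formation range
# buys enormous margin. Both distances are illustrative assumptions.
typical_isl_m = 1_000_000.0   # ~1000 km conventional inter-satellite link
formation_m = 150.0           # mid-range of the 100-200 m formation spacing

power_gain = (typical_isl_m / formation_m) ** 2
gain_db = 10 * math.log10(power_gain)
print(f"received-power gain from close formation: {gain_db:.0f} dB")
```

Tens of decibels of extra received power is the headroom that lets ordinary transceivers run at terabit-class rates without exotic optics.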

Radiation
Space can flip bits or fry components. Tests with Trillium v6e TPUs under proton beams showed surprising resilience, handling nearly triple the expected five-year dose for low Earth orbit without hard failures.

Formation control
Keeping a cluster tight and stable takes careful dynamics and frequent nudges. Differentiable physics models suggest the needed station-keeping is modest, which keeps propellant budgets reasonable.
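One way to see why a modest station-keeping budget keeps propellant reasonable is the Tsiolkovsky rocket equation. Every input below (delta-v budget, specific impulse, dry mass) is an assumption chosen for illustration, not a mission figure.

```python
import math

# Tsiolkovsky sketch: propellant mass for an assumed station-keeping budget.
# All inputs are illustrative assumptions, not mission figures.
dv_per_year = 20.0      # m/s per year, assumed modest station-keeping budget
years = 5               # assumed service life
isp_s = 1500.0          # electric-propulsion specific impulse (assumed)
g0 = 9.80665            # standard gravity, m/s^2
dry_mass_kg = 500.0     # assumed satellite dry mass

ve = isp_s * g0
prop_kg = dry_mass_kg * (math.exp(dv_per_year * years / ve) - 1)
print(f"propellant over {years} years: {prop_kg:.1f} kg")
```

With efficient electric thrusters, even years of continuous nudging costs only a few kilograms of propellant on these assumptions, which is the sense in which the budget is "modest."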

Launch costs
The economics hinge on cheaper rides to orbit. If prices drop below $200 per kilogram in the mid-2030s, deployment starts to compare with the energy costs of a similar facility on Earth. Google openly credits rapid progress from commercial launch providers for making the plan thinkable.
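A hedged sketch of that comparison: amortize launch cost over a satellite's service life and set it against an assumed terrestrial electricity price. Only the $200 per kilogram threshold comes from the article; the specific mass, lifetime, and power price are illustrative assumptions.

```python
# Amortized launch cost per kilowatt of orbital power capacity, compared with
# an assumed terrestrial electricity cost. Only the $200/kg threshold is from
# the article; everything else is an illustrative assumption.
launch_cost_per_kg = 200.0   # threshold price from the article
kg_per_kw = 10.0             # assumed specific mass of a solar-compute node
lifetime_years = 5           # assumed satellite service life

launch_per_kw_year = launch_cost_per_kg * kg_per_kw / lifetime_years

# Ground comparison: electricity at an assumed $0.08/kWh, running 24/7.
ground_per_kw_year = 0.08 * 24 * 365
print(f"orbit: ${launch_per_kw_year:.0f}/kW-yr vs ground: ${ground_per_kw_year:.0f}/kW-yr")
```

Under these assumptions the amortized launch bill lands in the same range as a year of grid power for the equivalent load, which is the rough sense in which sub-$200/kg launch makes the plan pencil out.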

From Whiteboard To Space

Google and Planet plan two prototype satellites for an early 2027 mission. The goals are to validate TPU behavior in the real environment, exercise the optical mesh, and show that distributed AI tasks can run across multiple spacecraft as if they were one machine.

Risks To Watch

Debris
Dense clusters add congestion. Collisions create fragments that threaten other satellites. Clear end-of-life plans and active avoidance are essential.

Brighter skies
More reflective hardware can interfere with astronomy. Coordination with observatories and design tweaks to reduce brightness will matter.

Practice The Suncatcher Mindset, On Earth

Space-scale ideas live or die on how people work together. The habits that keep a satellite cluster humming, persistence, collaboration, and clear communication, also help teams move fast on hard problems. Modern escape room experiences model this compactly. At Reason, groups practice clear signals, quick experiments, and trust under time pressure, which maps neatly onto Suncatcher's need for tight coordination and rapid iteration. You learn to surface partial insights early, merge them without ego, and make the next decision with imperfect data. That rhythm builds communication muscles you can carry back to product sprints, ops reviews, and research runs, where attention, curiosity, and collaboration unlock more than any single genius could.

Finally…

Project Suncatcher reframes AI infrastructure as an orbital problem. The physics appears feasible, early tests are encouraging, and the cost curve is promising. The tradeoffs are real, especially around debris and astronomy, so stewardship matters as much as engineering. If it works, we get a template for sun-powered compute that scales past terrestrial limits and a wave of networking and software ideas that help systems on Earth, too.