World’s largest orbital computing cluster opens for commercial use

The largest orbital compute cluster is now operational, offering space-based data processing and redefining cloud computing beyond Earth.

Apr 14, 2026 - 22:12
Image Credits: Kepler Communications

Despite the growing buzz around space-based data centres, there are still very few GPUs operating in orbit. That is beginning to change, however, and an early-stage commercial market for orbital computing is gradually taking shape.

The largest computing cluster currently in orbit was deployed in January by Kepler Communications, a Canadian firm. The system includes around 40 Nvidia Orin edge processors distributed across 10 active satellites, which are interconnected through laser communication links to form a coordinated network in space.

Kepler has already secured 18 customers and revealed its latest partner on Monday — Sophia Space, which plans to test its specialised orbital computing software using Kepler’s satellite constellation.

Industry experts believe that full-scale orbital data centres, such as those envisioned by SpaceX or Blue Origin, are unlikely to become a reality until the 2030s. In the meantime, the immediate focus is on processing data directly in orbit to enhance the performance of space-based sensors used by both private companies and government organisations.

Kepler, however, does not position itself as a traditional data centre operator. CEO Mina Mitry explained that the company’s goal is to build infrastructure that supports applications in space. The company aims to provide a networking layer that can serve satellites, as well as aerial systems such as drones and aircraft operating below.

Meanwhile, Sophia Space is working on developing passively cooled computers designed for space environments. One of the biggest challenges in deploying large-scale computing systems in orbit is managing the heat generated by powerful processors. Traditional active cooling systems are heavy and expensive to launch, so Sophia is exploring alternative passive cooling methods.

As part of their collaboration, Sophia will upload its proprietary operating system to one of Kepler’s satellites. The plan is to deploy and configure the system across six GPUs spread over two spacecraft. While this kind of setup is standard in Earth-based data centres, it has never been executed in orbit before. Successfully running this test will be a critical step for Sophia as it prepares for its first satellite launch, currently scheduled for late 2027.

For Kepler, the partnership serves as a demonstration of its network’s capabilities. Currently, the company handles data uploaded from Earth or collected through hosted payloads on its own satellites. Looking ahead, Kepler expects to connect with third-party satellites, enabling broader networking and processing services across space-based systems.

Mitry noted that satellite operators are already designing future systems around this model, particularly for high-power sensors such as synthetic aperture radar. The U.S. military is a significant customer for these capabilities, especially as it develops next-generation missile defence systems that rely on satellites to detect and track threats. Kepler has already conducted a demonstration involving a space-to-air laser communication link for the U.S. government.

This type of edge processing — where data is handled close to where it is generated — is expected to be the first major application of orbital computing infrastructure. It allows for faster response times and more efficient data handling, making it a practical starting point for the industry.
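The bandwidth logic behind edge processing can be illustrated with a minimal sketch (this is purely hypothetical code, not Kepler's or Sophia's actual software): an on-board model scores raw sensor frames, and only the frames containing detections are queued for downlink instead of the full capture.

```python
# Illustrative sketch of on-board edge filtering (hypothetical, not any
# company's real pipeline): score frames in orbit, downlink only detections.

def detect_onboard(frames, threshold=0.9):
    """Keep only frames whose detection score meets the threshold.

    `frames` is a list of (frame_id, score) pairs; the scores stand in
    for the output of a hypothetical on-board inference model.
    """
    return [(fid, score) for fid, score in frames if score >= threshold]

# 1,000 raw frames captured, but only a handful contain anything of interest.
raw = [(i, 0.95 if i % 250 == 0 else 0.1) for i in range(1000)]
downlink = detect_onboard(raw)

print(f"frames captured: {len(raw)}, frames downlinked: {len(downlink)}")
# Shipping 4 detections instead of 1,000 full frames is the bandwidth
# and latency saving that on-orbit edge processing targets.
```

The numbers here are made up, but the shape of the win is real: downlink capacity, not compute, is often the scarce resource for high-rate sensors, so filtering in orbit shrinks what must cross the link.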

This approach differentiates companies like Sophia and Kepler from established players such as SpaceX and Blue Origin, as well as newer startups like Starcloud and Aetherflux, which are raising significant funding to build large-scale orbital data centres equipped with traditional data-centre-grade processors.

Mitry argued that the future of space computing lies more in inference than in training. “Because we have the belief it’s more inference than training, we want more distributed GPUs that do inference, rather than one superpower GPU that has the training workload capacity,” he said. He added that systems which consume large amounts of power but operate at low utilisation are inefficient, whereas Kepler’s GPUs are designed to run continuously.

Looking ahead, once these technologies are validated in orbit, the possibilities could expand significantly. Rob DeMillo pointed out that restrictions on building data centres on Earth — such as a recent ban in a Wisconsin city — could make space-based computing more appealing. Similar discussions are also emerging among lawmakers in the U.S. Congress.

“There are no more data centres in this [city],” DeMillo remarked. “It’s gonna get weird from here.”

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.