The current global AI buildout is the largest and fastest infrastructure project in the history of mankind. But it carries a strange contradiction: the technology powering the most sophisticated software ever built is being bought, sold, and operated like it's 2005.
The problem today is that users (organisations) still have to rent whole GPU servers, even when their customers use only a fraction of the capacity. For datacentre operators, this means huge CAPEX outlays to dedicate GPU capacity to each client, because no two users can share the same GPU. For the growing wave of neocloud operators - the hundreds of GPU cloud providers now competing with the major hyperscalers - this creates a punishing dynamic: significant capital tied up in hardware running at around 40% utilisation, with every idle hour compounding losses. Many operate at thin or negative margins, trapped between the cost of their hardware and an inability to fully utilise it.

The Hosted·ai founders: Julian, Ditlev, Naren, and James
The root problem is technical, and it's specific to GPUs. Unlike CPU memory, GPU memory can't be oversubscribed: once a workload claims it, nothing else can run there, even if that job is mostly idle. The industry's best attempt so far has been to partition GPUs into smaller fixed slices, but each slice suffers the same problem. It's a structural inefficiency baked into how the entire market operates.
Hosted·ai has built the fix.

One of Hosted·ai's offices in Bangalore
Their software sits between AI workloads and the hardware, letting each workload see the full GPU while dynamically caching memory on and off the chip in real time. It's true elastic virtualisation: 3-5x overcommitment on the same hardware, without performance loss. For operators who were bleeding on idle capacity, this is the difference between negative margins and profitability within 12 months, without any additional capital expenditure.
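To make the overcommitment idea concrete, here is a minimal toy model of the general technique: each workload believes it owns a whole GPU, but only its hot working set stays resident on the device, with cold buffers paged out to host memory. All names, numbers, and the `schedule` function are illustrative assumptions for this sketch, not Hosted·ai's actual API or implementation.

```python
# Toy model of GPU memory overcommitment (illustrative only).
# Workloads each "see" a full device; only active working sets are resident.

DEVICE_MEM_GB = 80  # e.g. a single 80 GB datacentre GPU

class Workload:
    def __init__(self, name, allocated_gb, active_gb):
        self.name = name
        self.allocated_gb = allocated_gb  # what the workload thinks it owns
        self.active_gb = active_gb        # hot working set right now

def schedule(workloads, device_mem_gb=DEVICE_MEM_GB):
    """Keep hot working sets on-device; page the cold remainder to host RAM."""
    resident = sum(w.active_gb for w in workloads)
    paged_out = sum(w.allocated_gb - w.active_gb for w in workloads)
    if resident > device_mem_gb:
        raise RuntimeError("hot sets exceed device memory; must throttle")
    return resident, paged_out

# Three workloads that each reserved "the whole GPU" on a naive cloud,
# but whose hot working sets together still fit on one device:
jobs = [
    Workload("inference-a", allocated_gb=80, active_gb=22),
    Workload("inference-b", allocated_gb=80, active_gb=30),
    Workload("finetune-c",  allocated_gb=80, active_gb=18),
]

resident, paged = schedule(jobs)
overcommit = sum(j.allocated_gb for j in jobs) / DEVICE_MEM_GB
print(f"resident: {resident} GB, paged to host: {paged} GB, "
      f"overcommit: {overcommit:.0f}x")
```

In this toy case, 240 GB of nominal allocations run on one 80 GB device, a 3x overcommit, because only 70 GB is actually hot at once. The hard part in production, which the sketch glosses over, is moving data on and off the device fast enough that workloads never notice.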
Beyond utilisation, the platform enables something structurally important: a federated GPU marketplace where operators can publish spare capacity and subscribe to capacity from others, extending their geographic reach without owning the underlying infrastructure. For European companies and public sector organisations bound by data sovereignty rules, this opens up a credible alternative to relying on a handful of US hyperscalers.

Taking a quick break during GTC!
What’s unusual about Hosted·ai, compared to other early-stage bets we make, is the depth of the team's experience. Hosted·ai’s four founders have worked together for 15 years, across three generations of datacentre infrastructure. Ditlev Bredahl scaled UK2 Group from three people to 400+ and over £100M in revenue during the dotcom era; in the cloud era, he co-founded OnApp, a platform serving 3,000 service providers that was later acquired by Virtuozzo. Julian Chesterfield helped develop the Xen hypervisor at XenSource. James Withall served as CTO of OnApp for 11 years. Narendar Shankar drove service provider ecosystems at OnApp and held cloud leadership roles at VMware and NVIDIA.
This team has built this exact business twice before - first for the dotcom era, then for cloud. They understand better than anyone how infrastructure gets abstracted, productised, and scaled. The GPU market will be orders of magnitude larger than the two that came before it, and they are building the connective tissue that makes it work.
We're proud to lead Hosted·ai's $19M Seed alongside People Ventures and Repeat VC.