Fluence Launches Global and Affordable GPU Compute for AI

Deploy at 85% lower cost

We are excited to announce the launch of GPU compute for AI workloads at substantially lower cost than centralized cloud providers. GPU containers are available immediately through the Fluence Platform, with GPU virtual machines and bare metal support planned for the coming weeks. This launch is supported by a partnership with Spheron Network as one of our key compute providers.

Addressing AI’s Compute Bottleneck

AI projects and companies face rising compute costs and hidden fees from hyperscalers, forcing teams into long-term, rigid pricing structures. Fluence is addressing customer demand for open, low-cost, short-term GPU access by expanding our offering from CPU-based virtual servers into GPUs, giving customers direct access to high-performance hardware at up to 85% lower cost than the large clouds. Adding GPUs builds on our expertise in CPU compute and gives Fluence a key product for serving the growing AI ecosystem.

Our CPU marketplace currently generates over $1 million in ARR with a pipeline exceeding $8 million in the billion-dollar third-party node provider market. Customers have saved $3.5 million through Fluence compared with centralized clouds.

Fluence’s decentralized infrastructure supports thousands of active blockchain nodes, and customers include Antier — ranked among the world’s largest blockchain service providers — as well as NEO, RapidNode, Zeeve, dKloud, AR.IO, Tashi, and Nodes Garden.

Our Vision 2026 roadmap calls for scaling enterprise-grade decentralized compute and building a global GPU-powered marketplace that supports the wide range of features customers have requested. The partnership with Spheron expands our provider network, which already includes Kabat, Piknik, and other top data center facilities.

“Meeting the exponentially growing demand for AI requires cost-efficient access to enterprise-grade GPUs. By expanding our network using Spheron’s decentralized GPUs, we give developers that access immediately, making our platform the go-to choice for serious AI builders scaling to the next level,” said Evgeny Ponomarev, Co-Founder of Fluence.

“Access to GPUs has been gated by scarcity and cost. Partnering with Fluence removes those barriers, giving AI teams dependable, decentralized compute power to move faster from research to deployment,” added Prashant Maurya, Co-Founder of Spheron Network.

GPU Containers Live Today, VMs and Bare Metal Coming Next

GPU containers are live now on the Fluence Console, optimized for fine-grained AI workloads. Support for GPU VMs and bare metal will follow in the coming weeks, expanding options for AI projects and companies seeking decentralized, enterprise-grade performance.

Developers can start deploying today at fluence.network/gpu and review documentation at fluence.dev/docs.

Fluence’s entry into GPUs marks a decisive step for DePIN: affordable, enterprise-grade compute delivered through a decentralized marketplace, so builders can move faster with fewer constraints.

Launch GPUs in seconds at 85% lower cost

Start deploying on Fluence now
