Deploying applications on traditional cloud platforms often trades flexibility for convenience. Vendor lock-in drives up migration costs, API incompatibilities force rewrites, and billing models create unpredictable expenses. Centralized control also raises concerns about data transparency, censorship, and compliance across regional jurisdictions.
Fluence eliminates these constraints. When you deploy on Fluence, you gain a consistent interface across independent data centers without being tied to a single provider. You choose the hardware profile and geographic location, retain full control over data handling, and scale resources on your terms.
This guide walks through how to deploy on Fluence Virtual Servers step by step so you can test, launch, and manage workloads with confidence.
Platform Overview and Roadmap
Fluence operates as a compute marketplace rather than a single provider. Offers come from established Tier III and Tier IV data centers worldwide, many holding certifications such as ISO 27001, SOC 2, and PCI DSS. Each provider lists available resources, giving transparency into cost, hardware specifications, and location.

Key advantages include:
- Lower cost than hyperscalers due to transparent per-epoch pricing (billed every 24 hours).
- No lock-in, since workloads can be redeployed to different providers without rewriting.
- Hardware choice, with the ability to filter by CPU type (e.g., AMD EPYC vs Intel Ice Lake), memory generation, or storage medium (SSD vs NVMe).
Available today: Virtual Servers across multiple global locations. Each VM is built on standardized “basic configurations” (e.g., 2 vCPU, 4GB RAM, 25GB storage), with options to add extra NVMe or SSD storage.
On the roadmap: Containers, GPU-backed instances for AI/ML workloads, and managed services for databases and orchestration.
Deploying Through the Web Console
The Fluence Console is the easiest entry point. After registering (via email, GitHub, or Google), you receive a self-custodial wallet backed by WebAuthn-based MPC, which removes the risk of seed-phrase loss while ensuring Fluence never holds your private keys.
When deploying:
- Choose a data center location and review its tier/certifications.
- Pick a VM configuration (vCPUs, RAM, and minimum 25GB DAS storage).
- Add extra storage if needed (up to per-VM limits).
- Assign a name and configure networking (up to 10 open TCP/UDP ports; all others remain closed by default).
- Select an OS image. You can use defaults (Ubuntu, Debian) or supply a custom .qcow2 or .img file via a public URL.
- Provide at least one SSH key (RSA, ECDSA, or Ed25519) to enable access.
Once launched, the VM appears under “Running Instances,” with its public IP and metadata available within minutes.
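The SSH key step above is the one piece you prepare locally. A minimal sketch using ssh-keygen with an Ed25519 key (one of the three accepted types); the key file name and the `ubuntu` login user are illustrative and depend on your setup and chosen OS image:

```shell
# Generate an Ed25519 key pair (accepted types: RSA, ECDSA, Ed25519).
# -N "" sets an empty passphrase; use a real one for production keys.
ssh-keygen -t ed25519 -f fluence_vm_key -N "" -C "fluence-vm"

# Paste the contents of the .pub file into the console's SSH key field.
cat fluence_vm_key.pub

# Once the VM is Active and shows a public IP, connect with the private key
# (the login user depends on the image, e.g. "ubuntu" on Ubuntu images):
#   ssh -i fluence_vm_key ubuntu@<PUBLIC_IP>
```

Remember that only the ports you explicitly open (up to 10) accept traffic, so make sure port 22 is among them if you plan to manage the VM over SSH.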
Managing Instances After Deployment
Every active VM is visible in the console, showing:
- VM ID and status (Launching, Active, Terminated, etc.).
- Resource breakdown, including CPU model, RAM type, and storage details if provided by the data center.
- Datacenter info, including country, city, tier, and certifications.
- Next billing cycle, which occurs every day at 5:55 PM UTC.
Currently, the main management action is Terminate, which stops the VM and ends the rental; reboot, reset, and rebuild options are planned. If a provider forcibly terminates a VM (e.g., for insufficient funds or a policy violation), the record remains in your dashboard until the case is finalized, at which point reserved funds are released.
Deploying Programmatically with the API
For automation, the Fluence API exposes endpoints at api.fluence.dev.
Typical workflow:
- Search for offers with /marketplace/offers, filtering by hardware (e.g., AMD EPYC with NVMe), region, and budget.
- Estimate cost with /vms/v3/estimate to get projected USDC charges for your chosen configuration.
- Deploy VMs with /vms/v3, providing name, OS image URL, open ports, and SSH keys.
- Manage deployments by querying /vms/v3 for status, patching open ports, or deleting VMs that are no longer needed.
All endpoints return JSON and require an API key as a Bearer token.
How Abuse Cases Are Handled
Providers or local authorities may report abusive workloads (such as DDoS or malware hosting). When this occurs, Fluence investigates, suspends the VM, and may capture a snapshot for analysis. Cases are reviewed individually, balancing compliance with user rights.
Scaling and Migration Today
At present, VM migration is manual. If you want to move workloads across providers, you deploy a new VM, copy over data, and cut traffic to the old one. Work is underway on intra–data-center migration, followed by cross-provider migration. Once implemented, this will allow near-seamless scaling and failover.
Conclusion
Deploying on Fluence feels familiar but offers more freedom than traditional clouds. The console gives a simple way to configure servers, while the API supports automation and fine-grained hardware targeting.
Fluence bridges the gap between decentralized resilience and enterprise-grade compute, combining transparency, flexibility, and global infrastructure.
Note: Fluence Virtual Servers are currently open only to alpha testers, allowing granular control over compute, networking, and OS images. Soon, GPU container VMs will be added to the stack (among other features to come).
Full documentation lives at fluence.dev/docs.