Simple, Transparent Pricing

Predictable, usage-based billing - no hidden fees, no surprises

How Pricing Works

A simple pay-per-use model, billed per minute, so you only pay for what you actually use.

Pricing is calculated across three key dimensions:

Compute
Storage
Networking
No complex billing
No hidden charges
Easy to forecast your costs
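The three dimensions above can be sketched as a simple estimate. This is a minimal illustration, not an official calculator: the compute rate used below is a hypothetical example, while the per-GiB rate matches the storage/networking price published further down this page.

```python
# Sketch of the pay-per-use model: cost accrues across compute,
# storage, and networking. Compute rate here is hypothetical;
# storage/networking use the published $0.00017 per GiB per hour.

def monthly_cost(compute_rate_per_hour, storage_gib, network_gib,
                 hours=730, gib_rate_per_hour=0.00017):
    """Estimate a monthly bill from the three pricing dimensions."""
    compute = compute_rate_per_hour * hours
    storage = storage_gib * gib_rate_per_hour * hours
    network = network_gib * gib_rate_per_hour * hours
    return compute + storage + network

# Example: a $0.01/hour instance (hypothetical rate),
# 20 GiB of storage, 5 GiB of network transfer
print(f"${monthly_cost(0.01, 20, 5):.2f}")
```

Because each dimension is a flat linear rate, forecasting a bill is a single multiplication per dimension.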
What is an Instance Type?
  • A ready-to-go combination of CPU and memory.
  • Just choose what fits your workload.
  • No complex sizing required.

Instance type classes

We offer three instance classes to match your workload needs.

Class | Best For | Configuration | Best Configuration
Regular - r-class (balanced) | General-purpose workloads | Balanced CPU & Memory | 4 GB RAM / 1 vCPU
Memory Intensive - m-class (memory-optimized) | Databases, memory-heavy applications | Higher Memory, Lower CPU | 8 GB RAM / 1 vCPU
CPU Intensive - c-class (compute-optimized) | Compute-heavy tasks | Higher CPU, Lower Memory | 2 GB RAM / 1 vCPU

Prices are converted using live exchange rates

1. Compute Price

Memory Intensive - m-class (memory-optimized)
[Live pricing table: Instance Type | GiB | CPU | Cost per hour | Monthly Price]
CPU Intensive - c-class (compute-optimized)
[Live pricing table: Instance Type | GiB | CPU | Cost per hour | Monthly Price]
Regular - r-class (balanced)
[Live pricing table: Instance Type | GiB | CPU | Cost per hour | Monthly Price]

2. Storage Price: $ 0.00017 per GiB per hour | Monthly: $ 0.12 per GiB

3. Networking Price: $ 0.00017 per GiB per hour | Monthly: $ 0.12 per GiB
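The hourly and monthly GiB rates above are consistent if you assume a 730-hour billing month (24 h × ~30.4 days), a common cloud-billing convention; the page does not state the exact month length, so 730 is an assumption here.

```python
# Check that the hourly and monthly per-GiB rates agree,
# assuming a 730-hour billing month (an assumed convention).
hourly_rate = 0.00017          # $ per GiB per hour (storage and networking)
monthly_rate = hourly_rate * 730
print(round(monthly_rate, 2))  # rounds to the published $0.12 per GiB
```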

Access leading foundation models with simple token-based pricing through a unified API.

Pricing is calculated across three key dimensions:

Model
Input Tokens
Output Tokens
No complex billing
No hidden charges
Easy to forecast your costs
Low Cost AI

Starting from $0.05 per 1M tokens

Multiple Model Providers

OpenAI, Anthropic, Google, Mistral, DeepSeek, Qwen and more.

Unified API

Switch models easily by changing the API endpoint.
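Endpoint-level model switching can be sketched as follows. The base URL and model identifiers below are placeholders, not documented Nimbuz values, and the payload shape assumes an OpenAI-compatible chat-completions format; only the `model` field changes between providers.

```python
import json

# Hypothetical unified endpoint -- not a real Nimbuz URL.
BASE_URL = "https://api.example-nimbuz.dev/v1/chat/completions"

def build_request(model, prompt):
    """Build the HTTP payload; switching providers only changes `model`."""
    return {
        "url": BASE_URL,
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Same code path for any provider's model (identifiers are illustrative):
req = build_request("deepseek-v3.2", "Hello")
print(req["url"])
```

Swapping "deepseek-v3.2" for another model name is the only change needed to move between providers.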

Token-based pricing

Pay only for what you use

No minimums, no lock-in

Learn more about pricing and usage details.

Model | Input Cost ($/1M tokens) | Output Cost ($/1M tokens) | Best For
DeepSeek V3.2 | $0.25 | $0.40 | Low-cost AI
Claude Sonnet 4.6 | $3.00 | $15.00 | High-quality reasoning
Gemini Flash | $0.50 | $3.00 | Fast responses
Mistral 14B | $0.20 | $0.20 | Lightweight AI
GPT-5.2 Chat | $1.75 | $14.00 | Conversational AI
* Use the calculator below to estimate your AI inference costs. These are sample calculations based on your token input/output and selected model. Actual costs may vary.
LLM Cost Calculator
[Interactive calculator: shows Input Cost, Output Cost, and Total Cost per request for your chosen model and token counts.]
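The calculator's arithmetic is linear in token counts at the per-million rates from the table above; a minimal sketch, using the DeepSeek V3.2 rates as the example:

```python
# Per-request cost = tokens / 1M * rate, for input and output separately.
def request_cost(input_tokens, output_tokens,
                 input_price_per_m, output_price_per_m):
    input_cost = input_tokens / 1_000_000 * input_price_per_m
    output_cost = output_tokens / 1_000_000 * output_price_per_m
    return input_cost, output_cost, input_cost + output_cost

# Example: 2,000 input / 500 output tokens on DeepSeek V3.2
# ($0.25 in / $0.40 out per 1M tokens)
inp, out, total = request_cost(2000, 500, 0.25, 0.40)
print(f"${total:.6f}")  # prints $0.000700
```

As the note above says, these are sample calculations; actual billed costs may vary.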

Who can use Nimbuz?

Developers & Freelancers

Build side projects, portfolios, or client apps, with no DevOps hassles.

Small Teams & Startups

Perfect for SaaS tools, e-commerce, or internal apps. Grow at your pace.

Medium & Large Enterprises

Streamline cloud operations, accelerate releases and avoid vendor lock-in.

Team Collaboration