pricing

One persistent VM.

One bill.

Every plan runs on a dedicated Firecracker microVM — the same isolation tech AWS Lambda runs on. Bring your own model API keys, or use wrkr-managed credits when you don't want to deal with provider auth. Settled in USDC on Base.
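Since plans settle in USDC, which is a 6-decimal token on-chain, a dollar price maps to an integer amount of base units. A minimal sketch of that conversion (illustrative only, not wrkr's billing code):

```python
# Illustrative only: USDC carries 6 decimal places on-chain, so a USD
# price must be scaled by 10**6 before it can be sent as a token amount.
USDC_DECIMALS = 6

def usd_to_usdc_units(usd: float) -> int:
    """Convert a USD price to USDC base units (integer, 6 decimals)."""
    return round(usd * 10 ** USDC_DECIMALS)

# Example: the $29/mo plan settles as 29,000,000 USDC base units.
print(usd_to_usdc_units(29))  # 29000000
```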

infrastructure
Firecracker microVMs · unlimited projects · real Linux
plan
Builder
$9/mo
1 vCPU
1 GB RAM
20 GB disk
sleep-on-idle

Personal projects. Side bets. The VM freezes when idle, wakes on workspace open.

Start building
most popular
plan
Ship
$29/mo
2 vCPU
4 GB RAM
50 GB disk
always-on · 24/7

First paying customers. RAM committed 24/7 — production webhooks, customer-facing bots, scheduled jobs.

Ship a product
plan
Scale
$79/mo
4 vCPU
8 GB RAM
100 GB disk
always-on · 24/7

Real revenue. Hundreds of tenants. Multi-unit deploys with private workers + cron + persistent state.

Scale tenants
plan
Power
$199/mo
8 vCPU
16 GB RAM
200 GB disk
always-on · 24/7

Heavier workloads. A larger Chrome browser session graph, more persistent state, more concurrent tenants.

Run at scale

llm payment rails

Two ways to pay for inference.

Hot-switch between them at any time. The agent inside doesn't care which is active — it just makes the call.

BYOM
bring your own keys

wrkr never sees the keys. Free-tier-friendly. Full provider catalog from OpenCode.

  • 75+ LLM providers via OpenCode + Models.dev
  • Anthropic, OpenAI, Google, Groq, Together, Fireworks, OpenRouter, Hugging Face, local models — all supported
wrkr-managed
wrkr custodies the keys

Pay per call from prepaid credits in your wrkr balance. Top up with USDC on Base, or with USD via Stripe once that's added.

  • transparent metering · per-call cost truth
  • hot-switch BYOM ↔ managed at any time