how it works

Three roles.

One window.

Three roles, one window: you the builder, the AI that builds and ships, your customers across many tenants.
01 · the builder

You

an idea · a description
02 · wrkr's substrate

The AI

a multi-tenant SaaS
03 · many tenants

Your customers

isolated logins · USDC settled per tenant

You stay non-technical. The AI does the engineering. Your customers use a real SaaS, paying in USDC.

the hidden cost

It isn't the LLM.

It's the glue.

what you pay today · monthly bill

Vercel · hosting · $
Supabase · database · $
Inngest · cron + workers · $
Pusher · websockets · $
QStash · queues · $
Hookdeck · webhooks · $
Doppler · secrets · $
Datadog · logs · $
Cloudflare R2 · storage · $
Zapier / n8n / Pabbly · workflow glue · $
Cursor / Lovable / Bolt · AI coding tooling · $
eleven dashboards. eleven on-call pages. eleven monthly bills. and the LLM hasn't even been called yet.
what you pay on wrkr · monthly bill

wrkr · VM substrate, all of the above native · $
Claude / OpenAI / Gemini · the LLM you'd pay anyone · $
one VM.
one bill.
one console.

the agent writes the workflow as code. cron, websockets, queues — all native.

You still pay the LLM bill; you'd pay that anywhere. The orchestration tax is what wrkr deletes.
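What "the agent writes the workflow as code" could look like in practice — a minimal sketch, not wrkr's actual API. All names (`InMemoryQueue`, `nightlyTick`, the `settle-usdc` task) are illustrative; the point is that a cron tick and a queue become plain code in one process instead of two hosted services:

```typescript
// Hypothetical sketch: the kind of workflow code an agent could emit
// when cron and queues are process-local primitives rather than
// separate hosted services. All names here are illustrative.

type Job = { tenant: string; task: string };

class InMemoryQueue {
  private jobs: Job[] = [];
  enqueue(job: Job): void { this.jobs.push(job); }
  // Process everything queued so far; return how many jobs ran.
  drain(handler: (job: Job) => void): number {
    const n = this.jobs.length;
    while (this.jobs.length > 0) handler(this.jobs.shift()!);
    return n;
  }
}

// "Cron": on a substrate this would be a scheduled unit; here one
// tick of the schedule is modelled as a plain function call.
function nightlyTick(queue: InMemoryQueue, tenants: string[]): void {
  for (const tenant of tenants) {
    queue.enqueue({ tenant, task: "settle-usdc" });
  }
}

const queue = new InMemoryQueue();
nightlyTick(queue, ["pumpwatch", "whalealert"]);
const processed = queue.drain((job) =>
  console.log(`${job.tenant}: ${job.task}`)
);
console.log(processed); // 2
```

The design point is the absence of glue: no webhook relay between the scheduler and the queue, no dashboard per box in the diagram.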

the orchestration

wrkr is the substrate.

The agent wires the rest.

wrkr gives you a real Linux machine — a dedicated Firecracker microVM, the same KVM-based isolation tech that runs AWS Lambda and Fargate. Multi-unit deploy, multi-tenant routing, persistent state, slug-routed public URLs. The AI inside writes your integrations — auth, payments, email, billing, the lot. Bring your preferred providers, or use wrkr's built-in rails. The AI picks up either way.

substrate: Firecracker microVMs · sub-second resume · real isolation
auth: Clerk · Auth0 · Supabase Auth
payments: Stripe · Paddle · or wrkr's USDC rail
email: Resend · Postmark · SendGrid
LLM inference: Claude · OpenAI · Gemini · or wrkr-managed credits
GPU / image / video: RunPod · Fal · Replicate · Modal
edge CDN: Cloudflare · Fastly

The agent integrates whichever you pick. wrkr doesn't lock you to a stack — it runs the substrate and writes the glue.
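A sketch of what "bring your preferred providers" might look like as a declaration. The field names and the `ProviderConfig` type are hypothetical, not wrkr's real config schema; the providers listed are the ones named above. The idea is that you pick, and the agent writes integration code against your picks:

```typescript
// Hypothetical provider map — schema and field names are illustrative,
// not wrkr's actual configuration format. You declare providers; the
// agent writes the glue code against whichever you chose.

type ProviderConfig = {
  auth: "clerk" | "auth0" | "supabase-auth";
  payments: "stripe" | "paddle" | "wrkr-usdc";
  email: "resend" | "postmark" | "sendgrid";
  inference: "claude" | "openai" | "gemini" | "wrkr-credits";
};

const providers: ProviderConfig = {
  auth: "clerk",
  payments: "wrkr-usdc", // the built-in rail instead of Stripe/Paddle
  email: "resend",
  inference: "claude",
};

console.log(providers.payments); // "wrkr-usdc"
```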

the moat

AI codes on real Linux.

Comment on the preview.

Watch the agent drive Chrome.

Five capabilities the substrate makes possible — every one a native primitive, every one earning its keep.

01 · Coding substrate

AI codes on real Linux.

Drop a prompt — OpenCode runs inside a hosted Linux VM with a real terminal, an ext4 disk that persists, and the package manager you actually need. The agent reads, plans, edits, runs builds. Every project keeps its own folder, its own ports, its own state.

02 · Preview Workbench

Comment on the live preview.

Click any element on the served app. Type 'make this bigger' or 'change to emerald'. The agent edits source. The preview rebuilds — in seconds, on your hosted machine.

03 · Browser substrate

Watch the agent drive Chrome.

VM Chrome with a persistent profile per tenant. Logged-in sessions survive workspace reopens. The agent navigates, fills forms, scrapes — and one click hands the keyboard to you for 2FA, OTPs, captchas. Hand back when you're done.

04 · Multi-tenant deploy

Multi-tenant in one declared release.

One operator. Many crypto-native domains. Declare the release once — wrkr deploys per tenant, each on its own custom domain. pumpwatch.io, whalealert.app, clankerscreener.xyz — all live, all isolated, all USDC-settled.
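The "declare once, deploy per tenant" fan-out can be sketched as a pure function. The `Release`, `TenantDeployment`, and `fanOut` names are hypothetical, not wrkr's API; the domains are the examples from the copy above:

```typescript
// Illustrative sketch of one release fanning out to isolated
// per-tenant deployments. Types and names are hypothetical.

type Release = { app: string; version: string };
type TenantDeployment = { domain: string; version: string; isolated: true };

function fanOut(release: Release, domains: string[]): TenantDeployment[] {
  return domains.map((domain) => ({
    domain,
    version: release.version,
    isolated: true, // each tenant: its own logins, its own USDC settlement
  }));
}

const live = fanOut(
  { app: "screener", version: "1.4.0" },
  ["pumpwatch.io", "whalealert.app", "clankerscreener.xyz"]
);
console.log(live.length); // 3
```

One declaration in, N isolated deployments out — the operator never touches the per-tenant plumbing.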

05 · Multi-unit deploy lane

Web · worker · cron in one ship.

Real Linux on a dedicated Firecracker microVM — sub-second resume, real isolation, the same tech AWS Lambda runs on. Declare web + private worker + scheduled cron + storage as one multi-unit release group. wrkr supervises every unit, projects the public URL, keeps the private workers VM-local.
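A possible shape for a multi-unit release group, under the same caveat: the `Unit` union and field names are invented for illustration, not wrkr's declaration format. It shows the constraint the paragraph describes — one group covering web, worker, and cron, with only the web unit publicly routed:

```typescript
// Hypothetical multi-unit release group. Only the web unit is public;
// worker and cron stay VM-local. Names and schema are illustrative.

type Unit =
  | { kind: "web"; port: number; public: true }
  | { kind: "worker"; public: false }                  // private, never routed
  | { kind: "cron"; schedule: string; public: false }; // scheduled, VM-local

const releaseGroup: { name: string; units: Unit[] } = {
  name: "screener-v2",
  units: [
    { kind: "web", port: 3000, public: true },
    { kind: "worker", public: false },
    { kind: "cron", schedule: "0 3 * * *", public: false }, // 03:00 daily
  ],
};

// The supervisor would project a public URL only for public units.
const publicUnits = releaseGroup.units.filter((u) => u.public);
console.log(publicUnits.length); // 1
```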