build here

Three things to ship.

Or anything else you dream up.

01 · Alpha + risk

AI memecoin screeners

Track every Clanker launch in real time. Score contracts for risk, surface alpha, flag honeypots. Paid dashboards with token-gated tiers — CT alpha aggregators, whale trackers, contract risk scanners. USDC-settled subscriptions.
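A screener like this boils down to scoring launch metadata against a few red flags. A minimal sketch, assuming hypothetical signal names — not a real Clanker API shape, and a real honeypot check would simulate a sell rather than score metadata:

```typescript
// Hypothetical risk signals for a freshly launched token contract.
// Field names are illustrative only.
interface LaunchSignals {
  liquidityLockedDays: number; // 0 means liquidity is unlocked
  topHolderPct: number;        // largest wallet's share of supply, 0..1
  sellTaxPct: number;          // transfer tax applied on sells
  hasMintFunction: boolean;    // owner can mint new supply
  verifiedSource: boolean;     // source code published on the explorer
}

// Weighted heuristic: higher score = higher risk, capped at 100.
function riskScore(s: LaunchSignals): number {
  let score = 0;
  if (s.liquidityLockedDays < 30) score += 30;
  if (s.topHolderPct > 0.2) score += 25;
  if (s.sellTaxPct > 10) score += 20;
  if (s.hasMintFunction) score += 15;
  if (!s.verifiedSource) score += 10;
  return score; // 0 (clean) .. 100 (avoid)
}
```

The weights here are arbitrary; a paid tier would tune them against labeled rug pulls.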

02 · Per-project bots

Farcaster + Telegram bot factories

Custom-trained bots per project — Telegram trading bots, Farcaster auto-posters, raid coordination, airdrop alert bots, onchain research copilots. Multi-tenant from day one, each project on its own subdomain.
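"Multi-tenant from day one, each project on its own subdomain" usually means routing on the Host header. A sketch with hypothetical tenant names and domain:

```typescript
// Illustrative multi-tenant router: resolve a bot tenant from the
// request's Host header, e.g. "degen.wrkr.example" -> tenant "degen".
// Tenant names and the domain are placeholders.
interface Tenant {
  id: string;
  platform: "telegram" | "farcaster";
}

const tenants = new Map<string, Tenant>([
  ["degen", { id: "degen", platform: "telegram" }],
  ["based", { id: "based", platform: "farcaster" }],
]);

// The first DNS label identifies the project; unknown subdomains
// get no tenant and can be rejected upstream.
function resolveTenant(host: string): Tenant | undefined {
  const sub = host.split(".")[0];
  return tenants.get(sub);
}
```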

03 · Token-gated dashboards

Airdrop + claim portals

Per-launch dashboards with allowlist verification, claim flows, holder tiers, token-gated meme studios. Live data over websockets, persistent state, custom-domain projection per tenant.
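The allowlist check and holder tiers reduce to two small functions. A sketch with placeholder addresses and arbitrary tier thresholds — production claim flows typically verify a Merkle proof onchain, with a check like this only gating the UI:

```typescript
// Offchain allowlist gate (addresses are placeholders). Addresses are
// lowercased so checksummed and lowercase forms compare equal.
const allowlist = new Set(["0xaaa1", "0xbbb2"].map((a) => a.toLowerCase()));

function canClaim(address: string): boolean {
  return allowlist.has(address.toLowerCase());
}

// Holder tiers by token balance (thresholds are illustrative).
function holderTier(balance: bigint): "none" | "bronze" | "gold" {
  if (balance >= 1_000_000n) return "gold";
  if (balance >= 10_000n) return "bronze";
  return "none";
}
```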

what runs inside

Run the agents you already use.

On any SOTA LLM you choose.

wrkr ships every native AI coding harness as a first-class option. Each harness handles models, sessions, MCP, slash commands, the agent loop. The substrate runs them all.

75+ LLM providers via Models.dev

OpenCode pulls its provider catalog from Models.dev — the registry every major lab and inference platform contributes to. When a SOTA model lands at any of them, OpenCode picks it up automatically. wrkr inherits all of it.

model labs
Anthropic · OpenAI · Google · xAI · Mistral · DeepSeek · Cohere
inference platforms
Groq · Together · Fireworks · Cerebras · Replicate · OpenRouter
cloud catalogs
AWS Bedrock · Azure · Hugging Face
local
Ollama · LM Studio · llama.cpp

Bring your own keys (BYOK) or use wrkr-managed credits. The harness inside the VM doesn't care which is active — it just makes the call.
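That indifference is just a fallback at key-resolution time. A minimal sketch, assuming a hypothetical shape for how a tenant key and the managed pool might be represented:

```typescript
// Key resolution: a tenant-supplied key (BYOK) wins; otherwise fall
// back to the platform-managed credit pool. The harness receives a
// key either way and never learns which path was taken.
interface KeySource {
  key: string;
  managed: boolean; // true when billed against wrkr credits
}

function resolveApiKey(
  tenantKey: string | undefined,
  managedKey: string,
): KeySource {
  if (tenantKey && tenantKey.length > 0) {
    return { key: tenantKey, managed: false };
  }
  return { key: managedKey, managed: true };
}
```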