The problem
You need AI to do real work — not just generate text, but search your vaults, run legal research, compile reports, and upload results. But LLM API calls are stateless: no file system, no tools, no persistence.
The solution
An agent is a reusable AI executor that runs in an isolated sandbox with the full Case.dev platform. Define it once with instructions, then create runs against it with different prompts. Each run spins up a dedicated container with OpenCode (an AI coding agent) and the casedev CLI pre-authenticated. The agent can search vaults, run legal research, process documents, write files, and upload results — autonomously.
Quick start
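A minimal sketch of the lifecycle in TypeScript, assuming a hypothetical REST surface (the base URL, endpoint paths, field names, and auth scheme below are illustrative, not the documented API); the pages under Next steps and the casedev CLI cover the real interface.

```ts
// All paths, field names, and the auth scheme below are assumptions for
// illustration only; see Create & Configure and Execute Runs for the real API.
const API = "https://api.case.dev"; // assumed base URL
const headers = {
  Authorization: `Bearer ${process.env.CASEDEV_API_KEY}`, // assumed auth scheme
  "Content-Type": "application/json",
};

// 1. Define a reusable agent with instructions
const agent = await fetch(`${API}/agents`, {
  method: "POST",
  headers,
  body: JSON.stringify({
    name: "filing-summarizer",
    instructions: "Search the matter vault, summarize relevant filings, upload a report.",
  }),
}).then((r) => r.json());

// 2. Create a run against the agent with a specific prompt (queued, not yet executed)
const run = await fetch(`${API}/agents/${agent.id}/runs`, {
  method: "POST",
  headers,
  body: JSON.stringify({ prompt: "Compile a summary of all 2023 filings in this matter." }),
}).then((r) => r.json());

// 3. Execute the run: a sandbox spins up and the agent starts working
await fetch(`${API}/runs/${run.id}/execute`, { method: "POST", headers });
```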
How it works
- Define an agent with a name, instructions, and optional model/vault restrictions
- Create a run with a prompt — this queues it without executing
- Execute the run — a durable workflow spins up a sandbox and starts the AI
- Poll or watch — check status or register a callback URL for completion notifications
- Get results — full output plus every tool call, token count, and execution logs
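As a sketch of the last two steps, a simple poll loop might look like the following. It reuses the `API` and `headers` constants from the quick-start sketch; the `/runs/{id}` path and the `status`, `output`, `tool_calls`, `usage`, and `logs` fields are assumptions, not the documented response shape.

```ts
// Poll a run until it finishes, then read its results.
// Endpoint path and response fields are assumed for illustration.
async function waitForRun(runId: string) {
  while (true) {
    const run = await fetch(`${API}/runs/${runId}`, { headers }).then((r) => r.json());
    if (run.status === "completed" || run.status === "failed") {
      console.log(run.output);     // full output from the agent
      console.log(run.tool_calls); // every tool call it made
      console.log(run.usage);      // token counts
      console.log(run.logs);       // execution logs from the sandbox
      return run;
    }
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // wait 5s between polls
  }
}
```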
Create and execute are separate steps by design. This lets you register a watcher callback before execution starts, or batch-create multiple runs and execute them on a schedule.
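As a sketch of why the split is useful, again with hypothetical endpoints and the constants from the quick-start sketch: register a watcher callback before kicking off execution, or queue several runs now and execute them later.

```ts
// Register a completion webhook before executing, so no notification is missed.
// The /watch path and callback_url field are assumptions for illustration.
await fetch(`${API}/runs/${run.id}/watch`, {
  method: "POST",
  headers,
  body: JSON.stringify({ callback_url: "https://example.com/hooks/run-finished" }),
});
await fetch(`${API}/runs/${run.id}/execute`, { method: "POST", headers });

// Or batch-create runs now and execute them later, e.g. from a scheduled job.
const prompts = ["Summarize filing A", "Summarize filing B", "Summarize filing C"];
const queued = await Promise.all(
  prompts.map((prompt) =>
    fetch(`${API}/agents/${agent.id}/runs`, {
      method: "POST",
      headers,
      body: JSON.stringify({ prompt }),
    }).then((r) => r.json()),
  ),
);
for (const r of queued) {
  await fetch(`${API}/runs/${r.id}/execute`, { method: "POST", headers });
}
```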
Next steps
Create & Configure
Define agents with instructions, model selection, and vault restrictions.
Execute Runs
Create runs, execute them, and understand the lifecycle.
Monitor & Analyze
Poll status, get full audit trails, and register webhooks.
Sandbox Environment
What’s inside the sandbox — tools, CLI, and capabilities.