You are helping me build an orchestrator that runs inside WarpDrive — a local-first sandbox runtime for AI agents. Below is the complete public integration contract: the lifecycle, the manifest schema, the runtime environment, and the API. Use this as your authoritative reference. Don't invent fields or behaviours that aren't documented here. When you generate code, scripts, or a manifest file, follow this contract exactly. Now help me design and build the orchestrator according to the spec below.

---

# Orchestrator Integration
This guide is for **orchestrator authors** — people writing the runtime that runs *inside* a WarpDrive sandbox. If you just want to use WarpDrive with someone else's orchestrator, you don't need any of this.
WarpDrive’s job is to give your orchestrator a clean, isolated, MCP-native sandbox. Your orchestrator’s job is to do the actual work inside it. The contract between the two is small and deliberate.
## The model in one screen
An orchestrator is a Git repo with a `warpdrive.manifest.yml` at its root. WarpDrive points at that repo, copies it into the sandbox, runs your manifest's `setup.command`, and then invokes the actions you've declared.
The full lifecycle, end to end:
1. The user selects a host workspace path. WarpDrive bind-mounts it into the sandbox at `/ws`.
2. The user selects an orchestrator Git repo + branch. WarpDrive clones it to a host cache.
3. When the sandbox starts, WarpDrive copies the cached orchestrator into the sandbox at `/home/developer/.warpdrive/orchestrators/active`.
4. WarpDrive runs `helm setup <active-orchestrator-path>`. `helm` loads `warpdrive.manifest.yml`, validates it, then runs `setup.command` with `cwd = ORCHESTRATOR_PATH`.
5. Later, the user (or the API) triggers actions. WarpDrive runs `helm exec <active-orchestrator-path> <actionId> ...`.
6. WarpDrive reads orchestrator state from a status JSON file and discovers tmux sessions by matching against your manifest's session patterns.
That’s the entire flow. The rest of this page is the details for each piece.
## The `/ws` contract
The single most important fact for orchestrator authors:
- WarpDrive does not give you a `HOST_ROOT` or `PROJECT_ROOT` env var.
- The selected host workspace is mounted into the sandbox as `/ws`.
- Your orchestrator should treat `/ws` as the workspace root for all real work.
If you need metadata about the selected host path, you can read `/ws/.warpdrive/config.json`. But the practical contract is `/ws`.
## What WarpDrive owns vs what your orchestrator owns

### WarpDrive owns

- Selecting and persisting workspace + orchestrator config
- Mounting the workspace at `/ws`
- Cloning, caching, and copying the orchestrator into the sandbox
- Running `helm setup` and `helm exec`
- Publishing tmux state and reading status files
- Exposing API routes to clients

### Your orchestrator owns

- What `setup.command` does
- What each action does
- Which tmux sessions it creates
- What status JSON it writes
- How it interprets `/ws`
The boundary in one sentence: WarpDrive provides the sandbox; the orchestrator runs the show inside it.
## Manifest schema

The manifest file lives at the orchestrator repo root and must be named:

`warpdrive.manifest.yml`

The current contract is manifest version 2.
### Minimum manifest

```yaml
manifest: 2
name: my-orchestrator
setup:
  command: ./setup.sh
```
### Top-level sections

| Section | Required | Purpose |
|---|---|---|
| `manifest` | yes | Schema version. Must be `2`. |
| `name` | yes | Orchestrator name. |
| `description` | no | Short summary. |
| `version` | no | Orchestrator version. |
| `infrastructure` | no | System dependencies the orchestrator requires. |
| `setup` | yes | The setup command run after the orchestrator lands in the sandbox. |
| `sessions` | no | Logical tmux sessions the orchestrator creates. |
| `actions` | no | User-invokable operations exposed via the API. |
### `infrastructure`

Declare system tools the orchestrator depends on:

```yaml
infrastructure:
  requires: [bash, tmux, git]
  environment:
    - name: ANTHROPIC_API_KEY
      required: true
      description: API key for Claude calls
    - name: OPENAI_API_KEY
      required: false
      description: Optional fallback for evaluation runs
```

`infrastructure.requires` is validated by `helm setup`. If a required tool is missing in the sandbox, setup fails.
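The check is conceptually a lookup on `PATH`. Here is a sketch of the kind of validation `helm setup` performs for `infrastructure.requires` (our own illustration, not helm's actual code; the tool list is trimmed to commands available almost everywhere so the sketch runs standalone):

```shell
#!/usr/bin/env bash
# Illustrative re-implementation of the requires check; helm's real
# validation may differ. The list is trimmed for portability here.
set -euo pipefail

required=(bash sh)   # a real manifest might list bash, tmux, git

for tool in "${required[@]}"; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "missing required tool: $tool" >&2
    exit 1
  fi
done
echo "all required tools present"
```

If any tool is missing, the script exits non-zero, which is exactly how setup failure surfaces.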
### `setup`

```yaml
setup:
  command: ./setup.sh
```

`setup.command` runs once after WarpDrive copies the orchestrator into the sandbox. It is non-interactive — it must not prompt for input. The working directory is the active orchestrator path; `ORCHESTRATOR_PATH` is set in the env to the same value.
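A minimal non-interactive setup script might look like this (a hypothetical `setup.sh`; the directories it creates are illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical setup.sh. helm setup runs this with cwd set to the
# active orchestrator path and ORCHESTRATOR_PATH in the environment.
set -euo pipefail

echo "setting up orchestrator in: ${ORCHESTRATOR_PATH:-$PWD}"

# Prepare orchestrator-local state (relative paths resolve inside the checkout).
mkdir -p .cache logs

# No tmux sessions here: setup must stay non-interactive. Attach
# points belong in explicit actions like open-chief (see below).
echo "setup complete"
```

Anything that would block on input (prompts, `read`, interactive installers) belongs elsewhere or must be given defaults.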
### `sessions`

Sessions describe the logical tmux sessions your orchestrator manages. WarpDrive uses them to surface attach points to the user.

```yaml
sessions:
  - name: chief
    pattern: chief-session
    role: primary
    description: Top-level orchestrator session
  - name: workers
    pattern: 'worker-*'
    role: worker
```

| Field | Required | Notes |
|---|---|---|
| `name` | yes | Logical session name in the manifest. |
| `pattern` | yes | Exact name (`chief-session`) or glob (`worker-*`). Matched against real tmux sessions. |
| `role` | yes | One of `primary`, `support`, `worker`. |
| `description` | no | Free text. |

WarpDrive classifies real tmux sessions by matching their names against `sessions[].pattern`. Your orchestrator is responsible for creating those tmux sessions on the managed socket — see the Runtime contract section below.
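The matching semantics line up with ordinary shell globbing. A small sketch (ours, purely to illustrate how `chief-session` and `worker-*` classify session names; WarpDrive's real matcher runs on the host):

```shell
#!/usr/bin/env bash
# Illustrative classifier mirroring the manifest above.
set -euo pipefail

classify() {
  case "$1" in
    chief-session) echo "primary"   ;;  # exact pattern
    worker-*)      echo "worker"    ;;  # glob pattern
    *)             echo "unmanaged" ;;  # not declared in the manifest
  esac
}

classify "chief-session"   # primary
classify "worker-3"        # worker
classify "scratch"         # unmanaged
```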
### `actions`

Actions are user-invokable operations exposed through the API.

```yaml
actions:
  - id: open-chief
    name: Open Chief
    command: ./scripts/open-chief.sh
    session: chief
  - id: run-eval
    name: Run Evaluation
    command: ./scripts/run-eval.sh
    async: true
    params:
      - name: target
        type: enum
        required: true
        options: [project-alpha, research-bench]
      - name: model
        type: string
        required: false
        description: Override the default model
```

| Field | Required | Notes |
|---|---|---|
| `id` | yes | Action ID used in API calls. |
| `name` | yes | Display name. |
| `command` | yes | Script run inside the sandbox via `helm exec`. |
| `description` | no | Free text. |
| `session` | no | If set, must reference a `sessions[].name`. WarpDrive will resolve attach info on success. |
| `async` | no | If `true`, the action is fire-and-forget. |
| `params` | no | List of parameters the action accepts. |
### Action params

```yaml
params:
  - name: target
    type: enum
    required: true
    options: [a, b, c]
```

| Field | Required | Notes |
|---|---|---|
| `name` | yes | Param name. Becomes a `PARAM_<NAME>` env var when the action runs. |
| `type` | yes | One of `string`, `enum`, `number`, `boolean`. |
| `required` | no | Default `false`. |
| `description` | no | Free text. |
| `options` | yes if `type: enum` | List of allowed values. |

When the API invokes an action with params, WarpDrive validates them on the host first, then `helm exec` turns them into `PARAM_<NAME>` env vars in the action's environment.
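From the action script's point of view, params are just environment variables. A hypothetical `run-eval.sh` might read them like this (the two `export` lines simulate what `helm exec` would have set after host-side validation; a real action script would contain only the reads):

```shell
#!/usr/bin/env bash
# Hypothetical run-eval.sh. The exports below stand in for the
# environment helm exec provides; they are not part of the script.
set -euo pipefail

export PARAM_TARGET="project-alpha"   # simulated: required enum param
export PARAM_MODEL=""                 # simulated: optional param left empty

TARGET="${PARAM_TARGET:?required param 'target' missing}"
MODEL="${PARAM_MODEL:-default-model}"   # fall back when the optional param is unset or empty

echo "evaluating ${TARGET} with model ${MODEL}"
```

Using `${VAR:?message}` makes a missing required param fail loudly instead of running with an empty value.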
### Validation rules

- `manifest` must be the integer `2`.
- `setup.command` is required.
- `actions[].session` must reference a declared `sessions[].name`.
- `params` of `type: enum` must declare `options`.
## Runtime contract

This is the environment your orchestrator can rely on.

### Paths

```
/ws                                               # workspace root (host bind-mount)
/ws/.warpdrive/                                   # WarpDrive workspace metadata
/ws/.warpdrive/orchestrator/status.json           # default status file (writable)
/ws/.warpdrive/tmux/state.json                    # tmux state (read-only, published by WarpDrive)
/home/developer/.warpdrive/orchestrators/active   # the active orchestrator checkout
```
### Environment variables

| Variable | When | Purpose |
|---|---|---|
| `WARPDRIVE_SANDBOX_NAME` | Always in sandbox | Sandbox identifier. |
| `WARPDRIVE_TMUX_SOCKET` | Always in sandbox | Tmux socket name to use for managed sessions. |
| `WARPDRIVE_STATUS_FILE` | Always in sandbox | Path your orchestrator writes status JSON to. |
| `ORCHESTRATOR_PATH` | During `helm setup` and `helm exec` | The active orchestrator checkout path. |
| `PARAM_<NAME>` | During `helm exec` action runs | Per-param values from API invocation. |
### Things not to rely on

WarpDrive does not give you any of these:

- `HOST_ROOT`
- `PROJECT_ROOT`
- The raw host workspace path as an env var
- A second tmux server under your control
- Any custom commands not declared as manifest actions

If you need the host path for debugging or metadata, read `/ws/.warpdrive/config.json`. For real work, use `/ws`.
### How `helm setup` runs

`helm setup <orchestratorPath>` does:

1. Loads and validates `warpdrive.manifest.yml`.
2. Validates `infrastructure.requires`.
3. If sessions are declared, requires `WARPDRIVE_TMUX_SOCKET` to be set.
4. Runs `setup.command` with `cwd = orchestratorPath` and `ORCHESTRATOR_PATH` injected into the env.
### How `helm exec` runs

`helm exec <orchestratorPath> <actionId> key=value ...` does:

1. Loads and validates the manifest.
2. Finds the action by `id`.
3. Converts each `key=value` arg into a `PARAM_<NAME>` env var.
4. Runs the action's `command` with `cwd = orchestratorPath` and `ORCHESTRATOR_PATH` injected.
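The `key=value` to `PARAM_<NAME>` conversion can be sketched as follows (our own re-implementation for illustration, not helm's code; uppercasing the param name is an assumption suggested by the `PARAM_<NAME>` spelling):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the arg-to-env conversion.
set -euo pipefail

args=("target=project-alpha" "model=claude-x")   # as passed to helm exec

for kv in "${args[@]}"; do
  key="${kv%%=*}"    # text before the first '='
  value="${kv#*=}"   # text after the first '='
  # Assumption: helm uppercases the param name to form PARAM_<NAME>.
  export "PARAM_$(printf '%s' "$key" | tr '[:lower:]' '[:upper:]')=$value"
done

echo "$PARAM_TARGET $PARAM_MODEL"
```

Splitting on the *first* `=` matters so values containing `=` survive intact.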
## tmux integration

WarpDrive treats tmux as part of the orchestrator contract surface. If your manifest declares sessions, your orchestrator must create them on the managed tmux socket.

Always use the managed socket:

```shell
tmux -L "$WARPDRIVE_TMUX_SOCKET" new-session -d -s chief-session
```

Do not use bare `tmux` if you want supported behavior — you'll get a different socket and WarpDrive won't see your sessions.

Session classification is automatic: WarpDrive matches real tmux session names against `sessions[].pattern` from the manifest. The match supports exact names (`chief-session`) and globs (`worker-*`).
### Open-action idiom

User-facing attach points should be explicit actions. The convention is:

- The manifest declares a session (e.g. `chief`).
- The orchestrator's `setup.command` does not auto-create the session — setup is non-interactive.
- An action like `open-chief` creates or exposes the session, and is invoked when the user wants to attach.

When the API invokes an action that declares `session: chief`, WarpDrive resolves the resulting tmux session and returns attachment info to the client.
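A hypothetical `open-chief.sh` following this idiom, written to be idempotent (inside a sandbox `WARPDRIVE_TMUX_SOCKET` is always set; the fallback socket name and the tmux availability guard exist only so the sketch runs standalone):

```shell
#!/usr/bin/env bash
# Hypothetical open-chief.sh: ensure chief-session exists on the
# managed socket. WarpDrive resolves attach info after it runs.
set -euo pipefail

SOCKET="${WARPDRIVE_TMUX_SOCKET:-warpdrive-demo}"

if command -v tmux >/dev/null 2>&1; then
  # Create the session only if it doesn't already exist (idempotent).
  if ! tmux -L "$SOCKET" has-session -t chief-session 2>/dev/null; then
    tmux -L "$SOCKET" new-session -d -s chief-session
  fi
  echo "chief-session ready on socket: $SOCKET"
else
  echo "tmux not available; skipping (sketch only)"
fi
```

Because the action may be invoked repeatedly, the `has-session` check keeps repeat calls from failing on an existing session.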
## Status contract

WarpDrive reads orchestrator status from a JSON file, not by polling commands.

The default path is:

`/ws/.warpdrive/orchestrator/status.json`

WarpDrive injects this path as `WARPDRIVE_STATUS_FILE`. Your orchestrator writes to that file.

### Required shape

```json
{
  "health": "healthy",
  "summary": "Chief running",
  "updatedAt": "2026-04-03T12:34:56Z"
}
```

Required fields:

- `health` — one of `healthy`, `degraded`, `offline`.
- `summary` — short human-readable status string.
- `updatedAt` — ISO 8601 timestamp.

Optional fields (sections, metadata, etc.) are allowed.
### Atomicity

Status writes must be atomic. Replace the whole file on each update — write to a temp file and rename, or use a single buffered write. If multiple writers append fragments to the same file, WarpDrive's reader will fail because it does a strict `JSON.parse()` on the whole file.
If the file is missing, unreadable, or invalid, WarpDrive reports offline or degraded accordingly.
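The write-temp-then-rename pattern looks like this in a shell orchestrator (a sketch; the local `./status.json` fallback exists only so it runs outside a sandbox, where WarpDrive always injects `WARPDRIVE_STATUS_FILE`):

```shell
#!/usr/bin/env bash
# Sketch of an atomic status update: write a complete JSON document
# to a temp file, then rename it over the status file in one step.
set -euo pipefail

STATUS_FILE="${WARPDRIVE_STATUS_FILE:-./status.json}"   # fallback for running outside a sandbox
mkdir -p "$(dirname "$STATUS_FILE")"

TMP="$(mktemp "${STATUS_FILE}.XXXXXX")"
cat > "$TMP" <<EOF
{
  "health": "healthy",
  "summary": "Chief running",
  "updatedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
mv -f "$TMP" "$STATUS_FILE"   # rename is atomic on the same filesystem
```

The temp file lives next to the target so the final `mv` never crosses a filesystem boundary, which is what keeps the rename atomic.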
## Public API

WarpDrive exposes these REST endpoints. Clients (UI, CLI, your tooling) drive the orchestrator through them.

### Configuration

- `PUT /api/config/workspace` — set the host workspace path
- `GET /api/config/workspace` — read the current workspace path
- `PUT /api/config/orchestrator` — set the orchestrator Git source (repo, branch)
- `GET /api/config/orchestrator` — read the current orchestrator config

### Sandbox + orchestrator lifecycle

- `POST /api/sandboxes/default/start` — start the sandbox
- `POST /api/sandboxes/default/orchestrator/setup` — apply the configured orchestrator
- `GET /api/sandboxes/default/orchestrator/status` — read the parsed status JSON
- `GET /api/sandboxes/default/orchestrator/actions` — list available actions
- `GET /api/sandboxes/default/orchestrator/sessions` — list resolved tmux sessions
- `POST /api/sandboxes/default/orchestrator/actions/:actionId` — invoke an action (with optional params in body)
### Action invocation

The action endpoint does not run your script directly. It runs:

```shell
helm exec /home/developer/.warpdrive/orchestrators/active <actionId> ...
```

over SSH inside the sandbox. WarpDrive validates params on the host, then `helm exec` translates them into `PARAM_<NAME>` env vars when running the action's command.
If the action declares a session, WarpDrive resolves the matching tmux session after the action runs and returns attach info in the response.
### Current limitation
Only the default sandbox is supported by the orchestrator API paths today. Multi-sandbox orchestration is planned but not yet exposed.
## Author checklist

If you're building or adapting an orchestrator for WarpDrive, this is the working checklist:

- Put `warpdrive.manifest.yml` at the repo root.
- Use `manifest: 2`.
- Make `setup.command` non-interactive — no prompts, no waits for input.
- Treat `/ws` as the workspace root for all real work.
- Use `ORCHESTRATOR_PATH` (or the current working directory) for orchestrator-local files.
- Use `tmux -L "$WARPDRIVE_TMUX_SOCKET" ...` for all tmux operations.
- Write a single valid JSON document to `WARPDRIVE_STATUS_FILE`. Make writes atomic.
- Make user-facing attach/open operations explicit manifest actions (e.g. `open-chief`).
- If an action accepts params, read them from `PARAM_<NAME>` env vars.
- Don't assume WarpDrive will call any custom runtime command unless it's declared as an action.
## TL;DR

- The orchestrator source lives in `.warpdrive/config.json` under `sandbox.orchestrator`.
- The selected host workspace is mounted into the sandbox as `/ws` — that's your workspace root.
- The orchestrator is copied into the sandbox at `/home/developer/.warpdrive/orchestrators/active`.
- `helm setup` and `helm exec` run inside that active checkout. They expose `ORCHESTRATOR_PATH`.
- tmux integration goes through `WARPDRIVE_TMUX_SOCKET`.
- Status integration goes through `WARPDRIVE_STATUS_FILE`.
- Everything else is up to your orchestrator.