Orchestrator Integration

This guide is for orchestrator authors — people writing the runtime that runs inside a WarpDrive sandbox. If you just want to use WarpDrive with someone else’s orchestrator, you don’t need any of this.

WarpDrive’s job is to give your orchestrator a clean, isolated, MCP-native sandbox. Your orchestrator’s job is to do the actual work inside it. The contract between the two is small and deliberate.

The model in one screen

An orchestrator is a Git repo with a warpdrive.manifest.yml at its root. WarpDrive points at that repo, copies it into the sandbox, runs your manifest’s setup.command, and then invokes the actions you’ve declared.

The full lifecycle, end to end:

  1. The user selects a host workspace path. WarpDrive bind-mounts it into the sandbox at /ws.
  2. The user selects an orchestrator Git repo + branch. WarpDrive clones it to a host cache.
  3. When the sandbox starts, WarpDrive copies the cached orchestrator into the sandbox at /home/developer/.warpdrive/orchestrators/active.
  4. WarpDrive runs helm setup <active-orchestrator-path>.
  5. helm loads warpdrive.manifest.yml, validates it, then runs setup.command with cwd = ORCHESTRATOR_PATH.
  6. Later, the user (or the API) triggers actions. WarpDrive runs helm exec <active-orchestrator-path> <actionId> ....
  7. WarpDrive reads orchestrator state from a status JSON file and discovers tmux sessions by matching against your manifest’s session patterns.

That’s the entire flow. The rest of this page is the details for each piece.

The /ws contract

The single most important fact for orchestrator authors:

  • WarpDrive does not give you a HOST_ROOT or PROJECT_ROOT env var.
  • The selected host workspace is mounted into the sandbox as /ws.
  • Your orchestrator should treat /ws as the workspace root for all real work.

If you need metadata about the selected host path, you can read /ws/.warpdrive/config.json. But the practical contract is /ws.
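
A minimal sketch of reading that metadata from a script. The hostPath key is a hypothetical example, not a documented field; check the actual shape of config.json in your sandbox before relying on it.

```shell
# Sketch only: extract the selected host path from WarpDrive's workspace
# metadata. The "hostPath" key is an assumption for illustration; inspect
# your sandbox's config.json for the real field names.
warpdrive_host_path() {
  config="${1:-/ws/.warpdrive/config.json}"
  [ -f "$config" ] || return 1
  # Minimal grep-style extraction so this works without jq.
  sed -n 's/.*"hostPath"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$config"
}
```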

What WarpDrive owns vs what your orchestrator owns

WarpDrive owns

  • Selecting and persisting workspace + orchestrator config
  • Mounting the workspace at /ws
  • Cloning, caching, and copying the orchestrator into the sandbox
  • Running helm setup and helm exec
  • Publishing tmux state and reading status files
  • Exposing API routes to clients

Your orchestrator owns

  • What setup.command does
  • What each action does
  • Which tmux sessions it creates
  • What status JSON it writes
  • How it interprets /ws

The boundary in one sentence: WarpDrive provides the sandbox; the orchestrator runs the show inside it.

Manifest schema

The manifest file lives at the orchestrator repo root and must be named:

warpdrive.manifest.yml

The current contract is manifest version 2.

Minimum manifest

manifest: 2
name: my-orchestrator
setup:
  command: ./setup.sh

Top-level sections

Section         Required  Purpose
manifest        yes       Schema version. Must be 2.
name            yes       Orchestrator name.
description     no        Short summary.
version         no        Orchestrator version.
infrastructure  no        System dependencies the orchestrator requires.
setup           yes       The setup command run after the orchestrator lands in the sandbox.
sessions        no        Logical tmux sessions the orchestrator creates.
actions         no        User-invokable operations exposed via the API.

infrastructure

Declare system tools the orchestrator depends on:

infrastructure:
  requires: [bash, tmux, git]
  environment:
    - name: ANTHROPIC_API_KEY
      required: true
      description: API key for Claude calls
    - name: OPENAI_API_KEY
      required: false
      description: Optional fallback for evaluation runs

infrastructure.requires is validated by helm setup. If a required tool is missing in the sandbox, setup fails.

setup

setup:
  command: ./setup.sh

setup.command runs once after WarpDrive copies the orchestrator into the sandbox. It is non-interactive — it must not prompt for input. The working directory is the active orchestrator path; ORCHESTRATOR_PATH is set in the env to the same value.
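
A minimal setup.sh sketch that follows these rules. The individual steps are placeholders, not a required implementation; substitute whatever non-interactive preparation your orchestrator needs.

```shell
#!/bin/sh
# Illustrative setup.sh sketch. Everything below is a placeholder step;
# the only hard requirements are: no prompts, no waiting for input.
set -eu

# cwd is already the active orchestrator checkout; ORCHESTRATOR_PATH points
# at the same directory.
echo "setting up orchestrator in ${ORCHESTRATOR_PATH:-$PWD}"

# Non-interactive work only: install deps, render templates, warm caches.
# (Here, just an example directory for orchestrator-local state.)
mkdir -p "${ORCHESTRATOR_PATH:-.}/state"

# Do not create tmux sessions or prompt for input here; attach points
# belong in actions.
```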

sessions

Sessions describe the logical tmux sessions your orchestrator manages. WarpDrive uses them to surface attach points to the user.

sessions:
  - name: chief
    pattern: chief-session
    role: primary
    description: Top-level orchestrator session
  - name: workers
    pattern: 'worker-*'
    role: worker

Field        Required  Notes
name         yes       Logical session name in the manifest.
pattern      yes       Exact name (chief-session) or glob (worker-*). Matched against real tmux sessions.
role         yes       One of primary, support, worker.
description  no        Free text.

WarpDrive classifies real tmux sessions by matching their names against sessions[].pattern. Your orchestrator is responsible for creating those tmux sessions on the managed socket — see the Runtime contract section below.

actions

Actions are user-invokable operations exposed through the API.

actions:
  - id: open-chief
    name: Open Chief
    command: ./scripts/open-chief.sh
    session: chief
  - id: run-eval
    name: Run Evaluation
    command: ./scripts/run-eval.sh
    async: true
    params:
      - name: target
        type: enum
        required: true
        options: [project-alpha, research-bench]
      - name: model
        type: string
        required: false
        description: Override the default model

Field        Required  Notes
id           yes       Action ID used in API calls.
name         yes       Display name.
command      yes       Script run inside the sandbox via helm exec.
description  no        Free text.
session      no        If set, must reference a sessions[].name. WarpDrive will resolve attach info on success.
async        no        If true, the action is fire-and-forget.
params       no        List of parameters the action accepts.

Action params

params:
  - name: target
    type: enum
    required: true
    options: [a, b, c]

Field        Required           Notes
name         yes                Param name. Becomes PARAM_<NAME> env var when the action runs.
type         yes                One of string, enum, number, boolean.
required     no                 Default false.
description  no                 Free text.
options      yes if type: enum  List of allowed values.

When the API invokes an action with params, WarpDrive validates them on the host first, then helm exec turns them into PARAM_<NAME> env vars in the action’s environment.
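
A sketch of what an action script sees at runtime, using the run-eval params above. The function form is for illustration only, and the fallback model value is an assumption, not a WarpDrive default.

```shell
# Sketch of an action script body (e.g. scripts/run-eval.sh) reading its
# params. helm exec exposes each param as PARAM_<NAME>.
run_eval() {
  # Required enum param: fail loudly if the caller omitted it.
  target="${PARAM_TARGET:?PARAM_TARGET is required}"
  # Optional param with a local fallback ("default" is an assumption).
  model="${PARAM_MODEL:-default}"
  echo "evaluating $target with model $model"
}
```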

Validation rules

  • manifest must be the integer 2.
  • setup.command is required.
  • actions[].session must reference a declared sessions[].name.
  • params of type: enum must declare options.

Runtime contract

This is the environment your orchestrator can rely on.

Paths

/ws                                                       # workspace root (host bind-mount)
/ws/.warpdrive/                                           # WarpDrive workspace metadata
/ws/.warpdrive/orchestrator/status.json                   # default status file (writable)
/ws/.warpdrive/tmux/state.json                            # tmux state (read-only, published by WarpDrive)
/home/developer/.warpdrive/orchestrators/active           # the active orchestrator checkout

Environment variables

Variable                When                             Purpose
WARPDRIVE_SANDBOX_NAME  Always in sandbox                Sandbox identifier.
WARPDRIVE_TMUX_SOCKET   Always in sandbox                Tmux socket name to use for managed sessions.
WARPDRIVE_STATUS_FILE   Always in sandbox                Path your orchestrator writes status JSON to.
ORCHESTRATOR_PATH       During helm setup and helm exec  The active orchestrator checkout path.
PARAM_<NAME>            During helm exec action runs     Per-param values from API invocation.

Things not to rely on

WarpDrive does not give you any of these:

  • HOST_ROOT
  • PROJECT_ROOT
  • The raw host workspace path as an env var
  • A second tmux server under your control
  • Any custom commands not declared as manifest actions

If you need the host path for debugging or metadata, read /ws/.warpdrive/config.json. For real work, use /ws.

How helm setup runs

helm setup <orchestratorPath> does:

  1. Loads and validates warpdrive.manifest.yml.
  2. Validates infrastructure.requires.
  3. If sessions are declared, requires WARPDRIVE_TMUX_SOCKET to be set.
  4. Runs setup.command with cwd = orchestratorPath and ORCHESTRATOR_PATH injected into the env.

How helm exec runs

helm exec <orchestratorPath> <actionId> key=value ... does:

  1. Loads and validates the manifest.
  2. Finds the action by id.
  3. Converts each key=value arg into a PARAM_<NAME> env var.
  4. Runs the action’s command with cwd = orchestratorPath and ORCHESTRATOR_PATH injected.
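
Step 3 can be sketched as follows. The upper-casing and hyphen-to-underscore mapping is an assumption about how helm derives PARAM_<NAME> from param names; confirm the exact rule against your helm version.

```shell
# Sketch of step 3: turning key=value args into PARAM_<NAME> variables.
# Assumes names are upper-cased and hyphens become underscores.
to_param_env() {
  for pair in "$@"; do
    key="${pair%%=*}"
    value="${pair#*=}"
    # tr maps e.g. "target" -> "TARGET" and "dry-run" -> "DRY_RUN".
    name=$(printf '%s' "$key" | tr 'a-z-' 'A-Z_')
    printf 'PARAM_%s=%s\n' "$name" "$value"
  done
}
```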

tmux integration

WarpDrive treats tmux as part of the orchestrator contract surface. If your manifest declares sessions, your orchestrator must create them on the managed tmux socket.

Always use the managed socket:

tmux -L "$WARPDRIVE_TMUX_SOCKET" new-session -d -s chief-session

Do not use bare tmux if you want supported behavior — you’ll get a different socket and WarpDrive won’t see your sessions.

Session classification is automatic: WarpDrive matches real tmux session names against sessions[].pattern from the manifest. The match supports exact names (chief-session) and globs (worker-*).
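
The same matching semantics can be sketched with a shell case statement, which handles exact names and globs natively. This is an illustration of the pattern semantics, not WarpDrive's actual implementation.

```shell
# Illustrative matcher: exact names and globs, as in sessions[].pattern.
matches_pattern() {
  session="$1"
  pattern="$2"
  # $pattern is deliberately unquoted so the shell treats it as a glob.
  case "$session" in
    $pattern) return 0 ;;
    *)        return 1 ;;
  esac
}
```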

Open-action idiom

User-facing attach points should be explicit actions. The convention is:

  • The manifest declares a session (e.g. chief).
  • The orchestrator’s setup.command does not auto-create the session — setup is non-interactive.
  • An action like open-chief creates or exposes the session, and is invoked when the user wants to attach.

When the API invokes an action that declares session: chief, WarpDrive resolves the resulting tmux session and returns attachment info to the client.
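
A sketch of what an open-chief action script might look like: it creates the session idempotently on the managed socket, so repeated invocations are safe. The session name chief-session matches the manifest example above.

```shell
# Sketch of an open-chief action: create the session on the managed socket
# only if it doesn't already exist.
open_chief() {
  sock="${WARPDRIVE_TMUX_SOCKET:?WARPDRIVE_TMUX_SOCKET must be set}"
  tmux -L "$sock" has-session -t chief-session 2>/dev/null ||
    tmux -L "$sock" new-session -d -s chief-session
}
```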

Status contract

WarpDrive reads orchestrator status from a JSON file, not by polling commands.

The default path is:

/ws/.warpdrive/orchestrator/status.json

WarpDrive injects this path as WARPDRIVE_STATUS_FILE. Your orchestrator writes to that file.

Required shape

{
  "health": "healthy",
  "summary": "Chief running",
  "updatedAt": "2026-04-03T12:34:56Z"
}

Required fields:

  • health — one of healthy, degraded, offline.
  • summary — short human-readable status string.
  • updatedAt — ISO 8601 timestamp.

Optional fields (sections, metadata, etc.) are allowed.

Atomicity

Status writes must be atomic. Replace the whole file on each update — write to a temp file and rename, or use a single buffered write. If multiple writers append fragments to the same file, WarpDrive’s reader will fail because it does a strict JSON.parse() on the whole file.
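
A write-to-temp-then-rename sketch in shell. The JSON formatting here is naive (it assumes the health and summary values contain no quotes or backslashes); a real orchestrator should use a proper JSON serializer.

```shell
# Atomic status update: write a temp file in the same directory, then
# rename over the target so readers never see a partial document.
write_status() {
  status_file="${WARPDRIVE_STATUS_FILE:?WARPDRIVE_STATUS_FILE must be set}"
  tmp="${status_file}.tmp.$$"
  mkdir -p "$(dirname "$status_file")"
  # Naive formatting: $1 (health) and $2 (summary) must not contain quotes.
  printf '{"health":"%s","summary":"%s","updatedAt":"%s"}\n' \
    "$1" "$2" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$tmp"
  mv "$tmp" "$status_file"  # rename is atomic on the same filesystem
}
```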

If the file is missing, unreadable, or invalid, WarpDrive reports offline or degraded accordingly.

Public API

WarpDrive exposes these REST endpoints. Clients (UI, CLI, your tooling) drive the orchestrator through them.

Configuration

PUT  /api/config/workspace      — set the host workspace path
GET  /api/config/workspace      — read the current workspace path
PUT  /api/config/orchestrator   — set the orchestrator Git source (repo, branch)
GET  /api/config/orchestrator   — read the current orchestrator config

Sandbox + orchestrator lifecycle

POST /api/sandboxes/default/start                    — start the sandbox
POST /api/sandboxes/default/orchestrator/setup       — apply the configured orchestrator
GET  /api/sandboxes/default/orchestrator/status      — read the parsed status JSON
GET  /api/sandboxes/default/orchestrator/actions     — list available actions
GET  /api/sandboxes/default/orchestrator/sessions    — list resolved tmux sessions
POST /api/sandboxes/default/orchestrator/actions/:actionId
                                                     — invoke an action (with optional params in body)

Action invocation

The action endpoint does not run your script directly. It runs:

helm exec /home/developer/.warpdrive/orchestrators/active <actionId> ...

over SSH inside the sandbox. WarpDrive validates params on the host, then helm exec translates them into PARAM_<NAME> env vars when running the action’s command.

If the action declares a session, WarpDrive resolves the matching tmux session after the action runs and returns attach info in the response.

Current limitation

Only the default sandbox is supported by the orchestrator API paths today. Multi-sandbox orchestration is planned but not yet exposed.

Author checklist

If you’re building or adapting an orchestrator for WarpDrive, this is the working checklist:

  1. Put warpdrive.manifest.yml at the repo root.
  2. Use manifest: 2.
  3. Make setup.command non-interactive — no prompts, no waits for input.
  4. Treat /ws as the workspace root for all real work.
  5. Use ORCHESTRATOR_PATH (or the current working directory) for orchestrator-local files.
  6. Use tmux -L "$WARPDRIVE_TMUX_SOCKET" ... for all tmux operations.
  7. Write a single valid JSON document to WARPDRIVE_STATUS_FILE. Make writes atomic.
  8. Make user-facing attach/open operations explicit manifest actions (e.g. open-chief).
  9. If an action accepts params, read them from PARAM_<NAME> env vars.
  10. Don’t assume WarpDrive will call any custom runtime command unless it’s declared as an action.

TL;DR

  • The orchestrator source lives in .warpdrive/config.json under sandbox.orchestrator.
  • The selected host workspace is mounted into the sandbox as /ws — that’s your workspace root.
  • The orchestrator is copied into the sandbox at /home/developer/.warpdrive/orchestrators/active.
  • helm setup and helm exec run inside that active checkout. They expose ORCHESTRATOR_PATH.
  • tmux integration goes through WARPDRIVE_TMUX_SOCKET.
  • Status integration goes through WARPDRIVE_STATUS_FILE.
  • Everything else is up to your orchestrator.