Operations and observability

Inspect the live daemon, verify the handoff path, and debug queue behavior without guessing what the loop is doing.

HTTP surfaces

This page is for the moments when the loop is already running and you need evidence fast. These commands and endpoints tell you what Maestro is doing without forcing you to reconstruct the state by hand.

Start the daemon:

maestro run

If --db is omitted, run uses ~/.maestro/maestro.db. If --port is omitted, run serves HTTP on 8787.

Maestro serves the embedded dashboard plus two related API surfaces. Together they give you a fast path to check whether work is moving normally or whether you need to step in.

Live observability API

These endpoints power the CLI helpers that talk to a running daemon over --api-url:

Endpoint                                Purpose
GET /health                             Process health and timestamp
GET /api/v1/state                       Live orchestrator status payload
GET /api/v1/<issue_identifier>          Single issue status payload
GET /api/v1/sessions                    All live app-server sessions
GET /api/v1/sessions?issue=ISS-1        Single session lookup by issue identifier
GET /api/v1/events?since=0&limit=100    In-memory event feed
POST /api/v1/refresh                    Request a refresh event
GET /api/v1/dashboard                   Combined snapshot of state, sessions, and recent events

maestro status --dashboard is a CLI formatter over this live API. It is not the same thing as the dashboard application API.
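
The since cursor on the events endpoint lends itself to incremental polling: read a page, advance the cursor past the newest event, repeat. A minimal Python sketch with a stubbed fetch function; the id field and the exclusive-cursor semantics are assumptions for illustration, not documented behavior:

```python
from typing import Callable

def poll_events(fetch: Callable[[int, int], list[dict]],
                since: int = 0, limit: int = 100) -> tuple[list[dict], int]:
    """Fetch one page of events and advance the cursor.

    fetch(since, limit) stands in for GET /api/v1/events?since=...&limit=...;
    each event is assumed to carry a monotonically increasing "id".
    """
    events = fetch(since, limit)
    if events:
        since = max(e["id"] for e in events)  # advance past everything we saw
    return events, since

# Stubbed feed: three events, paged two at a time.
feed = [{"id": 1, "type": "issue.started"},
        {"id": 2, "type": "issue.finished"},
        {"id": 3, "type": "refresh"}]

def fake_fetch(since: int, limit: int) -> list[dict]:
    return [e for e in feed if e["id"] > since][:limit]

page, cursor = poll_events(fake_fetch, since=0, limit=2)   # first two events
page2, cursor = poll_events(fake_fetch, since=cursor)      # the remaining event
```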

Dashboard application API

These endpoints back the embedded UI:

Endpoint family                                          Purpose
GET /api/v1/app/bootstrap                                Combined dashboard bootstrap payload
/api/v1/app/projects                                     Project list, create, update, delete, run, and stop actions
/api/v1/app/epics                                        Epic list and CRUD flows
/api/v1/app/issues                                       Issue list and CRUD flows
/api/v1/app/issues/:identifier/execution                 Per-issue execution detail
/api/v1/app/issues/:identifier/images                    Multipart image upload
/api/v1/app/issues/:identifier/images/:imageID           Image delete
/api/v1/app/issues/:identifier/images/:imageID/content   Stream a stored image
/api/v1/app/issues/:identifier/state                     Issue state changes
/api/v1/app/issues/:identifier/blockers                  Blocker updates
/api/v1/app/issues/:identifier/commands                  Agent follow-up commands
/api/v1/app/issues/:identifier/retry                     Immediate retry request
/api/v1/app/issues/:identifier/run-now                   Immediate recurring trigger
/api/v1/app/runtime/events                               Persisted runtime event feed
/api/v1/app/runtime/series                               Runtime series charts
/api/v1/app/sessions                                     Session list for the embedded dashboard

WebSocket invalidation

Endpoint          Purpose
GET /api/v1/ws    Dashboard invalidate stream used to refetch live data

Commands that talk to a running daemon over HTTP require --api-url, including status --dashboard, sessions, events, project start, and project stop. The embedded dashboard does not need the flag because it talks to its colocated server directly.

Local issue images

Issue attachments use local-first storage; they are not provider attachments. Maestro stores them under an asset root derived from the active database directory:

  • default DB path ~/.maestro/maestro.db stores images in ~/.maestro/assets/images
  • custom DB path /path/to/maestro.db stores images in /path/to/assets/images
  • files are served back through the HTTP API; the dashboard never exposes raw filesystem paths
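
The derivation is mechanical: the image root always sits beside the database file. A one-function sketch of the rule (the helper name is ours, not Maestro's):

```python
from pathlib import Path

def image_root(db_path: str) -> Path:
    # Images live under assets/images next to the database file.
    return Path(db_path).expanduser().parent / "assets" / "images"

image_root("~/.maestro/maestro.db")   # ~/.maestro/assets/images
image_root("/path/to/maestro.db")     # /path/to/assets/images
```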

Operational limits:

  • local-only for both kanban and Linear-backed issues
  • supported types: PNG, JPEG, WEBP, GIF
  • maximum size: 10 MiB per image
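
Clients can mirror these limits before uploading. A sketch of a pre-flight check; the accepted extensions and the 10 MiB cap come from the list above, everything else is illustrative:

```python
import os

ALLOWED_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".gif"}
MAX_BYTES = 10 * 1024 * 1024  # 10 MiB

def check_image(path: str) -> None:
    """Raise ValueError before wasting an upload on a rejected file."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTS:
        raise ValueError(f"unsupported type: {ext or '(none)'}")
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError("image exceeds 10 MiB")
```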

CLI flow:

maestro issue images add ISS-1 ./screenshots/failing-checkout.png
maestro issue images list ISS-1
maestro issue images remove ISS-1 <image_id>

Recurring issues

Recurring issues are first-class local issues with issue_type=recurring. They add:

  • cron
  • enabled
  • next_run_at
  • last_enqueued_at
  • pending_rerun

Cron schedules use the daemon host’s local timezone and standard 5-field expressions, so minute is the finest granularity.
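
For intuition, a next-run computation over a 5-field expression can be sketched by scanning forward minute by minute. This toy matcher supports only *, */n, and comma lists; Maestro's actual parser is not specified here:

```python
from datetime import datetime, timedelta

def _match(field: str, value: int) -> bool:
    # Supports the common forms: "*", "*/n", and comma lists of numbers.
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(v) for v in field.split(",")}

def next_run(cron: str, after: datetime) -> datetime:
    """Next fire time of a 5-field cron expression, in local time."""
    minute, hour, dom, month, dow = cron.split()
    t = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    for _ in range(366 * 24 * 60):  # scan at most a year of minutes
        if (_match(minute, t.minute) and _match(hour, t.hour)
                and _match(dom, t.day) and _match(month, t.month)
                and _match(dow, (t.weekday() + 1) % 7)):  # cron: Sunday=0
            return t
        t += timedelta(minutes=1)
    raise ValueError("no matching minute within a year")
```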

Example:

maestro issue create "Sync GitHub ready-to-work" --project <project_id> --type recurring --cron "*/15 * * * *" --desc "Check GitHub issues labeled ready-to-work and create corresponding Maestro issues when missing."
maestro issue run-now ISS-42 --api-url http://127.0.0.1:8787

Behavior guarantees:

  • the scheduler reuses the same queue, retries, runtime events, MCP tools, and dashboard surfaces as normal issues
  • a daemon restart or outage produces at most one catch-up enqueue
  • recurring work never overlaps; extra schedule hits become a single pending rerun
  • cancelled and enabled=false suppress future scheduling without deleting the issue
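
The no-overlap guarantee amounts to coalescing: while a run is active, extra schedule hits flip pending_rerun instead of enqueueing. A sketch of that state machine; the field names follow the list above, but the class itself is illustrative, not Maestro's implementation:

```python
from dataclasses import dataclass

@dataclass
class RecurringIssue:
    enabled: bool = True
    running: bool = False
    pending_rerun: bool = False

    def on_schedule_hit(self) -> bool:
        """Return True if a run should be enqueued now."""
        if not self.enabled:
            return False               # disabled issues never schedule
        if self.running:
            self.pending_rerun = True  # coalesce: many hits, one flag
            return False
        self.running = True
        return True

    def on_run_finished(self) -> bool:
        """Return True if a coalesced rerun should start immediately."""
        self.running = False
        if self.pending_rerun and self.enabled:
            self.pending_rerun = False
            self.running = True
            return True
        return False
```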

Repo scope

maestro run and maestro run [repo_path] are different modes.

  • maestro run is unscoped. It runs against the current database, does not infer a repo from the shell working directory, and dispatches work using each project’s stored repo_path and workflow_path.
  • maestro run /absolute/path/to/repo scopes the daemon to one repo. Provider sync filters to that repo, MCP and dashboard project mutations must use that same repo path, and project-less issues resolve against that scoped WORKFLOW.md.

Use the unscoped form when one daemon should supervise multiple registered projects in the same database. Use the scoped form when you want one daemon to stay pinned to a single repo.

Workflow bootstrap and checks

These commands cover overlapping ground but serve different purposes:

  • maestro workflow init [repo_path] creates WORKFLOW.md explicitly
  • maestro run [repo_path] bootstraps a missing file automatically
  • maestro verify [--repo <path>] [--db <path>] [--json] checks readiness and returns remediation guidance; it does not bootstrap
  • maestro doctor [--repo <path>] [--db <path>] [--json] runs the same readiness checks with a different presentation
  • maestro spec-check [--repo <path>] [--json] is non-mutating and fails if the workflow file is missing or invalid

verify is the readiness check. spec-check is the lightweight conformance check.

Extensions file

Only maestro run loads extension tools via --extensions.

maestro mcp inherits whatever tool set the live daemon started with. It rejects --extensions so the stdio bridge cannot drift away from the daemon it is attached to.

Each extension entry supports:

  • name
  • description
  • command
  • timeout_sec
  • allowed
  • working_dir
  • require_args
  • deny_env_passthrough

Example:

[
  {
    "name": "echo_issue",
    "description": "Print the args object for debugging",
    "command": "jq -r . <<< \"$MAESTRO_ARGS_JSON\"",
    "timeout_sec": 10,
    "require_args": true
  }
]

At runtime, Maestro passes MAESTRO_TOOL_NAME and MAESTRO_ARGS_JSON into the shell command environment.
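
That contract is easy to exercise outside Maestro: set the two variables and run an extension-style command that reads them back. A sketch; only the variable names come from this page, the rest is a stand-in:

```python
import json
import os
import subprocess

args = {"issue": "ISS-1", "note": "hello"}
env = {**os.environ,
       "MAESTRO_TOOL_NAME": "echo_issue",
       "MAESTRO_ARGS_JSON": json.dumps(args)}

# Stand-in for an extension command: echo the args payload back.
out = subprocess.run(["sh", "-c", 'printf %s "$MAESTRO_ARGS_JSON"'],
                     env=env, capture_output=True, text=True, check=True)
assert json.loads(out.stdout) == args
```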

Logs

Write structured JSON logs to both stdout and a rotating file sink:

./maestro --log-level info run /path/to/repo --logs-root ./log
./maestro --log-level debug run /path/to/repo --logs-root ./log --log-max-bytes 1048576 --log-max-files 5

Important behavior:

  • the main log file is maestro.log
  • rotation is size-based
  • debug includes raw app-server stream output
  • info keeps logs focused on lifecycle and status transitions
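
Size-based rotation of this shape shifts numbered files once the main log crosses the cap. A sketch of one common scheme; the numbered-suffix convention is an assumption, not Maestro's documented layout:

```python
import os

def maybe_rotate(logs_root: str, max_bytes: int, max_files: int) -> bool:
    """Rotate maestro.log -> maestro.log.1 -> ... once it exceeds max_bytes."""
    main = os.path.join(logs_root, "maestro.log")
    if not os.path.exists(main) or os.path.getsize(main) <= max_bytes:
        return False
    for i in range(max_files - 1, 0, -1):  # shift older files up
        src = main if i == 1 else f"{main}.{i - 1}"
        dst = f"{main}.{i}"
        if os.path.exists(src):
            os.replace(src, dst)
    open(main, "w").close()  # start a fresh main log
    return True
```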

Docker

docker pull ghcr.io/olhapi/maestro:latest
docker run --rm -v ./repo:/repo -v ./data:/data ghcr.io/olhapi/maestro:latest run --db /data/maestro.db /repo --port 8787

The container entrypoint is maestro. If you omit explicit arguments, the image defaults to maestro run --db /data/maestro.db.

If maestro mcp runs in a different environment from maestro run, both processes must share the same database path and daemon registry location.

Deliberate scope

Current scope boundaries:

  • Maestro stays local-first even when a project syncs issues from Linear
  • issue images stay local and are never pushed back to Linear or another upstream tracker
  • no Phoenix-style live observability layer
  • no separate plugin runtime for extensions