Architecture

Understand how the local store, MCP bridge, and daemon stay aligned so you can hand work off and still trust the loop.

Runtime layers

The current codebase is organized around five connected runtime layers:

  • a SQLite-backed local store that persists projects, issues, workspaces, sessions, commands, and runtime events
  • a provider service that reads local kanban projects and syncs Linear-backed projects, with limited feature coverage, into that same local store
  • an orchestrator plus agent runner that turn queued issues into workspaces and Codex sessions
  • a private MCP endpoint, owned by the daemon, that maestro mcp bridges over stdio
  • a public HTTP server that serves the embedded dashboard UI plus JSON and WebSocket APIs on port 8787 by default, unless you override --port
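
If you just want to confirm the daemon is up, the public HTTP surface is the easiest probe. A minimal check, assuming a daemon is already running on the default port (the loopback address is an assumption; the root path serves the embedded dashboard):

```shell
# Probe the public HTTP server; 8787 is the default port unless
# --port was overridden. The root path serves the dashboard UI.
curl -i http://127.0.0.1:8787/
```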

How work moves through the live loop

The shortest operational view of the system: attach path, optional HTTP surface, local store, provider sync, and agent execution.

This diagram is derived from the current runtime shape in cmd/maestro and the internal runtime packages, then captured as a static asset for the docs site.

Screenshot of the live loop architecture diagram showing MCP and HTTP entry points flowing into the Maestro runtime, which then coordinates providers, orchestration, storage, and the agent runner.

Local store first

Even when a project uses a provider such as Linear, Maestro still supervises work through the same local store. Provider-backed issues are synchronized into the SQLite database, then flow through the same queue state, runtime events, sessions, dashboard views, and MCP tools as local kanban issues.

WORKFLOW.md still controls orchestration behavior, and its tracker.kind remains kanban. Project-level provider selection is a separate concern from workflow orchestration config.

MCP attach model

maestro run is the long-lived daemon for a given database. It owns:

  • the SQLite store and runtime persistence
  • the provider service and orchestrator runtime
  • the private loopback-only MCP transport endpoint
  • the public HTTP server and embedded dashboard, which default to port 8787 unless you override --port

maestro mcp does not start a separate daemon. It attaches to the live maestro run process for the same store and bridges that daemon over stdio for MCP clients.

Operationally, that means:

  • start maestro run first
  • point maestro mcp at the same --db
  • expect an explicit error if no live daemon exists for that store
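
The three steps above can be sketched as two terminal sessions, assuming maestro is on your PATH and that maestro run accepts the same --db flag you pass to maestro mcp (the database path below is an illustrative placeholder):

```shell
# Terminal 1: start the long-lived daemon that owns this store.
maestro run --db ./maestro.db

# Terminal 2: attach to that live daemon and bridge it over stdio.
# This fails with an explicit error if no daemon exists for the store.
maestro mcp --db ./maestro.db
```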

Workflow-driven orchestration

WORKFLOW.md is the repo-local source of truth for:

  • tracker settings
  • workspace root
  • hook commands and timeout
  • agent concurrency, mode, and dispatch behavior
  • optional review and done phase prompts
  • Codex command and sandbox settings
  • the prompt template rendered for each issue
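
A sketch of what such a file might carry. Only tracker.kind: kanban is confirmed on this page; every other key name below is invented for illustration, so check your repo's actual WORKFLOW.md for the real schema. Assuming the config is embedded as YAML:

```yaml
tracker:
  kind: kanban        # stays kanban even for provider-backed projects
# --- key names below are illustrative only ---
workspace_root: ./workspaces
hooks:
  timeout: 5m
agents:
  concurrency: 2
```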

The orchestrator does not guess its way around missing repo context. It reads the workflow file, then turns local queue state into per-issue workspaces and agent runs.

When a project is provider-backed, the provider layer refreshes those issues into the local store first, and the orchestrator still runs against the synchronized local view.

Provider support

Projects can currently use:

  • kanban for fully local tracking
  • linear for limited project-backed issue sync and mutation

Linear support is intentionally limited:

  • provider-backed projects are supported
  • issue sync and issue state updates are supported
  • assignee filtering is supported through project provider config
  • epics are not supported
  • some create and update flows reject labels, blockers, or project reassignment
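
Assignee filtering lives in the project's provider config, which is separate from WORKFLOW.md. As a purely illustrative sketch (every field name here is invented; consult the actual project provider config schema):

```yaml
# Hypothetical project provider config; field names are illustrative only.
provider: linear
filter:
  assignee: your-linear-handle
```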

Why the architecture stays local-first

Keeping the architecture local-first makes the system easier to inspect, debug, and trust.

  • the control plane stays on your machine, so the automation loop remains close to the code and config you are actually running
  • the observability surface is plain HTTP JSON plus an embedded dashboard, so you can understand system state with familiar tools
  • extensions stay as local shell commands, so customization remains easy to version, audit, and replace

That keeps the operational footprint small and makes the full loop easier to reason about from a single repo checkout.