AI orchestration framework: deterministic pipelines around AI CLI tools
OpenExec is a single-binary orchestration layer that wraps existing AI CLI tools (Claude Code, Codex, Gemini CLI) with deterministic infrastructure: structured pipelines, quality gates, checkpointing, and memory. It does not implement its own LLM clients -- it spawns subprocesses for the CLIs you already use.
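The subprocess model can be sketched in a few lines of Go (a simplified illustration; `runCLI` is a hypothetical name, and the real process management lives in `internal/loop/`):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runCLI is a minimal sketch of the wrapping idea: OpenExec never talks to
// an LLM API directly; it spawns an installed AI CLI as a subprocess and
// captures its output. The function name is illustrative, not OpenExec's API.
func runCLI(cli string, args ...string) (string, error) {
	out, err := exec.Command(cli, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "echo" stands in for a real CLI such as "claude" so the sketch runs anywhere.
	out, err := runCLI("echo", "patch generated")
	if err != nil {
		fmt.Println("CLI not available:", err)
		return
	}
	fmt.Print(out)
}
```

Everything the framework adds (pipelines, gates, checkpoints, memory) wraps around this one spawn point.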
```shell
openexec init # Configure project (model, settings)
openexec run  # Execute tasks via blueprint pipeline
```
Execution Flow:

```
CLI -> Manager -> Pipeline -> Blueprint Engine -> AI CLI (claude/codex/gemini)
                                     |
                                     v
          gather_context -> implement -> lint -> test -> review
```
| Mode | Description | Side Effects |
|---|---|---|
| Chat | Conversational, no side effects | None |
| Task | Scoped action, produces artifacts | Creates files/patches |
| Run | Blueprint execution over task | Full automation |
| CLI | Provider | Installation |
|---|---|---|
| `claude` | Anthropic | `npm install -g @anthropic-ai/claude-code` |
| `codex` | OpenAI | `npm install -g @openai/codex` |
| `gemini` | Google | (follow Google's installation instructions) |
OpenExec resolves model names to CLI commands automatically: Claude models spawn `claude`, OpenAI models spawn `codex`, and Gemini models spawn `gemini`.
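That resolution rule can be sketched as a prefix match (the prefix list below is an assumption for illustration; the actual resolver lives in `internal/runner/`):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveCLI maps a model name to the CLI binary to spawn, following the
// documented rule: Claude models -> claude, OpenAI models -> codex,
// Gemini models -> gemini. The exact prefixes here are illustrative.
func resolveCLI(model string) (string, error) {
	m := strings.ToLower(model)
	switch {
	case strings.HasPrefix(m, "claude"):
		return "claude", nil
	case strings.HasPrefix(m, "gpt"), strings.HasPrefix(m, "codex"):
		return "codex", nil
	case strings.HasPrefix(m, "gemini"):
		return "gemini", nil
	}
	return "", fmt.Errorf("no CLI known for model %q", model)
}

func main() {
	for _, m := range []string{"claude-sonnet", "codex", "gemini-pro"} {
		cli, _ := resolveCLI(m)
		fmt.Printf("%s -> %s\n", m, cli)
	}
}
```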
Core (always on):
- Blueprint Execution: 5-stage pipeline (gather_context -> implement -> lint -> test -> review)
- Multi-Model Support: Claude, Codex, Gemini via their CLI tools
- Deterministic Routing: Keyword-based task classification (mode, toolset, repo zones, sensitivity)
- Backward Compatibility: Legacy `.uaos/project` format still supported
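The deterministic-routing idea can be sketched as plain keyword matching against the mode table above (the trigger words and function name are invented for illustration; the real classifier is in `internal/router/`):

```go
package main

import (
	"fmt"
	"strings"
)

// classifyMode routes a task description to one of the three modes using
// keyword matching -- no LLM call, so the result is deterministic and
// reproducible. The trigger words here are illustrative assumptions.
func classifyMode(task string) string {
	t := strings.ToLower(task)
	switch {
	case strings.Contains(t, "blueprint"), strings.Contains(t, "automate"):
		return "run" // full blueprint automation
	case strings.Contains(t, "fix"), strings.Contains(t, "implement"), strings.Contains(t, "refactor"):
		return "task" // scoped action producing artifacts
	default:
		return "chat" // conversational, no side effects
	}
}

func main() {
	fmt.Println(classifyMode("fix the flaky test in the runner package"))
	fmt.Println(classifyMode("how does checkpointing work?"))
}
```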
Opt-in (via .openexec/config.json):
- BitNet Routing: Local 1-bit LLM for enhanced intent classification, auto-downloads model
- Quality Gates V2: Auto-detects project type (Go/Python/TS/Rust), runs lint/test/format gates
- Checkpointing: Deterministic checkpoints after each stage for crash recovery
- Memory System: Extracts learning patterns from completed stages, injects context in future runs
- Predictive Loading: Pre-fetches likely-needed files based on task description
- Caching: Knowledge cache and tool result cache to avoid redundant work
- Multi-Agent Parallel: Split large tasks across parallel workers (when `worker_count > 1`)
Infrastructure:
- MCP Server: JSON-RPC tool server with read_file, write_file, git_apply_patch, run_shell_command
- Web UI: React/Vite dashboard (embedded in binary)
- Terminal UI: Bubble Tea TUI
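For a sense of the MCP server's wire format, an illustrative `tools/call` request invoking the `read_file` tool (the request shape is assumed from standard JSON-RPC/Model Context Protocol framing, not verified against OpenExec):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "README.md" }
  }
}
```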
Example `.openexec/config.json` with all opt-in features enabled:

```json
{
  "execution": {
    "quality_gates_v2": true,
    "cache_enabled": true,
    "predictive_load": true,
    "memory_enabled": true,
    "checkpoint_enabled": true,
    "bitnet_routing": true,
    "worker_count": 4
  }
}
```

Install at least one AI CLI:
```shell
# Install Claude Code (recommended)
npm install -g @anthropic-ai/claude-code

# Or install Codex
npm install -g @openai/codex

# Or install Gemini CLI
# (follow Google's installation instructions)
```

Download the latest binary for your platform, or use the automated script:

```shell
curl -sSfL https://openexec.io/install.sh | sh
```

```shell
openexec init   # Set up project and AI models
openexec wizard # Define goal, generates INTENT.md
openexec run    # Execute blueprint pipeline
openexec chat   # Conversational mode
openexec doctor # Verify CLI tools and configuration
```

```
openexec/
├── cmd/openexec/      # CLI entry point
├── internal/
│   ├── blueprint/     # Stage-based execution engine
│   ├── cache/         # Multi-level caching
│   ├── checkpoint/    # Crash recovery
│   ├── cli/           # Cobra commands
│   ├── context/       # Two-stage context assembly
│   ├── dcp/           # Deterministic Control Plane (tool routing)
│   ├── harness/       # Integrated orchestration
│   ├── loop/          # CLI process management
│   ├── mcp/           # Model Context Protocol server
│   ├── memory/        # Pattern learning
│   ├── parallel/      # Multi-agent coordination
│   ├── predictive/    # File pre-loading
│   ├── quality/       # Lint/test gates
│   ├── router/        # BitNet + keyword routing
│   ├── runner/        # Model -> CLI resolution
│   ├── toolset/       # Toolset definitions and registry
│   ├── tui/           # Terminal UI (Bubble Tea)
│   └── validation/    # E2E and compatibility tests
├── pkg/
│   ├── agent/         # AI provider adapters
│   ├── manager/       # Multi-pipeline orchestrator
│   └── api/           # HTTP handlers and WebSocket
├── ui/                # Web UI (React/Vite)
├── agents/            # Personas, workflows, manifests
└── docs/              # Documentation
```
See CONTRIBUTING.md for guidelines.
Single-binary AI orchestration. Go + React.