Self-hosted AI Agent Mission Control platform. Build, orchestrate, and monitor AI agent experiments with a visual pipeline, human-in-the-loop approvals, and full audit trail.
Prefer not to self-host? FleetQ Cloud is the fully managed version — no setup, no infrastructure, free to try.
- Experiment Pipeline -- 20-state machine with automatic stage progression (scoring, planning, building, approval, execution, metrics collection)
- AI Agents -- Configure agents with roles, goals, backstories, personality traits, and skill assignments
- Agent Templates -- 14 pre-built templates across 5 categories (engineering, content, business, design, research)
- Agent Evolution -- AI-driven self-improvement: analyze execution history, propose config changes, and apply improvements
- Agent Crews -- Multi-agent teams with coordinator, QA, and worker roles; domain-specific evaluation rubrics; weighted QA scoring per task type
- Pre-Execution Scout Phase -- Optional cheap LLM pre-call before memory retrieval that identifies what knowledge the agent needs, enabling targeted semantic search instead of generic recall
- Step Budget Awareness -- Agents receive an execution budget section in their system prompt, targeting 80% of allowed steps for core work and reserving the rest for synthesis
- Skills -- Reusable AI skill definitions (LLM, connector, rule, hybrid, browser, RunPod, GPU compute) with versioning and cost tracking
- RunPod GPU Integration -- Invoke RunPod serverless endpoints or manage full GPU pod lifecycles as skills; BYOK API key; spot pricing; cost tracking
- Pluggable Compute Providers -- `gpu_compute` skill type backed by RunPod, Replicate, Fal.ai, and Vast.ai; configure via the `compute_manage` MCP tool; zero platform credits
- Local LLM Support -- Run Ollama or any OpenAI-compatible server (LM Studio, vLLM, llama.cpp) as a provider; 17 preset Ollama models; zero cost; SSRF protection
- Integrations -- Connect GitHub, Slack, Notion, Airtable, Linear, Stripe, Vercel, Netlify, and generic webhooks/polling sources via unified driver interface with OAuth 2.0 support
- Autonomous Web Dev Pipeline -- Agents can open PRs, merge, dispatch CI workflows, create releases, and trigger Vercel/Netlify/SSH deploys through MCP tools and integration drivers
- Per-Call Working Directory -- Local and bridge agents can operate in a configured working directory per-agent, enabling isolated project contexts
- Playbooks -- Sequential or parallel multi-step workflows combining skills
- Workflows -- Visual DAG builder with 8 node types, including agent, conditional, human task, switch, dynamic fork, and do-while loop nodes; pre-built Web Dev Cycle template
- Projects -- One-shot and continuous long-running agent projects with cron scheduling, budget caps, milestones, and overlap policies
- Human-in-the-Loop -- Approval queue and human task forms with SLA enforcement and escalation
- Multi-Channel Outbound -- Email (SMTP), Telegram, Slack, and webhook delivery with rate limiting
- Webhooks -- Inbound signal ingestion (HMAC-SHA256) and outbound webhook delivery with retry and event filtering
- Budget Controls -- Per-experiment and per-project credit ledger with pessimistic locking and auto-pause on overspend
- Marketplace -- Browse, publish, and install shared skills, agents, and workflows
- REST API -- 175+ endpoints under `/api/v1/` with Sanctum auth, cursor pagination, and auto-generated OpenAPI 3.1 docs at `/docs/api`
- MCP Server -- 316+ Model Context Protocol tools across 38 domains for LLM/agent access (stdio + HTTP/SSE)
- Tool Management -- MCP servers (stdio/HTTP), built-in tools (bash/filesystem/browser), risk classification, per-agent assignment
- Credentials -- Encrypted credential vault for external services with rotation, expiry tracking, and per-project injection
- Testing -- Regression test suites for agent outputs with automated evaluation
- Local Agents -- Run Codex and Claude Code as local execution backends (auto-detected, zero cost)
- Audit Trail -- Full activity logging with searchable, filterable audit log
- AI Gateway -- Provider-agnostic LLM access via PrismPHP with circuit breakers and fallback chains
- BYOK -- Bring your own API keys for Anthropic, OpenAI, or Google
- Queue Management -- Laravel Horizon with 6 priority queues and auto-scaling
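The inbound webhook signatures (HMAC-SHA256) mentioned above can be reproduced for local testing. A minimal sketch with an illustrative secret and payload; check your webhook source's settings for the exact raw-body and header conventions FleetQ expects:

```sh
# Compute an HMAC-SHA256 hex signature over a raw webhook body.
# Secret and payload are illustrative values, not FleetQ defaults.
secret="whsec_example"
payload='{"event":"signal.received","id":123}'
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$sig"   # 64 hex characters, sent alongside the payload for verification
```

Signing the raw body (not a re-serialized copy) matters: any whitespace or key-order change produces a different digest.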
```sh
git clone https://github.com/escapeboy/agent-fleet-o.git
cd agent-fleet
make install
```

This will:

- Copy `.env.example` to `.env`
- Build and start all Docker services
- Run the interactive setup wizard (database, admin account, LLM provider)
Visit http://localhost:8080 when complete.
Requirements: PHP 8.4+, PostgreSQL 17+, Redis 7+, Node.js 20+, Composer
```sh
git clone https://github.com/escapeboy/agent-fleet-o.git
cd agent-fleet
composer install
npm install && npm run build
cp .env.example .env
# Edit .env — set DB_HOST, DB_DATABASE, DB_USERNAME, DB_PASSWORD, REDIS_HOST
php artisan key:generate
php artisan migrate
php artisan horizon &
php artisan serve
```

Then open http://localhost:8000 in your browser. The setup page will guide you through creating your admin account.

Alternative: Run `php artisan app:install` for an interactive CLI setup wizard that also seeds default agents and skills.
- No email verification — the self-hosted edition skips email verification entirely. Accounts are active immediately on registration.
- Single user — all registered users join the default workspace automatically.
If you're running FleetQ locally on your own machine and don't want to enter a password on every visit, set `APP_AUTH_BYPASS=true` in `.env`:

```
APP_AUTH_BYPASS=true   # Auto-login as first user
APP_ENV=local          # Required — bypass is disabled in production
```

With bypass enabled, the app logs you in automatically on every request. A logout link is still shown but you'll be logged back in on the next page load — this is intentional.

Warning: Never set `APP_AUTH_BYPASS=true` on a server accessible from the internet.
All configuration is in `.env`. Key variables:

```
# Database (PostgreSQL required)
DB_CONNECTION=pgsql
DB_HOST=postgres
DB_DATABASE=agent_fleet

# Redis (queues, cache, sessions, locks)
REDIS_HOST=redis
REDIS_DB=0         # Queues
REDIS_CACHE_DB=1   # Cache
REDIS_LOCK_DB=2    # Locks

# LLM Providers -- at least one required for AI features
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GOOGLE_AI_API_KEY=

# Auth bypass -- local no-password mode (never use in production)
APP_AUTH_BYPASS=false
```

Additional LLM keys can be configured in Settings > AI Provider Keys after login.
To use local models (Ollama, LM Studio, vLLM):
```
LOCAL_LLM_ENABLED=true
LOCAL_LLM_SSRF_PROTECTION=false   # set false if Ollama is on a LAN IP (192.168.x.x)
LOCAL_LLM_TIMEOUT=180
```

Then configure endpoints in Settings > Local LLM Endpoints.
Agents can execute commands on the host machine (or any remote server) via SSH using the built-in SSH tool type. This is useful for running local scripts, interacting with the filesystem, or orchestrating host-level processes from an agent.
- The platform stores SSH private keys encrypted in the Credential vault.
- An SSH Tool is configured with `host`, `port`, `username`, `credential_id`, and an optional `allowed_commands` whitelist.
- On the first connection to a host, the server's public key fingerprint is stored via TOFU (Trust On First Use). Subsequent connections verify the fingerprint — a mismatch raises an error to prevent MITM attacks.
- Manage trusted fingerprints via Settings > SSH Fingerprints or the `tool_ssh_fingerprints` MCP tool.
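The TOFU flow above boils down to a store-then-compare on the fingerprint. A sketch with illustrative values (the real check runs inside FleetQ against its stored fingerprints):

```sh
# Sketch of the Trust-On-First-Use check. Fingerprint values are
# illustrative; FleetQ performs this server-side on each connection.
stored_fp=""                    # empty on the very first connection
current_fp="SHA256:tq3pcFfqA"   # fingerprint presented by the host now

if [ -z "$stored_fp" ]; then
  stored_fp="$current_fp"       # first use: trust and record
  echo "stored $stored_fp"
elif [ "$stored_fp" = "$current_fp" ]; then
  echo "fingerprint verified"
else
  echo "fingerprint mismatch: refusing connection" >&2
fi
```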
The containers reach the host machine via `host.docker.internal`, which is pre-configured in `docker-compose.yml` via `extra_hosts: host.docker.internal:host-gateway`.
Step 1 — Enable SSH on the host
| OS | Command |
|---|---|
| macOS | System Settings → General → Sharing → Remote Login → On |
| Ubuntu/Debian | sudo apt install openssh-server && sudo systemctl enable --now ssh |
| Fedora/RHEL | sudo dnf install openssh-server && sudo systemctl enable --now sshd |
| Windows | Settings → System → Optional Features → OpenSSH Server, then Start-Service sshd |
Step 2 — Generate an SSH key pair
```sh
ssh-keygen -t ed25519 -C "fleetq-agent@local" -f ~/.ssh/fleetq_agent_key -N ""
```

Step 3 — Authorize the key on the host

```sh
cat ~/.ssh/fleetq_agent_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Step 4 — Create a Credential in FleetQ
Navigate to Credentials → New Credential:
- Type: `SSH Key`
- Paste the contents of `~/.ssh/fleetq_agent_key` (private key)
Or via API:
```sh
curl -X POST http://localhost:8080/api/v1/credentials \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Host SSH Key",
    "credential_type": "ssh_key",
    "secret_data": {"private_key": "<contents of fleetq_agent_key>"}
  }'
```

Step 5 — Create an SSH Tool
Navigate to Tools → New Tool → Built-in → SSH Remote, or via API:
```sh
curl -X POST http://localhost:8080/api/v1/tools \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Host SSH",
    "type": "built_in",
    "risk_level": "destructive",
    "transport_config": {
      "kind": "ssh",
      "host": "host.docker.internal",
      "port": 22,
      "username": "your-username",
      "credential_id": "<credential-id>",
      "allowed_commands": ["ls", "pwd", "whoami", "uname", "date", "df"]
    },
    "settings": {"timeout": 30}
  }'
```

Step 6 — Assign the tool to an agent
In the Agent detail page, go to Tools and assign the SSH tool. The agent will now have an `ssh_execute` function available during execution.
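Each `ssh_execute` call is checked against the tool's `allowed_commands` before anything reaches the host. A rough sketch of that gate with illustrative values (the production check runs inside FleetQ's backend, not in shell):

```sh
# Sketch of the per-call allowlist gate for ssh_execute.
# The allowlist and command below are illustrative examples.
allowed="ls pwd whoami uname date df"
cmd="df -h"

ok=false
for a in $allowed; do
  # compare only the command word, not its arguments
  [ "${cmd%% *}" = "$a" ] && ok=true
done

if [ "$ok" = true ]; then
  echo "allowed: $cmd"
else
  echo "blocked: ${cmd%% *} not in allowed_commands" >&2
fi
```

Matching on the command word alone is a simplification; real policies may also inspect arguments and pipe patterns.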
The platform enforces a multi-layer security hierarchy for bash and SSH commands:
- Platform-level — always blocked: `rm -rf /`, `mkfs`, `shutdown`, `reboot`, pipe-to-shell patterns
- Organization-level — configure in Settings → Security Policy or via the `tool_bash_policy` MCP tool
- Tool-level — `allowed_commands` whitelist in the tool's transport config
- Project-level — additional restrictions in project settings
- Agent-level — per-agent overrides on the tool pivot
More restrictive layers always win. A command blocked at the platform level cannot be unblocked by any other layer.
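Since more restrictive layers always win, the effective allowlist behaves like an intersection of every layer that defines one. A small illustration with made-up layer contents:

```sh
# Effective allowlist = intersection of all layers that define one.
# Layer contents here are illustrative, not FleetQ defaults.
org_allowed="ls pwd whoami uname date df git"   # organization policy
tool_allowed="ls pwd whoami uname date df"      # tool-level allowed_commands

a=$(mktemp); b=$(mktemp)
printf '%s\n' $org_allowed  | sort > "$a"
printf '%s\n' $tool_allowed | sort > "$b"
effective=$(comm -12 "$a" "$b")   # lines common to both sorted lists
rm -f "$a" "$b"
echo "$effective"
```

Here `git` is permitted by the organization but absent from the tool's whitelist, so it drops out of the effective set.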
Trusted host fingerprints are viewable and removable via:
- API: `GET /api/v1/ssh-fingerprints` / `DELETE /api/v1/ssh-fingerprints/{id}`
- MCP: `tool_ssh_fingerprints` with `list` or `delete` action
Remove a fingerprint when a host's SSH key is legitimately rotated — the next connection will re-verify via TOFU.
Built with Laravel 12, Livewire 4, and Tailwind CSS. Domain-driven design with 17 bounded contexts:
| Domain | Purpose |
|---|---|
| Agent | AI agent configs, execution, personality, evolution |
| Crew | Multi-agent teams with lead/member roles |
| Experiment | Pipeline, state machine, playbooks |
| Signal | Inbound data ingestion |
| Outbound | Multi-channel delivery |
| Approval | Human-in-the-loop reviews and human tasks |
| Budget | Credit ledger, cost enforcement |
| Metrics | Measurement, revenue attribution |
| Audit | Activity logging |
| Skill | Reusable AI skill definitions |
| Tool | MCP servers, built-in tools, risk classification |
| Credential | Encrypted external service credentials |
| Workflow | Visual DAG builder, graph executor |
| Project | Continuous/one-shot projects, scheduling |
| Assistant | Context-aware AI chat with 28 tools |
| Marketplace | Skill/agent/workflow sharing |
| Integration | External service connectors (GitHub, Slack, Notion, Airtable, Linear, Stripe, Generic) |
Docker Compose services:

| Service | Purpose | Port |
|---|---|---|
| app | PHP 8.4-fpm | -- |
| nginx | Web server | 8080 |
| postgres | PostgreSQL 17 | 5432 |
| redis | Cache/Queue/Sessions | 6379 |
| horizon | Queue workers | -- |
| scheduler | Cron jobs | -- |
| vite | Frontend dev server | 5173 |
```sh
make start    # Start services
make stop     # Stop services
make logs     # Tail logs
make update   # Pull latest + migrate
make test     # Run tests
make shell    # Open app container shell
```

Or with Docker Compose directly:

```sh
docker compose exec app php artisan tinker    # REPL
docker compose exec app php artisan test      # Run tests
docker compose exec app php artisan migrate   # Run migrations
```

To update an existing installation, run `make update`. This pulls the latest code, rebuilds containers, runs migrations, and clears caches.
- Framework: Laravel 12 (PHP 8.4)
- Database: PostgreSQL 17
- Cache/Queue: Redis 7
- Frontend: Livewire 4 + Tailwind CSS 4 + Alpine.js
- AI Gateway: PrismPHP
- Queue: Laravel Horizon
- Auth: Laravel Fortify (2FA) + Sanctum (API tokens)
- Audit: spatie/laravel-activitylog
- API Docs: dedoc/scramble (OpenAPI 3.1)
- MCP: laravel/mcp (Model Context Protocol)
Contributions are welcome. Please open an issue first to discuss proposed changes.
- Fork the repository
- Create a feature branch (`git checkout -b feat/my-feature`)
- Make your changes and add tests
- Run `php artisan test` to verify
- Submit a pull request
FleetQ Community Edition is open-source software licensed under the GNU Affero General Public License v3.0.