Welcome! This is a simple bridge (plugin) that connects OpenSpec to your favorite AI coding assistant (like IBM Bob, Codex, or Claude Desktop).
When you want your AI to build a new feature, you usually just type it into the chat. But as projects grow, the AI can forget things, get confused, or write messy code.
OpenSpec is a system that solves this. It forces the AI to create a clear "specification" (a plan) before it writes any code. It organizes your plan into neat folders (proposal, design, tasks) so you can review it.
However, your AI doesn't automatically know how to use OpenSpec. That is what this server does! It gives your AI the "tools" it needs to automatically create these folders, list tasks, and mark them as complete as it writes code for you.
To use this, you need to tell your AI assistant where this server is located. The exact steps depend on which AI assistant you use.
If you are using IBM Bob:
- In the IBM Bob IDE, click the three dots next to the gear icon in the upper right corner of the chat window and select MCP servers.
- Click Open next to "Global MCPs" to edit your settings file (usually saved at `~/.bob/settings/mcp_settings.json`).
- Add the `openspec` server to the `mcpServers` object:
```json
{
  "mcpServers": {
    "openspec": {
      "command": "npx",
      "args": [
        "-y",
        "@igor-olikh/openspec-mcp-server"
      ]
    }
  }
}
```
- Save the file and restart IBM Bob!
If you are using Codex, the quickest way to add the plugin is this terminal command:
```shell
codex mcp add openspec-server npx -y @igor-olikh/openspec-mcp-server
```
Or, manually through the Codex user interface:
- Open the "Connect to a custom MCP" box in Codex.
- Name: `openspec`
- Mode: Leave as `STDIO`
- Command to launch: `npx`
- Arguments: Click `+ Add argument` twice and paste exactly:
  - First argument: `-y`
  - Second argument: `@igor-olikh/openspec-mcp-server`
- Working directory: Leave this blank! (This allows Codex to dynamically use OpenSpec inside whichever project you currently have open.)
- Save it!
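Depending on your Codex CLI version, the same server can typically also be declared by hand in Codex's TOML configuration (commonly `~/.codex/config.toml`). Treat the exact file location and table name as version-dependent and verify them against your Codex documentation:

```toml
# Hypothetical manual entry; verify the path and table name
# against your Codex CLI version before relying on it.
[mcp_servers.openspec]
command = "npx"
args = ["-y", "@igor-olikh/openspec-mcp-server"]
```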
If you prefer using the Claude Desktop application:
- Open your Claude configuration file (usually located at `~/Library/Application Support/Claude/claude_desktop_config.json` on Mac).
- Add the `openspec` server to it:
```json
{
  "mcpServers": {
    "openspec": {
      "command": "npx",
      "args": [
        "-y",
        "@igor-olikh/openspec-mcp-server"
      ]
    }
  }
}
```
- Save the file and restart Claude Desktop.
Once connected, you don't need to do anything technical. You just talk to your AI like normal, but ask it to use OpenSpec!
Example Chat Prompts:
- "Hey Bob, I want to add a dark mode feature to this application. Please use OpenSpec to propose and validate it."
- "What is the OpenSpec status of our current project?"
- "List all the OpenSpec changes we are currently working on."
The AI will automatically use the tools below to handle the rest!
This server exposes the official `@fission-ai/openspec` CLI commands as Model Context Protocol (MCP) JSON-RPC tools.
Available AI Tools:
- `openspec_init`: Starts OpenSpec in a project.
- `openspec_new_change`: Creates a folder for a new feature proposal.
- `openspec_status`: Checks how much of the feature is done.
- `openspec_validate`: Checks if the code matches the plan.
- `openspec_archive`: Marks the feature as 100% completed.
- `openspec_list`: Shows all current tasks.
- `openspec_show`: Reads a specific task.
- `openspec_update`: Updates OpenSpec rules.
- `openspec_instructions`: Reads AI instructions for building parts of the plan.
- `openspec_read_file`: Reads any spec artifact directly by name and file type — much faster than `show`, no subprocess overhead.
- `openspec_refresh_cache`: Force-refreshes the cached directory listing if files changed outside OpenSpec tools.
Built-in Prompts:
- `openspec_kickoff`: A pre-made prompt that steers the AI into a strict spec-driven workflow from the first turn. Automatically injected when supported by the AI assistant.
If you want to modify this server's code:
- `npm install` (Installs dependencies)
- `npm run build` (Compiles the code)
- `npm run start` (Runs the server to test standard input/output)
- Built-in MCP Prompts: The `openspec_kickoff` prompt automatically steers your AI into a strict spec-driven workflow from the first turn.
- Direct File Readers with In-Memory Cache: The new `openspec_read_file` tool reads spec artifacts directly via the filesystem, bypassing CLI subprocess overhead. An in-memory cache of the directory structure serves `list` queries in under 1 ms. The cache auto-refreshes after any mutating operation.
- Structured JSON Outputs: Replacing raw terminal output with parsed JavaScript objects, preventing the AI from misreading states and reducing hallucinations.
- Smart Error Handling: Coaching the LLM when OpenSpec validations fail (e.g. intercepting terminal errors to output: "Hey, you forgot the 'Tasks' header in design.md").