Issue:
Unity MCP is great for Editor control - scenes, GameObjects, scripts, assets, tests. The `execute_code` tool with Roslyn gives runtime reflection and live C# execution, which is genuinely powerful.
But there's a gap between what `execute_code` can do and what a full Roslyn `Compilation` with `SemanticModel` can do, specifically:
- "Who calls this method?" - currently grep or manual reflection scan. Misses delegates, indirect calls, `nameof()`.
- "What breaks if I change this?" - currently manual multi-hop grep. Stops at 2 hops because it's tedious. The 3rd-hop breaker surprises you at compile time.
- "What implements this interface?" - grep finds `class Foo : IBar` but misses implementations through base classes.
- "Rename this symbol safely" - currently text find-replace. Dangerous on common names like `State` or `Count`.
- "Does this compile?" - currently `refresh_unity` + `read_console`. Works, but triggers a full domain reload (2-8s, resets editor state).
These operations need a wired `Compilation` with resolved assembly references, not just `SyntaxTree` parsing.
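To make the distinction concrete, here is a minimal Roslyn sketch (not Lifeblood's actual code; it assumes the `Microsoft.CodeAnalysis.CSharp` NuGet package) showing what a wired compilation buys you over raw parsing:

```csharp
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var tree = CSharpSyntaxTree.ParseText(
    "class Foo { int Count() => 0; int M() => Count(); }");

// A bare SyntaxTree only knows that "Count" is an identifier token.
// A Compilation with resolved references binds it to an exact method symbol.
var compilation = CSharpCompilation.Create(
    "probe",
    new[] { tree },
    new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

var model = compilation.GetSemanticModel(tree);
// model.GetSymbolInfo(node) now distinguishes Foo.Count from any other Count,
// which is what makes find-references and rename safe on common names.

// And "does this compile?" becomes a diagnostics query - no domain reload:
bool hasErrors = compilation.GetDiagnostics()
    .Any(d => d.Severity == DiagnosticSeverity.Error);
```

The same `SemanticModel` answers the caller, implementor, and rename questions above; grep never can, because it matches strings rather than symbols.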
## What I built
Lifeblood is an open-source (AGPL v3) compiler-as-a-service framework. It loads a C# project via Roslyn, builds the full compilation graph, and exposes 16 tools over MCP (stdio JSON-RPC):
Read-side (semantic graph):
- `analyze`, `context`, `lookup`, `dependencies`, `dependants`, `blast_radius`
Write-side (compiler services):
- `execute`, `diagnose`, `compile_check`, `find_references`, `find_definition`, `find_implementations`, `symbol_at_position`, `documentation`, `rename`, `format`
All backed by real `CSharpCompilation` + `SemanticModel`. Not grep, not reflection, but compiler truth.
## Tested on a real Unity project
I tested this on my own Unity project - 400k+ LOC, 75+ assemblies, 100+ asmdef files. The numbers:
- ~15,000 symbols, ~40,000 edges extracted
- Peak memory: ~4GB (streaming compilation with downgrading - each module is compiled, extracted, then `Emit()` → `MetadataReference.CreateFromImage()`, so only one full compilation is in memory at a time)
- 241 tests, 21 bugs found and fixed through dogfooding, all Roslyn capabilities at Proven confidence
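The "compile, extract, downgrade" loop can be sketched roughly like this (names such as `modulesInDependencyOrder` and `ExtractSymbolsAndEdges` are hypothetical placeholders, not Lifeblood's API):

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var references = new List<MetadataReference>(frameworkReferences);

foreach (var module in modulesInDependencyOrder)
{
    var compilation = CSharpCompilation.Create(
        module.Name, module.Trees, references,
        new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

    ExtractSymbolsAndEdges(compilation); // walk SemanticModel, record graph nodes/edges

    using var pe = new MemoryStream();
    compilation.Emit(pe);                // compile the module once, in full

    // Downgrade: later modules see only the emitted metadata, so the full
    // Compilation (and its symbol tables) can be garbage-collected.
    references.Add(MetadataReference.CreateFromImage(pe.ToArray()));
}
```

This is why peak memory stays bounded even across 75+ assemblies: metadata references are far cheaper than live compilations.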
## How it integrates with Unity MCP
I already built a working bridge. It uses your `[McpForUnityTool]` custom tool mechanism — no fork, no changes to Unity MCP's code.
Architecture:
```
Claude Code ──→ Unity MCP (action/control plane)
                     │
                     ├── built-in tools (scenes, GameObjects, scripts...)
                     │
                     └── [McpForUnityTool] custom tools ──→ Lifeblood MCP (child process)
                                                                └── 16 semantic tools
```
Lifeblood runs as a sidecar child process - separate .NET process, separate Roslyn workspace. No assembly conflicts with Unity, no domain reload interference, no memory pressure on the Editor.
The bridge is 3 files:
- `LifebloodBridge.asmdef` - Editor-only assembly referencing `MCPForUnity.Editor`
- `LifebloodBridgeClient.cs` - spawns and manages the Lifeblood server process, JSON-RPC over stdin/stdout
- `LifebloodTools.cs` - 16 `[McpForUnityTool]` classes, each a thin proxy to the sidecar
Each tool auto-registers via your existing discovery mechanism. The AI agent calls `lifeblood_analyze_project` once to load the semantic state, then uses any of the 16 tools alongside Unity MCP's built-in tools.
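The sidecar pattern itself is simple. A hedged sketch of the shape (names are illustrative; the real implementation is in `LifebloodBridgeClient.cs`):

```csharp
using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "lifeblood-server",   // hypothetical executable name
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
    UseShellExecute = false,
};
var sidecar = Process.Start(psi)!;

// MCP stdio transport: newline-delimited JSON-RPC 2.0 messages,
// one request line in, one response line out.
sidecar.StandardInput.WriteLine(
    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/call\"," +
    "\"params\":{\"name\":\"lifeblood_analyze_project\",\"arguments\":{}}}");
sidecar.StandardInput.Flush();
string? reply = sidecar.StandardOutput.ReadLine();
```

Each `[McpForUnityTool]` proxy just serializes its arguments into a request like this and returns the sidecar's reply, which is why the bridge stays three files.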
## What this gives AI agents working in Unity
| Operation | Without Lifeblood | With Lifeblood |
| --- | --- | --- |
| Find all callers | grep + manual reflection | One call, compiler-verified |
| Impact analysis | Manual 2-hop trace (~40% confidence) | Transitive BFS (~95% confidence) |
| Interface implementors | grep (misses base class chains) | Semantic walk (~99%) |
| Safe rename | Text find-replace (~60% safe) | Scoped to exact symbol (~99%) |
| Compile check | Domain reload (2-8s, side effects) | Standalone check, no reload |
| Go-to-definition | grep for class name | Resolves through interfaces, partials |
The biggest practical win is the refactoring safety loop: `blast_radius` → `find_implementations` → `rename` → `compile_check`. Today that's 15-30 minutes of manual grep chains at ~60% confidence; with Lifeblood it's 4 tool calls at ~97% confidence.
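The transitive walk behind a `blast_radius`-style tool is just BFS over the reverse-dependency edges extracted from the compilation graph. A toy sketch (the edge data here is hypothetical, not Lifeblood's format):

```csharp
using System;
using System.Collections.Generic;

// Reverse-dependency edges: symbol -> symbols that depend on it.
var dependants = new Dictionary<string, string[]>
{
    ["IBar"] = new[] { "Foo", "Bar" }, // Foo and Bar depend on IBar
    ["Foo"]  = new[] { "Baz" },        // Baz depends on Foo (the 3rd-hop breaker)
    ["Bar"]  = Array.Empty<string>(),
    ["Baz"]  = Array.Empty<string>(),
};

var impacted = new HashSet<string>();
var queue = new Queue<string>();
queue.Enqueue("IBar");
while (queue.Count > 0)
{
    foreach (var dep in dependants[queue.Dequeue()])
        if (impacted.Add(dep))
            queue.Enqueue(dep);
}
// impacted = { Foo, Bar, Baz }: everything reachable at any hop depth,
// which is exactly what a manual 2-hop grep chain misses.
```

Because the edges come from the compiler rather than text matching, the walk includes delegate targets and base-class implementations that grep-based tracing drops.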
## What I'm proposing
Not a PR or a merge - just awareness and maybe a link.
Options:
- Mention Lifeblood as a companion project in your docs/README - "for deeper code intelligence, pair with a semantic sidecar like Lifeblood"
- Add a `code-intelligence` tool group - so semantic tools from external engines get proper group visibility alongside your built-in groups
- Nothing - the custom tool mechanism already works, and anyone can build bridges like this. (Though Lifeblood stands out here: it is language-independent and built as a general framework for the future, not a one-off bridge.)
I'm happy with any of these. The custom tool extension mechanism you built is exactly right for this kind of integration - clean separation between Editor control and code intelligence.
## Links
Appreciate the work you've done on Unity MCP. The `[McpForUnityTool]` extension mechanism is well designed and has opened new horizons for me, especially in combination with Roslyn DLLs.