Like marimo algae drifting in crystal waters, your data flows and evolves: each cell a living sphere of computation, gently touching others, creating ripples of reactive change. In this digital ocean, data streams like currents, models grow like organic formations, and insights emerge naturally from the depths. Let your ML experiments flow freely, tracked and nurtured, as nature intended.
(Demo video: marimo-flow.mp4)
Marimo Flow combines reactive notebook development with AI-powered assistance and robust ML experiment tracking:
- AI-First Development with MCP: Model Context Protocol (MCP) integration brings live documentation, code examples, and AI assistance directly into your notebooks - access up-to-date library docs for Marimo, Polars, Plotly, and more without leaving your workflow
- Reactive Execution: Marimo's dataflow graph ensures your notebooks are always consistent - change a parameter and watch your entire pipeline update automatically (a short sketch follows below)
- Seamless ML Pipeline: MLflow integration tracks every experiment, model, and metric without breaking your flow
- Interactive Development: Real-time parameter tuning with instant feedback and beautiful visualizations
This combination eliminates the reproducibility issues of traditional notebooks while providing AI-enhanced, enterprise-grade experiment tracking.
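To make the reactive flow concrete, here is a minimal sketch of two marimo cells (names are illustrative, not from this repo): changing the slider re-runs every cell that reads it.

```python
import marimo as mo

# Cell 1: an interactive slider
n = mo.ui.slider(1, 100, value=10, label="samples")
n
```

```python
# Cell 2: reads n.value, so marimo re-runs it automatically whenever the slider moves
data = [i**2 for i in range(n.value)]
f"{len(data)} samples, max = {max(data)}"
```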
- Model Context Protocol Integration: Live documentation and AI assistance in your notebooks
- Context7 Server: Access up-to-date docs for any Python library without leaving marimo
- Marimo MCP Server: Specialized assistance for marimo patterns and best practices
- Local LLM Support: Ollama integration for privacy-focused AI code completion
- Reactive Notebooks: Git-friendly .py notebooks with automatic dependency tracking
- MLflow Tracking: Complete ML lifecycle management with model registry
- Interactive Development: Real-time parameter tuning with instant visual feedback
- SQLite Backend: Lightweight, file-based storage for experiments
- Docker: docker-compose setup with CPU, CUDA, and XPU image variants
- PINA Integration: Physics-informed neural networks with Walrus foundation model
- MCP-Powered Docs: Live documentation via Context7 and Marimo MCP servers
# Clone repository
git clone https://github.com/synapticore-io/marimo-flow.git
cd marimo-flow
# Build and start services
docker compose -f docker/docker-compose.yaml up --build -d
# Access services
# Marimo: http://localhost:2718
# MLflow: http://localhost:5000
# View logs
docker compose -f docker/docker-compose.yaml logs -f
# Stop services
docker compose -f docker/docker-compose.yaml down

| Variant | Image Tag | Use Case |
|---|---|---|
| CPU | ghcr.io/synapticore-io/marimo-flow:latest | No GPU (lightweight) |
| CUDA | ghcr.io/synapticore-io/marimo-flow:cuda | NVIDIA GPUs |
| XPU | ghcr.io/synapticore-io/marimo-flow:xpu | Intel Arc/Data Center GPUs |
# NVIDIA GPU (requires nvidia-docker)
docker compose -f docker/docker-compose.cuda.yaml up -d
# Intel GPU (requires Intel GPU drivers)
docker compose -f docker/docker-compose.xpu.yaml up -d

# Install dependencies
uv sync
# Start MLflow server (in background or separate terminal)
uv run mlflow server \
--host 0.0.0.0 \
--port 5000 \
--backend-store-uri sqlite:///data/experiments/db/mlflow.db \
--default-artifact-root ./data/experiments/artifacts \
--serve-artifacts
# Start Marimo (in another terminal)
uv run marimo edit examples/

All notebooks live in examples/ and can be opened with uv run marimo edit examples/<file>.py.
01_pina_poisson_solver.py - Solve the Poisson equation with baseline PINNs or the Walrus foundation model. Training is tracked in MLflow with integrated Optuna sweep analytics and experiment history.
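The sweep-plus-tracking pattern looks roughly like this (a hedged sketch, not the notebook's actual code; the experiment name and objective are placeholders for the real PINN training loop):

```python
import mlflow
import optuna

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("poisson-pinn")  # placeholder name

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    with mlflow.start_run(nested=True):
        mlflow.log_param("lr", lr)
        loss = (lr - 0.01) ** 2  # stand-in for the real training loss
        mlflow.log_metric("loss", loss)
    return loss

# one parent run per sweep, one nested run per trial
with mlflow.start_run(run_name="optuna-sweep"):
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)
```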
marimo-flow/
├── examples/                 # Marimo notebooks
│   └── 01_pina_poisson_solver.py
├── src/marimo_flow/          # Installable package
│   └── core/                 # PINA solvers, training, visualization
├── docs/                     # Project documentation
├── docker/                   # Dockerfiles + compose (CPU, CUDA, XPU)
├── data/mlflow/              # MLflow storage (artifacts, db)
└── pyproject.toml            # Dependencies
from marimo_flow.core import (
ModelFactory, # Create PINA neural network models
ProblemManager, # Define PDE problems and domains
SolverManager, # Configure PINN / SAPINN solvers
WalrusAdapter, # Walrus foundation model adapter
build_optuna_history_figure,
build_optuna_param_importance_figure,
build_optuna_parallel_figure,
)

Marimo Flow is AI-first with built-in Model Context Protocol (MCP) support for intelligent, context-aware development assistance.
Traditional notebooks require constant context-switching to documentation sites. With MCP:
- Live Documentation: Access up-to-date library docs directly in marimo
- AI Code Completion: Context-aware suggestions from local LLMs (Ollama)
- Smart Assistance: Ask questions about libraries and get instant, accurate answers
- Always Current: Documentation updates automatically, no more outdated tutorials
Access real-time documentation for any Python library:
# Ask: "How do I use polars window functions?"
# Get: Current polars docs, code examples, best practices
# Ask: "Show me plotly 3D scatter plot examples"
# Get: Latest plotly API with working code samples

Supported Libraries:
- Polars, Pandas, NumPy - Data manipulation
- Plotly, Altair, Matplotlib - Visualization
- Scikit-learn, PyTorch - Machine Learning
- And 1000+ more Python packages
Get expert help with marimo-specific patterns:
# Ask: "How do I create a reactive form in marimo?"
# Get: marimo form patterns, state management examples
# Ask: "Show me marimo UI element examples"
# Get: Complete UI component reference with code
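A typical answer might sketch a submit-gated form like this (illustrative only, not code from this repo):

```python
import marimo as mo

# Cell 1: a form built from two inputs; its value stays None until submitted
form = (
    mo.md("{name} {samples}")
    .batch(name=mo.ui.text(label="Name"), samples=mo.ui.slider(1, 100))
    .form()
)
form
```

```python
# Cell 2: re-runs only after the user hits Submit
form.value
```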
Example 1: Learning New Libraries

# You're exploring polars window functions
# Type: "polars rolling mean example"
# MCP returns: Latest polars docs + working code
import polars as pl

# sample data so the snippet runs standalone (illustrative)
df = pl.DataFrame({"sales": [3.0, 5.0, 2.0, 8.0, 7.0, 6.0, 4.0, 9.0]})
df.with_columns(
    pl.col("sales").rolling_mean(window_size=7).alias("7d_avg")
)

Example 2: Debugging
# Stuck on a plotly error?
# Ask: "Why is my plotly 3D scatter not showing?"
# Get: Common issues, solutions, and corrected code

Example 3: Best Practices
# Want to optimize code?
# Ask: "Best way to aggregate in polars?"
# Get: Performance tips, lazy evaluation patterns (see the polars sketch after the list below)

- Code Completion: Context-aware suggestions as you type (Ollama local LLM)
- Inline Documentation: Hover over functions for instant docs
- Smart Refactoring: AI suggests improvements based on current libraries
- Interactive Q&A: Chat with AI about your code using latest docs
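Expanding on Example 3 above, this is the kind of lazy-evaluation pattern those answers point to (the input file is hypothetical):

```python
import polars as pl

# scan_csv is lazy: nothing is read until .collect() executes the optimized plan
result = (
    pl.scan_csv("sales.csv")  # hypothetical file
    .group_by("region")
    .agg(pl.col("sales").sum().alias("total_sales"))
    .collect()
)
```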
MCP servers are pre-configured in .marimo.toml:
[mcp]
presets = ["context7", "marimo"]
[ai.ollama]
model = "gpt-oss:20b-cloud"
base_url = "http://localhost:11434/v1"

If you're running inside Docker, the same mcp block lives in docker/.marimo.toml, so both local and containerized sessions pick up identical presets.
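To check that the base_url above is actually reachable, you can query Ollama's OpenAI-compatible model listing (a quick sanity check, not part of this repo):

```python
import json
import urllib.request

# lists the models Ollama currently serves on its OpenAI-compatible API
with urllib.request.urlopen("http://localhost:11434/v1/models") as resp:
    print(json.load(resp))
```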
You can extend functionality by adding custom MCP servers in .marimo.toml:
[mcp.mcpServers.your-custom-server]
command = "npx"
args = ["-y", "@your-org/your-mcp-server"]

Expose MLflow trace operations to MCP-aware IDEs/assistants (e.g., Claude Desktop, Cursor) by running:
mlflow mcp run

Run the command from an environment where MLFLOW_TRACKING_URI (or MLFLOW_BACKEND_STORE_URI/MLFLOW_DEFAULT_ARTIFACT_ROOT) points at your experiments. The server stays up until interrupted and can be proxied alongside Marimo/MLflow so every tool shares the same MCP context.
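One way to wire that up from Python rather than the shell (a sketch; exporting the variable in your shell profile works just as well):

```python
import os
import subprocess

# point the MCP server at the same tracking store your notebooks use
os.environ["MLFLOW_TRACKING_URI"] = "http://localhost:5000"
subprocess.run(["mlflow", "mcp", "run"])  # blocks until interrupted
```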
Learn More:
- Marimo MCP Guide - Official MCP documentation
- Model Context Protocol - MCP specification and resources
Marimo Flow includes full Claude Code support with domain-specific skills, MCP servers, and automated hooks.
| Server | Purpose | Config |
|---|---|---|
| marimo | Notebook inspection, debugging, linting | HTTP on port 2718 |
| mlflow | Trace search, feedback, evaluation | stdio via mlflow mcp run |
| context7 | Live library documentation | stdio via npx |
| serena | Semantic code search | stdio via uvx |
Start marimo MCP server:
# Install once (recommended)
uv tool install "marimo[lsp,recommended,sql,mcp]>=0.18.0"
# Start server
marimo edit --mcp --no-token --port 2718 --headless

Three specialized skills in .claude/Skills/ provide expert guidance:
| Skill | Triggers | MCP Tools |
|---|---|---|
| marimo | marimo, reactive notebook, mo.ui | Notebook inspection, linting, context7 docs |
| mlflow | mlflow, experiment tracking, genai tracing | Trace search, feedback, evaluation, context7 docs |
| pina | pina, pinns, pde solver, neural operator | MLflow tracking, context7 docs |
Pre-resolved context7 library IDs (no lookup needed):
- /marimo-team/marimo - marimo docs (2,413 snippets)
- /mlflow/mlflow - mlflow docs (9,559 snippets)
- /mathlab/pina - PINA docs (2,345 snippets)
Cross-platform hooks in .claude/settings.json:
| Hook | Trigger | Action |
|---|---|---|
| SessionStart | Session begins | Start marimo MCP server |
| PostToolUse | Edit/Write .py files | Auto-format with ruff |
| PreToolUse | Edit uv.lock | Block (protection) |
MCP config for VS Code Copilot in .vscode/mcp.json:
{
"servers": {
"marimo": { "type": "http", "url": "http://127.0.0.1:2718/mcp/server" },
"mlflow": { "type": "stdio", "command": "mlflow", "args": ["mcp", "run"] }
}
}

Docker setup (configured in docker/docker-compose.yaml):
- MLFLOW_BACKEND_STORE_URI: sqlite:////app/data/experiments/db/mlflow.db
- MLFLOW_DEFAULT_ARTIFACT_ROOT: /app/data/experiments/artifacts
- MLFLOW_HOST: 0.0.0.0 (allows external access)
- MLFLOW_PORT: 5000
- OLLAMA_BASE_URL: http://host.docker.internal:11434 (requires Ollama on host)
Local development:
- MLFLOW_TRACKING_URI: http://localhost:5000 (default)
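A notebook can respect either setup by deferring to the environment first (a minimal sketch):

```python
import os
import mlflow

# honors MLFLOW_TRACKING_URI when set (Docker), falls back to the local server
mlflow.set_tracking_uri(os.getenv("MLFLOW_TRACKING_URI", "http://localhost:5000"))
print(mlflow.get_tracking_uri())
```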
The Docker container runs both services via docker/start.sh:
- Marimo: Port 2718 - Interactive notebook environment
- MLflow: Port 5000 - Experiment tracking UI
GPU Support: Use docker-compose.cuda.yaml for NVIDIA GPUs or docker-compose.xpu.yaml for Intel GPUs. The default docker-compose.yaml is CPU-only.
- Experiments: GET /api/2.0/mlflow/experiments/list
- Runs: POST /api/2.0/mlflow/runs/search
- Models: GET /api/2.0/mlflow/registered-models/list
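For example, searching runs over the raw REST API (a sketch against the local server above; "0" is the default experiment ID):

```python
import requests

resp = requests.post(
    "http://localhost:5000/api/2.0/mlflow/runs/search",
    json={"experiment_ids": ["0"], "max_results": 5},
)
for run in resp.json().get("runs", []):
    print(run["info"]["run_id"], run["info"]["status"])
```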
- Notebooks: GET / - File browser and editor
- Apps: GET /run/<notebook> - Run notebook as web app
Reactive multi-agent team that orchestrates PINA workflows via pydantic-graph,
backed by MLflow for tracing + persistence, exposed via marimo's chat UI and
optionally as A2A and AG-UI ASGI servers.
from marimo_flow.agents import lead_chat, FlowDeps
import marimo as mo
deps = FlowDeps(mlflow_tracking_uri="sqlite:///mlruns.db")
chat = mo.ui.chat(lead_chat(deps=deps))
chat

Roles (each loads its .claude/Skills/<name>/SKILL.md as instructions=):
- notebook - marimo MCP cell ops (skills: marimo, marimo-pair)
- problem - defines a PINA Problem from an open spec (skill: pina)
- model - designs a neural architecture for the problem (skill: pina)
- solver - wires Solver + Trainer config (skill: pina)
- mlflow - MLflow MCP tracking + registry (skill: mlflow)
A RouteNode classifier dispatches between sub-nodes; the lead agent wraps the
graph as a single tool so the same backend powers marimo chat, A2A, and AG-UI.
Models: Ollama Cloud at http://localhost:11434/v1 (:cloud-suffixed tags).
Defaults in marimo_flow.agents.deps.DEFAULT_MODELS.
Standalone servers:
uv run python -m marimo_flow.agents.server.a2a # A2A on :8000
uv run python -m marimo_flow.agents.server.ag_ui   # AG-UI on :8001

See examples/lab.py for the full demo notebook.
We welcome contributions! Please see our Contributing Guidelines for details on:
- Development setup and workflow
- Code standards and style guide
- Testing requirements
- Pull request process
Quick Start for Contributors:
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes following the coding standards
- Test your changes: uv run pytest
- Submit a pull request
See CONTRIBUTING.md for comprehensive guidelines.
See CHANGELOG.md for a detailed version history and release notes.
Current Version: 0.2.0
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ using Marimo and MLflow