# readme-runner

One command. Any README. Automatic installation + launch.

readme-runner (rdr) is a fast, secure CLI tool that automatically installs and runs any software project by intelligently analyzing its README and project files.
```bash
# Clone and run any project with one command
rdr https://github.com/user/awesome-project
```

## Features

- README-first Intelligence — Analyzes README.md to understand how to build and run your project
- Smart Fallback — Uses project files (Dockerfile, package.json, go.mod, etc.) when the README is unclear
- Security-first — Dry-run by default, sudo confirmation, command blocklist
- AI-Powered Plans — Uses Anthropic, OpenAI, Mistral, or Ollama, or works fully offline with smart mock plans
- Docker Preferred — Automatically uses Docker/Compose when available for isolation
- Multi-Stack Support — Node.js, Python, Go, Rust, Docker, and mixed projects
- Prerequisite Checking — Verifies tools are installed before running
- Error Recovery — Retry, continue, or abort on failures
## Installation

```bash
# Clone the repository
git clone https://github.com/sony-level/readme-runner.git
cd readme-runner

# Build the binary
make build

# Or install to $GOPATH/bin
go install .
```

After building, binaries are available at:

- `./rdr` — Main binary
- `./readme-run` — Alias

Add to your PATH for global access:

```bash
# Add to ~/.bashrc or ~/.zshrc
export PATH="$PATH:/path/to/readme-runner"
```

## Quick Start

```bash
# Analyze current directory (dry-run by default)
rdr .

# Analyze a GitHub repository
rdr https://github.com/expressjs/express

# Actually execute the plan
rdr . --dry-run=false

# Execute with auto-confirm (except sudo)
rdr . --dry-run=false --yes
```

Example dry-run output:

```
Run ID: rr-20260203-1542-abc
Input: https://github.com/user/project
Source type: github

[DRY-RUN MODE] No commands will be executed.

[1/7] Fetch / Workspace
  → Workspace ready at .rr-temp/rr-20260203-1542-abc
  → Fetched 142 files (1.2 MB)

[2/7] Scan
  → README found: README.md (4.2 KB)
  → Primary stack: node
  → Stack detection: node (confidence: 0.85)

[3/7] Plan (AI)
  → README clarity score: 0.80
  → Using README as primary source
  → Using LLM provider: anthropic
  → Plan generated: node project with 2 steps

[4/7] Validate / Normalize
  → Plan is valid
  → Risk summary: Low=1, Medium=1, High=0, Critical=0

[5/7] Prerequisites
  → All 2 prerequisites available
    → node: v20.10.0
    → npm: 10.2.3

[6/7] Execute
╔══════════════════════════════════════════════════════════════╗
║                        DRY-RUN MODE                          ║
║                No commands will be executed                  ║
╚══════════════════════════════════════════════════════════════╝

Project type:      node
Working directory: .rr-temp/rr-20260203-1542-abc/repo

Prerequisites:
  • node (>= 18) - Node.js runtime required
  • npm - Package manager for dependencies

Steps to execute:
  [1] install
      Command: npm ci
      Risk: medium
  [2] run
      Command: npm start
      Risk: low

Exposed ports:
  • 3000

[7/7] Post-run / Cleanup
  → Workspace will be cleaned up

To execute this plan, run again without --dry-run:
  rdr https://github.com/user/project --dry-run=false
```
## Commands

```
rdr [command] [path|url] [flags]
```

| Command | Description |
|---|---|
| `run` | Run installation from README (default) |
| `help` | Help about any command |
| `completion` | Generate shell autocompletion |
### Global Flags

| Flag | Default | Description |
|---|---|---|
| `--dry-run` | `true` | Show plan without executing |
| `--yes`, `-y` | `false` | Auto-accept prompts (except sudo) |
| `--verbose`, `-v` | `false` | Enable verbose output |
| `--keep` | `false` | Keep workspace after execution |
| `--allow-sudo` | `false` | Allow sudo without confirmation |
### LLM Flags

| Flag | Default | Description |
|---|---|---|
| `--llm-provider` | `auto` | LLM provider: anthropic, openai, mistral, ollama, http, mock |
| `--llm-endpoint` | — | HTTP endpoint for custom LLM or Ollama |
| `--llm-model` | — | Model name for LLM provider |
| `--llm-token` | — | Auth token (or use provider-specific env vars) |
Provider auto-selection: if no provider is specified, the tool automatically selects the best available:

1. `anthropic` if `ANTHROPIC_API_KEY` is set
2. `openai` if `OPENAI_API_KEY` is set
3. `mistral` if `MISTRAL_API_KEY` is set
4. `ollama` if Ollama is running locally
5. `mock` (offline mode) otherwise
## Security

- Dry-run by default — Nothing executes without explicit `--dry-run=false`
- Sudo requires consent — Even with `--yes`, sudo commands prompt for approval
- Command blocklist — Dangerous commands are blocked (see below)
- LLM output validation — AI-generated plans are validated against security policy
- Workspace isolation — All operations happen in `.rr-temp/<run-id>/`
### Command Blocklist

The following patterns are always blocked:

```
rm -rf /          # Root deletion
rm -rf ~          # Home deletion
mkfs              # Filesystem creation
dd if=/dev/zero   # Disk wiping
shutdown/reboot   # System control
chmod -R 777 /    # Dangerous permissions
:(){:|:&};:       # Fork bomb
```
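A blocklist check of this kind can be sketched as a simple substring match. This is a minimal illustration only: the pattern list below is a subset of the table above, and the real matcher (in `internal/security`) may be more sophisticated.

```go
package main

import (
	"fmt"
	"strings"
)

// blocked is an illustrative subset of the always-blocked patterns.
var blocked = []string{
	"rm -rf /", "rm -rf ~", "mkfs", "dd if=/dev/zero",
	"shutdown", "reboot", "chmod -R 777 /", ":(){",
}

// isBlocked reports whether cmd contains any blocked pattern.
func isBlocked(cmd string) bool {
	for _, p := range blocked {
		if strings.Contains(cmd, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isBlocked("rm -rf / --no-preserve-root")) // blocked
	fmt.Println(isBlocked("npm ci"))                      // allowed
}
```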
### Sudo Confirmation

When a command requires sudo, you'll see:

```
╔══════════════════════════════════════════════════════════════╗
║                        SUDO REQUIRED                         ║
╚══════════════════════════════════════════════════════════════╝

Step:    install-docker
Command: sudo apt install docker.io

This command requires elevated (sudo) privileges.

Choose an option:
  1) Allow for this step only
  2) Allow for all sudo steps in this run
  3) Show manual instructions (skip this step)
  4) Abort entire operation

Enter choice [1-4]:
```
### Risk Levels

| Level | Description | Examples |
|---|---|---|
| `low` | Safe, read-only operations | `echo`, `cat`, `ls` |
| `medium` | Modifies local files | `npm install`, `pip install --user` |
| `high` | System package managers | `apt install`, `brew install` |
| `critical` | Requires sudo or system changes | `sudo ...`, remote scripts |
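A classifier in the spirit of this table could look like the sketch below. `classifyRisk` and its heuristics are illustrative assumptions, not the tool's actual policy, which lives in `internal/security`.

```go
package main

import (
	"fmt"
	"strings"
)

// classifyRisk maps a command to a risk level using simple heuristics
// mirroring the documented table.
func classifyRisk(cmd string) string {
	switch {
	case strings.HasPrefix(cmd, "sudo "):
		return "critical" // requires elevated privileges
	case strings.HasPrefix(cmd, "apt install") || strings.HasPrefix(cmd, "brew install"):
		return "high" // system package managers
	case strings.Contains(cmd, "install"):
		return "medium" // modifies local files, e.g. npm install
	default:
		return "low" // assumed read-only, e.g. echo, cat, ls
	}
}

func main() {
	fmt.Println(classifyRisk("npm install"))
	fmt.Println(classifyRisk("sudo apt install docker.io"))
}
```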
## How It Works

```
┌─────────────────────────────────────────────────────────────────┐
│                          readme-runner                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  [1] Input          Path or GitHub URL                          │
│         ↓                                                       │
│  [2] Fetch          Clone repo / Copy local files               │
│         ↓                                                       │
│  [3] Scan           Detect files + Parse README                 │
│         ↓                                                       │
│  [4] Stack Detect   Identify: docker/node/python/go/rust        │
│         ↓                                                       │
│  [5] Plan (AI)      Generate RunPlan JSON via LLM               │
│         ↓                                                       │
│  [6] Validate       Security policy + Risk assessment           │
│         ↓                                                       │
│  [7] Normalize      Adapt commands for OS/lockfiles             │
│         ↓                                                       │
│  [8] Prerequisites  Check required tools are installed          │
│         ↓                                                       │
│  [9] Execute        Run steps (or display in dry-run)           │
│         ↓                                                       │
│  [10] Cleanup       Remove workspace (unless --keep)            │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
### README Clarity Score

The tool calculates a clarity score (0.0-1.0) for the README:
| Factor | Points |
|---|---|
| Has "Installation" section | +1.0 |
| Has "Usage" section | +1.0 |
| Has "Quick Start" section | +0.5 |
| Has "Build" section | +0.5 |
| Has 3+ code blocks | +1.0 |
| Has 2+ shell commands | +1.0 |
- Score ≥ 0.6: README is primary source
- Score < 0.6: Project files are primary source
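The section-based factors can be sketched as below. This is an assumption-laden illustration: `clarityScore` is a made-up name, it returns raw points rather than the normalized 0.0-1.0 value, and it omits the shell-command factor; the real scorer lives in `internal/scanner`.

```go
package main

import (
	"fmt"
	"strings"
)

// clarityScore sums raw points for README clarity factors:
// section headings and fenced code blocks.
func clarityScore(readme string) float64 {
	lower := strings.ToLower(readme)
	score := 0.0
	for section, pts := range map[string]float64{
		"installation": 1.0,
		"usage":        1.0,
		"quick start":  0.5,
		"build":        0.5,
	} {
		if strings.Contains(lower, "# "+section) {
			score += pts
		}
	}
	// Count fenced code blocks (each block has an opening and closing fence).
	fence := strings.Repeat("`", 3)
	if strings.Count(readme, fence)/2 >= 3 {
		score += 1.0
	}
	return score
}

func main() {
	fmt.Println(clarityScore("## Installation\nnpm ci\n## Usage\nnpm start\n"))
}
```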
### Stack Detection

| Stack | Detection Files |
|---|---|
| Docker | Dockerfile, docker-compose.yml, compose.yaml |
| Node.js | package.json, package-lock.json, yarn.lock, pnpm-lock.yaml |
| Python | pyproject.toml, requirements.txt, Pipfile, setup.py |
| Go | go.mod, go.sum |
| Rust | Cargo.toml, Cargo.lock |
| Java | pom.xml, build.gradle |
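Marker-file detection of this kind can be sketched as follows. The probe ordering is an assumption (Docker first, matching the tool's Docker-preferred behavior; the rest is illustrative), and `detectStack` is not the tool's actual API.

```go
package main

import "fmt"

// detectStack returns the first stack whose marker file is present;
// slice order expresses detection priority.
func detectStack(files map[string]bool) string {
	probes := []struct {
		stack   string
		markers []string
	}{
		{"docker", []string{"Dockerfile", "docker-compose.yml", "compose.yaml"}},
		{"node", []string{"package.json", "package-lock.json", "yarn.lock", "pnpm-lock.yaml"}},
		{"python", []string{"pyproject.toml", "requirements.txt", "Pipfile", "setup.py"}},
		{"go", []string{"go.mod", "go.sum"}},
		{"rust", []string{"Cargo.toml", "Cargo.lock"}},
		{"java", []string{"pom.xml", "build.gradle"}},
	}
	for _, p := range probes {
		for _, m := range p.markers {
			if files[m] {
				return p.stack
			}
		}
	}
	return "unknown"
}

func main() {
	fmt.Println(detectStack(map[string]bool{"package.json": true}))
}
```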
## LLM Providers

### Anthropic (Claude)

Uses the Claude API for high-quality plan generation:

```bash
# Set API key via environment variable (recommended)
export ANTHROPIC_API_KEY="sk-ant-xxxxxxxxxxxx"

# Or specify provider explicitly
rdr . --llm-provider anthropic
```

### OpenAI

Uses OpenAI GPT models:

```bash
export OPENAI_API_KEY="sk-xxxxxxxxxxxx"
rdr . --llm-provider openai
```

### Mistral

Uses Mistral AI models:

```bash
export MISTRAL_API_KEY="xxxxxxxxxxxx"
rdr . --llm-provider mistral
```

### Ollama (Local)

Uses a local Ollama instance; no API key required:

```bash
# Make sure Ollama is running
ollama serve

# Use the Ollama provider
rdr . --llm-provider ollama --llm-model llama3.2
```

### Custom HTTP

Connect to any OpenAI-compatible API:

```bash
rdr . --llm-provider http \
  --llm-endpoint "http://localhost:8080/v1/chat/completions" \
  --llm-token "your-token" \
  --llm-model "custom-model"
```

### Mock (Offline)

Works completely offline with smart stack-based plans:

```bash
rdr . --llm-provider mock
```

The mock provider generates context-aware plans based on detected project files (package.json, Dockerfile, go.mod, etc.) without requiring any network access.
Note: The `copilot` provider has been deprecated because the GitHub Copilot API is not available for custom tools. If you were using `--llm-provider copilot`, migrate to one of the supported providers above; the tool automatically falls back to mock mode when copilot is specified.
## RunPlan Schema

The LLM generates plans in this format:

```json
{
  "version": "1",
  "project_type": "node",
  "prerequisites": [
    {
      "name": "node",
      "reason": "Node.js runtime required",
      "min_version": "18"
    }
  ],
  "steps": [
    {
      "id": "install",
      "cmd": "npm ci",
      "cwd": ".",
      "risk": "medium",
      "requires_sudo": false,
      "timeout": 300,
      "description": "Install dependencies"
    },
    {
      "id": "run",
      "cmd": "npm start",
      "cwd": ".",
      "risk": "low",
      "requires_sudo": false
    }
  ],
  "env": {
    "NODE_ENV": "production"
  },
  "ports": [3000],
  "notes": [
    "Application will be available at http://localhost:3000"
  ]
}
```

| Field | Required | Description |
|---|---|---|
| `version` | yes | Schema version (always "1") |
| `project_type` | yes | docker, node, python, go, rust, mixed |
| `prerequisites` | yes | Required tools with reasons |
| `steps` | yes | Ordered execution steps |
| `env` | no | Environment variables |
| `ports` | no | Exposed ports |
| `notes` | no | Additional information |
## Environment Variables

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key (Claude models) |
| `OPENAI_API_KEY` | OpenAI API key |
| `MISTRAL_API_KEY` | Mistral AI API key |
| `OLLAMA_HOST` | Ollama host address (default: localhost:11434) |
| `RD_LLM_TOKEN` | Generic LLM token (fallback for any provider) |
| `RD_LLM_PROVIDER` | Default provider via environment |
| `RD_LLM_MODEL` | Default model via environment |
| `RD_LLM_ENDPOINT` | Default endpoint via environment |
You can also configure providers via a config file at `~/.config/readme-runner/config.yaml`:

```yaml
provider: anthropic
model: claude-sonnet-4-20250514
# token: sk-ant-...   # Or use environment variable
```

Precedence: CLI flags > environment variables > config file > defaults.
## Workspace Layout

```
.rr-temp/
└── rr-20260203-1542-abc/      # Run ID
    ├── repo/                  # Cloned/copied project
    ├── plan/                  # Generated plan files
    └── logs/                  # Execution logs
```
## Examples

Run a Node.js project:

```bash
rdr https://github.com/expressjs/express --dry-run=false --yes
```

Analyze a Python project with verbose output:

```bash
rdr https://github.com/python-poetry/poetry --verbose
```

Run a Docker project:

```bash
rdr . --dry-run=false
# Docker Compose projects auto-detect and use: docker compose up
```

Use a local LLM:

```bash
# Native Ollama support
rdr . --llm-provider ollama --llm-model codellama

# Or via HTTP provider
rdr . --llm-provider http \
  --llm-endpoint "http://localhost:11434/api/chat" \
  --llm-model "codellama"

# Use mock provider for offline operation
rdr . --llm-provider mock
```

Keep the workspace for debugging:

```bash
rdr https://github.com/user/project --keep --verbose
# Workspace preserved at .rr-temp/rr-xxxxx/
```

## Development

```bash
make build   # Build binaries
make test    # Run tests
make lint    # Run linter (requires golangci-lint)
make clean   # Clean build artifacts
```

### Project Structure

```
readme-runner/
├── cmd/              # CLI commands
│   ├── root.go       # Root command + flags
│   └── run.go        # Main execution pipeline
├── internal/
│   ├── config/       # Configuration management
│   ├── workspace/    # Temp workspace handling
│   ├── fetcher/      # Git clone / local copy
│   ├── scanner/      # File detection + README parsing
│   ├── stacks/       # Stack detectors (docker/node/etc.)
│   ├── llm/          # LLM providers (anthropic/openai/mistral/ollama/http/mock)
│   ├── plan/         # Plan validation + normalization
│   ├── prereq/       # Prerequisite checking
│   ├── exec/         # Step execution
│   ├── security/     # Security policies + sudo guard
│   └── ui/           # Terminal output formatting
├── docs/             # Documentation
├── test/             # Test fixtures
└── scripts/          # Development scripts
```
### Testing

```bash
# All tests
go test ./... -v

# Specific package
go test ./internal/llm/tests/... -v

# With coverage
go test ./... -cover
```

## Contributing

Contributions are welcome! Please see our contributing guidelines.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Acknowledgments

- Built with Cobra for the CLI
- AI planning powered by Anthropic, OpenAI, Mistral, and Ollama
- Made with care by ソニーレベル