cli-bot is a Rust CLI that turns natural-language requests into shell commands using Ollama, with awareness of the user's OS, Linux distro, and package manager. Shell command generation remains the priority, with a text-response fallback only when the planner marks a request as unresolved.
Instead of remembering exact flags, command variants, and editor invocations, you can describe what you want in plain English and let cli-bot translate that intent into a shell command you can inspect, benchmark, approve, and run.
```sh
ollama pull lfm2:latest
cargo install cli-bot
cli-bot "Ping google five times"
```

On macOS, you can also install with Homebrew:
```sh
brew install joelee/oss/cli-bot
cli-bot "Ping google five times"
```

This assumes:
- Ollama is installed
- the Ollama service is running locally
- `cli-bot` can create a default config file on first run if needed
See Install Ollama and Configuration for the full setup.
If you run into problems, `cli-bot --check` can help identify them.
Terminal users often know what they want to do, but not always the exact command shape.
Examples:
- You remember `find` or `du` can do the job, but not the right flags.
- You know there is a one-liner for archive extraction, file inspection, or network diagnostics, but you do not want to search the web again.
- You want a local LLM-assisted shell helper without sending requests to a hosted service.
cli-bot is built for that gap between intent and syntax.
- developers who live in the terminal and want a faster way to recall commands
- DevOps and platform engineers who need quick shell help without leaving the terminal
- Linux and macOS users who know the shell but do not memorize every flag combination
- newcomers who want a safer, more guided path into command-line usage
- teams that want a local, Ollama-backed CLI instead of a cloud dependency
The inspiration for cli-bot is simple: many shell tasks are easy once you know the exact command, but the friction of recalling that syntax breaks flow.
This project was created to give users a practical local assistant for terminal work:
- natural language in
- real shell commands out
- confirmation for risky operations
- choice when multiple commands are plausible
- benchmark data to compare local Ollama models in real usage
It is meant to feel less like a chatbot and more like a sharp command-line copilot.
- translates plain-English requests into shell commands with Ollama
- supports configurable local models through `cli-bot.toml`
- understands the user's OS and preferred package manager for package-related requests
- asks for approval before running potentially destructive commands
- lets the user pick between multiple command choices
- can optionally auto-select the LLM-recommended best command
- can fall back to a direct text response when a request cannot be resolved into a shell command safely
- supports benchmarking to compare models by latency
- supports verbose debugging to inspect full Ollama responses
- respects a preferred editor from config or
$EDITOR - includes colorized output, checks, tests, and local pre-commit hooks
```sh
cli-bot "Ping google five times"
```

You can also run cli-bot with no request string and it will prompt you interactively.
Expected command:

```sh
ping -c 5 google.com
```

```sh
cli-bot "spell mantainence"
```

Expected response:

```
maintenance
```
```sh
cli-bot "Show the ten largest files in the current directory"
```

The proposed plan could include commands using `find`, `du`, and `sort`.
On macOS with Homebrew:

```sh
cli-bot "Install btop"
```

Expected command:

```sh
brew install btop
```

On Arch Linux with paru available:

```sh
cli-bot "List all my installed packages"
```

Expected command could be:

```sh
paru -Q
```

```sh
cli-bot "Extract archive.tar.gz here"
```

Expected command:

```sh
tar -xzf archive.tar.gz
```

```sh
cli-bot "Show me lines containing timeout in server.log"
```

Expected command:

```sh
grep -n "timeout" server.log
```

```sh
cli-bot "Show branches merged into main"
```

Expected command could be:

```sh
git branch --merged main
```

If `preferred_editor = "nvim"` or `$EDITOR=nvim`:

```sh
cli-bot "Edit my git config file"
```

Expected command:

```sh
nvim ~/.gitconfig
```

```sh
cli-bot "Delete the target directory"
```

If the selected command is destructive, cli-bot asks for explicit approval before execution.
If a request cannot be safely resolved into a shell command, cli-bot can fall back to a direct text response instead of guessing an incorrect command.
cli-bot requires a running Ollama endpoint. Before using the CLI, install Ollama, start the local service, and pull the model you want to use.
See Install Ollama for a step-by-step setup guide.
This install path has been tested on macOS:
```sh
brew install joelee/oss/cli-bot
```

Then install the config into the default user location:

```sh
mkdir -p "$HOME/.config/cli-bot"
install -m 0644 cli-bot.toml "$HOME/.config/cli-bot/cli-bot.toml"
```

If you skip that step, cli-bot will create a default config automatically at `~/.config/cli-bot/cli-bot.toml` when no config file is found.
Before first use, make sure Ollama is running and the default model is available:
```sh
ollama pull lfm2:latest
cli-bot --check
```

`--check` now also reports whether the current terminal environment supports interactive prompts and command selection.
After publishing, users can install with:
```sh
cargo install cli-bot
```

This builds the executable from source on the user's machine.
If cargo is not installed yet, install the Rust toolchain first.
Common options:
- Debian/Ubuntu: `sudo apt install cargo`
- Fedora: `sudo dnf install cargo`
- Arch Linux: `sudo pacman -S rust`
- macOS with Homebrew: `brew install rust`
If you want the latest official Rust toolchain instead of a distro package, use rustup:
```sh
curl https://sh.rustup.rs -sSf | sh
```

To build from source, clone the repository and install the binary:

```sh
git clone https://github.com/joelee/cli-bot
cd cli-bot
cargo build --release
sudo install -m 0755 target/release/cli-bot /usr/local/bin/cli-bot
```

If you want a shorter command, you can alias cli-bot to `cb` in your shell profile:
```sh
alias cb="cli-bot"
```

Then you can run commands like:

```sh
cb "Ping google five times"
```

At minimum, make sure:
- Ollama is installed
- The Ollama service is running locally
- The default model is pulled
Example:
```sh
ollama pull lfm2:latest
```

Detailed instructions are in Install Ollama.
If no config file is provided and none is found in the normal lookup locations, cli-bot creates a default config automatically at:
`$HOME/.config/cli-bot/cli-bot.toml`
You can also configure the environment profile used for package-related requests, including `preferred_package_manager`.
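As a sketch, a user-local config might look like the following. Apart from `preferred_editor`, `preferred_package_manager`, and `ui.auto_select_recommended`, which appear elsewhere in this document, the key names and values here are illustrative assumptions, not the tool's documented schema:

```toml
# Hypothetical ~/.config/cli-bot/cli-bot.toml sketch; key names other than
# preferred_editor, preferred_package_manager, and ui.auto_select_recommended
# are assumptions for illustration.
model = "lfm2:latest"
preferred_editor = "nvim"
preferred_package_manager = "paru"

[ui]
auto_select_recommended = false
```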
Install the config into one of the default lookup locations.
User-local:
```sh
mkdir -p "$HOME/.config/cli-bot"
install -m 0644 cli-bot.toml "$HOME/.config/cli-bot/cli-bot.toml"
```

System-wide:

```sh
sudo install -m 0644 cli-bot.toml /etc/cli-bot.toml
```

When --config is not provided, cli-bot looks for configuration in this order:

1. `${HOME}/.config/cli-bot/cli-bot.toml`
2. `/etc/cli-bot.toml`
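The "first existing config wins" lookup can be sketched as a small POSIX shell helper; this is an illustration of the documented order, not cli-bot's actual resolution code:

```shell
# Illustrative sketch of "first existing config wins"; not cli-bot's code.
first_config() {
  for candidate in "$@"; do
    if [ -f "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

# User-local config is checked before the system-wide one; when neither
# exists, cli-bot creates a default user-local config instead.
first_config "$HOME/.config/cli-bot/cli-bot.toml" /etc/cli-bot.toml \
  || echo "no config found; a default will be created"
```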
For local development from the repository root:
```sh
cargo run -- --config ./cli-bot.toml "Ping google five times"
```

```sh
cli-bot "List all listening TCP ports"
cli-bot "Show disk usage for this folder"
cli-bot "Find all .rs files containing TODO"
cli-bot "Show the last 50 lines of app.log"
cli-bot "Create a gzipped tarball of the dist folder"
```

You can also start with no arguments and type the request when prompted:

```sh
cli-bot
```

cli-bot does not blindly trust model output.
- commands can be marked `potentially_destructive` by the LLM
- destructive substring rules in config provide an additional safety net
- risky commands require explicit user approval before execution
- if multiple commands are returned, the user can choose the right one
This keeps the tool useful without pretending shell execution is risk-free.
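As a rough illustration of the substring-rule idea, a checker might scan a proposed command for known dangerous fragments. The pattern list and logic below are assumptions for the sketch, not cli-bot's shipped rules or implementation:

```shell
# Hypothetical sketch of substring-based destructive-command detection.
# The pattern list is illustrative, not cli-bot's actual rule set.
is_destructive() {
  cmd="$1"
  for pat in "rm -rf" "mkfs" "dd if="; do
    case "$cmd" in
      *"$pat"*) return 0 ;;   # a dangerous fragment was found
    esac
  done
  return 1                    # no rule matched
}

if is_destructive "rm -rf target"; then
  echo "needs approval"
else
  echo "safe to run"
fi
```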
One of the practical uses of cli-bot is comparing local Ollama models for real CLI tasks.
```sh
cli-bot --config ./cli-bot.toml --model lfm2:latest --benchmark --dry-run "Ping google five times"
```

The current default model selection is documented in the published benchmark report:
That report captures comparative results across multiple models and is the basis for choosing lfm2:latest as the default model.
The benchmark report includes:
- the model used
- planner latency in milliseconds
- execution latency in milliseconds, or `skipped` for dry runs
- total end-to-end runtime
This makes it easy to compare models for speed and response quality on realistic terminal prompts.
Some models follow structured output instructions better than others.
Use verbose mode when you need to inspect the full Ollama exchange:
```sh
cli-bot --config ./cli-bot.toml --model lfm2.5-thinking:latest --verbose --benchmark "Ping google five times"
```

Verbose mode shows:
- resolved config and model
- preferred editor
- full Ollama request body
- full raw Ollama response body
- extracted planner JSON before deserialization
This is especially useful when a model emits extra commentary, malformed JSON, or the wrong schema.
Use `--check` to verify the local environment:

```sh
cli-bot --config ./cli-bot.toml --check
```

This validates:
- the config file can be found and parsed
- the Ollama service is reachable
- the configured model exists in Ollama
- the preferred editor resolves and is available
By default, cli-bot shows an interactive selector when multiple commands are returned.
If you want the LLM to choose the best option automatically:
- set `ui.auto_select_recommended = true` in `cli-bot.toml`
- or pass `--auto-select-best`
cli-bot supports ANSI color output with:
```sh
cli-bot --color auto "Ping google five times"
```

Available values: `auto`, `always`, `never`.

If `NO_COLOR` is set, color output is disabled.
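Honoring `NO_COLOR` follows a common convention: any non-empty value disables ANSI color. The sketch below is a generic illustration of that convention, not cli-bot's actual implementation:

```shell
# Generic sketch of honoring the NO_COLOR convention; not cli-bot's code.
colorize_red() {
  if [ -n "${NO_COLOR:-}" ]; then
    printf '%s\n' "$1"                  # plain text when NO_COLOR is set
  else
    printf '\033[31m%s\033[0m\n' "$1"   # ANSI red otherwise
  fi
}

colorize_red "error"
```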
- `--config <path>`: use a specific config file
- `--model <name>`: override the configured Ollama model for this invocation
- `--auto-select-best`: automatically use the LLM-recommended command when multiple choices are returned
- `--dry-run`: show the selected command without executing it
- `--print-plan`: print the structured planner response
- `--benchmark`: print planning, execution, and total elapsed time in milliseconds
- `--models-benchmark`: run configured model/query benchmarks and print a Markdown report
- `--check`: verify config, Ollama connectivity, model availability, and editor resolution
- `--color <auto|always|never>`: control ANSI color output
- `--quiet`: hide cli-bot informational output and only show the selected command output
- `--verbose`: print detailed actions and full Ollama responses for debugging
- Changelog
- Installation
- Install Ollama
- Homebrew
- crates.io Release
- Publishing
- Architecture
- Configuration
- Usage
- Testing
- Roadmap
If you use the pre-commit framework, install the repo hooks with:
```sh
pre-commit install
```

The repository includes `.pre-commit-config.yaml`, which runs `./scripts/verify.sh` before commits.