Your loyal companion for safe shell command generation. Published on crates.io with core features working - advanced features in active development.
Get started with Caro based on your workflow and goals
Get started with the core CLI tool - install via cargo and start generating safe commands
Early access to MLX-optimized inference for blazing-fast local command generation
Install Caro as a Claude Skill, or use the upcoming MCP server for automatic delegation
Connect to remote inference servers or configure custom backends
Real-world examples of safe, POSIX-compliant commands generated by Caro
find . -type f -size +1G -exec ls -lh {} \;  →  Safe
find . -name '*.png' -type f -mtime -7  →  Safe
du -h --max-depth=1 | sort -hr  →  Safe
ps aux --sort=-%mem | head -n 11  →  Safe
top -b -n 1 | head -n 20  →  Safe
lsof -i :8080  →  Safe
git log --since='1 week ago' --oneline  →  Safe
git for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate:short) %(refname:short)'  →  Safe
git log --follow -p -- <filename>  →  Safe
grep -i 'error' /var/log/app.log | tail -n 50  →  Safe
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn  →  Safe
journalctl -p err -n 100  →  Safe
ping -c 4 example.com  →  Safe
ss -tlnp  →  Safe
wget -c https://example.com/file.tar.gz  →  Safe
grep -Eo '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt | sort -u  →  Safe
wc $(find . -name '*.txt')  →  Safe
sed -i 's/old_text/new_text/g' *.txt  →  Moderate

Choose how you want to work with Caro - standalone, with Claude, or both
Use Caro directly from your terminal for instant command generation
# Generate a command
caro "find all Python files modified today"
# Review the command
find . -name "*.py" -type f -mtime 0
# Execute (after confirmation)
[y/N]? y
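If you set up the optional ai alias shown in the installation section below, the same flow takes fewer keystrokes. A plausible session for illustration (the generated command mirrors the ss example above):

# Using the optional alias ai='caro'
ai "show listening TCP ports"
ss -tlnp
[y/N]? y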
Integrate Caro as an MCP server for seamless Claude Desktop integration

// claude_desktop_config.json
{
  "mcpServers": {
    "caro": {
      "command": "caro",
      "args": ["--mcp"],
      "env": {
        "CARO_BACKEND": "mlx"
      }
    }
  }
}
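To sanity-check the binary outside Claude Desktop, you can run the same command the config launches. Expecting it to idle while waiting for MCP messages on stdin is an assumption based on how MCP servers typically behave, not documented Caro output:

# Manually start the process Claude Desktop would launch (Ctrl-C to exit)
CARO_BACKEND=mlx caro --mcp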
Use Caro as a specialized skill within Claude's broader workflow

# Install Caro as a Claude skill
/plugin install wildcard/caro
# Use the skill
You: /caro-shell-helper "find large log files and compress them"
Caro: I'll generate a safe command for that:
find /var/log -name "*.log" -size +100M \
-exec gzip {} \;
Risk Level: Moderate
- Modifies files (compression)
- Targets system logs
Proceed? [y/N]

Caro's multi-layered safety validation protects you from dangerous commands
1. Generate: the LLM produces a POSIX-compliant command from natural language
2. Pattern check: the command is screened against 52 dangerous patterns and destructive operations
3. Risk assessment: potential impact is evaluated and a risk level assigned (Safe → Critical)
4. Confirmation: the command is displayed with its risk level, and explicit confirmation is required
5. Execution: the command runs only after user approval and is logged for an audit trail
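As a minimal sketch of how such a pipeline fits together, here is an illustrative POSIX shell version. The patterns, risk buckets, and log path are simplified assumptions for illustration, not Caro's actual implementation:

#!/bin/sh
# Illustrative validate-confirm-execute flow (NOT Caro's real code)
cmd="$1"

# Step 2: block anything matching a known dangerous pattern
if printf '%s' "$cmd" | grep -Eq 'rm -rf /|mkfs|dd if=/dev/zero'; then
  echo "BLOCKED: matches a dangerous pattern" >&2
  exit 1
fi

# Steps 3-4: assign a coarse risk level, then require confirmation
case "$cmd" in
  *"sed -i"*|*"wget "*|*"curl "*)  risk="Moderate" ;;
  *"chmod "*|*"chown "*|*"kill "*) risk="High" ;;
  *)                               risk="Safe" ;;
esac
printf 'Risk level: %s\nRun [%s]? [y/N] ' "$risk" "$cmd"
read -r answer
[ "$answer" = "y" ] || exit 0

# Step 5: execute only after approval, and keep an audit trail
printf '%s\t%s\n' "$(date -u)" "$cmd" >> "$HOME/.caro_audit.log"
eval "$cmd"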
Built-in pattern matching helps identify potentially destructive operations
rm -rf /
mkfs.*
:(){ :|:& };:
chmod 777 /
dd if=/dev/zero
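The fork bomb entry is the least readable of these. Expanded with a readable name it is bomb() { bomb | bomb & }; bomb, a function that pipes into itself and backgrounds a copy, doubling processes until the system is exhausted. A literal fixed-string match is enough to catch the canonical form (illustration only; Caro's real matcher covers 52 patterns):

# Fixed-string check for the canonical fork bomb (do NOT run the bomb itself)
printf '%s' ':(){ :|:& };:' | grep -Fq ':(){ :|:& };:' && echo "blocked"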
Every command is evaluated and color-coded based on potential impact

Safe: Read-only operations with no system impact
ls -la
grep pattern file.txt
find . -name '*.log'

Moderate: File modifications or network operations
sed -i 's/old/new/g' file.txt
wget https://example.com/file
curl -X POST api.example.com

High: System-level changes requiring caution
chmod +x script.sh
chown user:group file
kill -9 <pid>

Critical: Dangerous operations that are blocked by default
rm -rf /
mkfs /dev/sda
fork bombs

POSIX compliance ensures commands work reliably across different systems and shells
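For example, here is an illustrative pair (not actual Caro output) showing a bash-only construct next to a portable POSIX equivalent:

# bash-only: process substitution is not defined by POSIX sh
diff <(sort a.txt) <(sort b.txt)

# POSIX-portable equivalent using temporary files
sort a.txt > /tmp/a.sorted
sort b.txt > /tmp/b.sorted
diff /tmp/a.sorted /tmp/b.sorted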
Flexible inference options from ultra-fast local to scalable cloud deployments
Apple Silicon Native (In Development)
Work in progress: local inference optimized for Apple Silicon with Metal Performance Shaders
Local & Flexible
Connect to Ollama for local LLM inference with easy model management
High-Throughput Server
Connect to vLLM servers for production-grade inference with optimized performance
Built-in Fallback
CPU-based embedded backend works out of the box as automatic fallback
Just getting started? The CPU backend works out of the box - no configuration needed.
Want local models? Install Ollama for easy local inference with model flexibility.
Have a team server? Connect to vLLM for centralized, high-performance inference.
On Apple Silicon? MLX support is coming soon for ultra-fast local inference.
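You can also steer a single invocation at a specific backend. The CARO_BACKEND variable below mirrors the one set in the MCP config example above; treating it as a general per-invocation override is an assumption, not documented behavior:

# Point this invocation at a local Ollama daemon (assumed override)
CARO_BACKEND=ollama caro "show the 10 largest files in this directory"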
Get Caro running on your preferred platform with step-by-step instructions
macOS 12+ (Monterey, Ventura, Sonoma)
# Quick install
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)

# Or install from crates.io
cargo install caro
# Add alias to ~/.zshrc or ~/.bashrc:
alias ai='caro'

# Or build from source
git clone https://github.com/wildcard/caro.git
cd caro
. "$HOME/.cargo/env"
cargo build --release

Ubuntu 20.04+, Debian 11+, Fedora 35+, Arch, RHEL 8+
# Quick install
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)

# Or install from crates.io
cargo install caro
alias ai='caro' # Add to ~/.bashrc

# Or build from source
git clone https://github.com/wildcard/caro.git
cd caro
cargo build --release

Windows 10/11 (WSL2 recommended)
# In PowerShell (Admin):
wsl --install

# After restart, in WSL Ubuntu:
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)
cargo install caro
alias ai='caro'

FreeBSD 13+, OpenBSD 7+, NetBSD 9+
# Install Rust if needed
pkg install rust cargo

# Install from crates.io
cargo install caro

# Or build from source
git clone https://github.com/wildcard/caro.git
cd caro
cargo build --release

Once installed, Caro's configuration is consistent across all platforms:
caro config set backend [mlx|ollama|vllm|embedded]
caro config set safety.level [strict|moderate|permissive]
caro config show

Caro is open source and built with the community. Contribute, report issues, or join the conversation.