v1.1.0-beta.1

Explore Caro

Your loyal companion for safe shell command generation. Published on crates.io with core features working; advanced features are in active development.

Choose Your Path

Get started with Caro based on your workflow and goals

👨‍💻

Try Caro Today

Get started with the core CLI tool: install via cargo and start generating safe commands

1 Install with: cargo install caro
2 Try: caro 'list all files larger than 1GB'
3 Review the generated command with safety validation
4 Execute or customize as needed
Installation Guide →
๐ŸŽ

Apple Silicon Users

In Development

Early access to MLX-optimized inference for blazing-fast local command generation

1 Build from source with MLX support (in development)
2 Experience sub-2s inference on M1/M2/M3
3 Run completely offline with local models
4 Help us test and improve performance
MLX Details →
🤖

Claude Integration

Skill Available

Install Caro as a Claude Skill, or use the upcoming MCP server for automatic delegation

1 Install with: /plugin install wildcard/caro
2 Use /caro-shell-helper for command generation
3 MCP server integration (coming soon)
4 Automatic safety validation for Claude's workflows
Integration Methods →
🔧

Advanced Setups

Connect to remote inference servers or configure custom backends

1 Use Ollama for local multi-model support
2 Connect to vLLM servers for team deployments
3 Configure backends in ~/.config/caro/config.toml
4 Automatic fallback to embedded CPU backend
Backend Config →

Use Cases

Real-world examples of safe, POSIX-compliant commands generated by Caro

📁

File Management

You: Find all files larger than 1GB
find . -type f -size +1G -exec ls -lh {} \; ✓ Safe
You: List all PNG images modified in the last 7 days
find . -name '*.png' -type f -mtime -7 ✓ Safe
You: Show disk usage of each directory sorted by size
du -h --max-depth=1 | sort -hr ✓ Safe
📊

System Monitoring

You: Show top 10 memory-consuming processes
ps aux --sort=-%mem | head -n 11 ✓ Safe
You: Monitor CPU usage in real-time
top -b -n 1 | head -n 20 ✓ Safe
You: Check which process is using port 8080
lsof -i :8080 ✓ Safe
🔀

Git & Version Control

You: Show commits from the last week
git log --since='1 week ago' --oneline ✓ Safe
You: List all branches sorted by last commit date
git for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate:short) %(refname:short)' ✓ Safe
You: Find who changed a specific file
git log --follow -p -- <filename> ✓ Safe
📝

Log Analysis

You: Find all ERROR entries in application logs
grep -i 'error' /var/log/app.log | tail -n 50 ✓ Safe
You: Count HTTP status codes in access log
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn ✓ Safe
You: Show last 100 system errors
journalctl -p err -n 100 ✓ Safe
🌐

Network Operations

You: Test connection to a remote server
ping -c 4 example.com ✓ Safe
You: Show all listening TCP ports
ss -tlnp ✓ Safe
You: Download a file with resume support
wget -c https://example.com/file.tar.gz ✓ Safe
✂️

Text Processing

You: Extract unique email addresses from a file
grep -Eo '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt | sort -u ✓ Safe
You: Count lines, words, and characters in all .txt files
find . -name '*.txt' -exec wc {} + ✓ Safe
You: Replace all occurrences in multiple files
sed -i 's/old_text/new_text/g' *.txt ⚠ Moderate

Integration Methods

Choose how you want to work with Caro: standalone, with Claude, or both

⌨️

Standalone CLI

Available Now

Use Caro directly from your terminal for instant command generation

Features

  • Direct command-line interface
  • Interactive confirmation prompts
  • Customizable safety levels
  • Shell alias support

Basic Usage

# Generate a command
caro "find all Python files modified today"

# Review the command
find . -name "*.py" -type f -mtime 0

# Execute (after confirmation)
[y/N]? y

Setup

  1. Install Caro from source or download the binary
  2. Add to PATH or create shell alias
  3. Configure backend preferences (optional)
  4. Start using: caro "your natural language prompt"
🔌

Claude MCP Server

Coming Soon

Integrate Caro as an MCP server for seamless Claude Desktop integration

Features

  • Automatic delegation from Claude
  • Safety validation without interruption
  • Context-aware command generation
  • Shared conversation history

MCP Configuration

// claude_desktop_config.json
{
  "mcpServers": {
    "caro": {
      "command": "caro",
      "args": ["--mcp"],
      "env": {
        "CARO_BACKEND": "mlx"
      }
    }
  }
}

Setup

  1. Install Caro MCP package
  2. Configure Claude Desktop settings
  3. Enable Caro MCP server
  4. Claude automatically uses Caro for shell commands
🎯

Dedicated Skill

Available Now

Use Caro as a specialized skill within Claude's broader workflow

Features

  • Explicit skill invocation
  • Specialized shell expertise
  • Multi-turn command refinement
  • Batch command generation

Skill Installation

# Install Caro as a Claude skill
/plugin install wildcard/caro

# Use the skill
You: /caro-shell-helper "find large log files and compress them"

Caro: I'll generate a safe command for that:

find /var/log -name "*.log" -size +100M \
  -exec gzip {} \;

Risk Level: Moderate
- Modifies files (compression)
- Targets system logs

Proceed? [y/N]

Setup

  1. Install with: /plugin install wildcard/caro
  2. Invoke with /caro-shell-helper command
  3. Provide natural language description
  4. Review and approve generated commands

Safety First

Caro's multi-layered safety validation protects you from dangerous commands

How Safety Validation Works

  1. Command Generation: the LLM generates a POSIX-compliant command from natural language
  2. Pattern Matching: the command is checked against 52 dangerous patterns and destructive operations
  3. Risk Assessment: potential impact is evaluated and a risk level is assigned (Safe → Critical)
  4. User Confirmation: the command is displayed with its risk level, and explicit confirmation is required
  5. Execution: the command runs only after user approval and is logged for an audit trail
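
Put together, a session might look like this (the output format is illustrative, modeled on the skill example later in this page):

caro "compress logs older than 30 days"
find /var/log -name '*.log' -mtime +30 -exec gzip {} \;
Risk Level: Moderate (modifies files, targets system logs)
Proceed? [y/N]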

🛡️

50+ Dangerous Patterns Detected

Built-in pattern matching helps identify potentially destructive operations

Example Blocked Patterns

  • rm -rf / (critical): filesystem destruction
  • mkfs.* (critical): disk formatting
  • :(){ :|:& };: (critical): fork bomb
  • chmod 777 / (high): privilege escalation
  • dd if=/dev/zero (critical): disk overwrite
📊

Risk Level Assessment

Every command is evaluated and color-coded based on potential impact

Safe

Read-only operations with no system impact

  • ls -la
  • grep pattern file.txt
  • find . -name '*.log'

Moderate

File modifications or network operations

  • sed -i 's/old/new/g' file.txt
  • wget https://example.com/file
  • curl -X POST api.example.com

High

System-level changes requiring caution

  • chmod +x script.sh
  • chown user:group file
  • kill -9 <pid>

Critical

Dangerous operations that are blocked by default

  • rm -rf /
  • mkfs /dev/sda
  • fork bombs
✅

POSIX Compliance Validation

Ensures commands work reliably across different systems and shells

  • Standard utilities only (ls, find, grep, awk, sed)
  • Proper path quoting for spaces and special characters
  • No bash-specific features (ensures portability)
  • Command syntax validation before execution
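
For instance, the quoting rules above mean generated commands keep globs and paths intact (the paths here are illustrative):

# Quoted glob: find receives '*.log' literally instead of a pre-expanded file list
find . -name '*.log' -type f
# Quoted path: spaces survive word-splitting
grep 'pattern' '/path/with spaces/file.txt'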

Choose Your Backend

Flexible inference options from ultra-fast local to scalable cloud deployments

๐ŸŽ

MLX

Apple Silicon Native

In Development

Work in progress: local inference optimized for Apple Silicon with Metal Performance Shaders

Key Features

  • Sub-2s inference on M1/M2/M3 chips
  • Unified memory architecture for efficiency
  • Complete offline capability
  • Privacy-focused (no data leaves your machine)
  • Metal GPU acceleration

Performance

Startup: < 100ms
Inference: < 2s
Memory: ~2GB with a 7B model

Quick Setup

  1. Install MLX framework: pip install mlx
  2. Download optimized model from Hugging Face
  3. Configure Caro: caro config set backend mlx
  4. Test: caro "list files"
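
Once MLX support lands, the steps above should reduce to something like this (commands mirror the list; exact flags may change before release):

pip install mlx
# download an MLX-optimized model from Hugging Face, then:
caro config set backend mlx
caro "list files"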

Best For

macOS users with Apple Silicon, privacy-conscious developers, offline workflows, low-latency requirements
🦙

Ollama

Local & Flexible

Available

Connect to Ollama for local LLM inference with easy model management

Key Features

  • Cross-platform (macOS, Linux, Windows)
  • Simple model management (ollama pull/list)
  • Multiple model support (Llama, Mistral, etc.)
  • GPU acceleration (CUDA, Metal)
  • Active community and ecosystem

Performance

Startup: < 200ms
Inference: 2-5s (hardware dependent)
Memory: varies by model

Quick Setup

  1. Install Ollama from ollama.ai
  2. Pull a model: ollama pull llama2
  3. Configure Caro: caro config set backend ollama
  4. Specify model: caro config set ollama.model llama2
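
In full, the steps look like this (llama2 as in the list; any model Ollama serves should work):

ollama pull llama2
caro config set backend ollama
caro config set ollama.model llama2
caro "show disk usage"   # quick test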

Best For

Cross-platform development, model experimentation, local-first workflows, GPU-accelerated inference
🚀

vLLM

High-Throughput Server

Available

Connect to vLLM servers for production-grade inference with optimized performance

Key Features

  • PagedAttention for efficient memory
  • Continuous batching for throughput
  • Multi-GPU support
  • OpenAI-compatible API
  • Production-ready deployment

Performance

Startup: < 150ms (client)
Inference: < 5s (network dependent)
Memory: server-side (the client is lightweight)

Quick Setup

  1. Deploy vLLM server (Docker/k8s recommended)
  2. Configure endpoint: caro config set vllm.url https://api.example.com
  3. Set API key: caro config set vllm.api_key YOUR_KEY
  4. Test connection: caro "echo hello"
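
Spelled out, with the placeholder endpoint and key from the steps above:

caro config set vllm.url https://api.example.com
caro config set vllm.api_key YOUR_KEY
caro config set backend vllm
caro "echo hello"   # verify the connection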

Best For

Team deployments, centralized inference, high-throughput scenarios, cloud-based workflows
📦

CPU Backend

Built-in Fallback

Available

CPU-based embedded backend works out of the box as automatic fallback

Key Features

  • Zero external setup required
  • Cross-platform (macOS, Linux, Windows)
  • Automatic fallback mode
  • Simple, reliable baseline option
  • Works immediately after installation

Performance

Startup: < 50ms
Inference: 5-10s
Memory: < 500MB

Quick Setup

  1. No setup needed: works out of the box
  2. Automatically used if no other backend configured
  3. Override with: caro config set backend embedded

Best For

First-time users, quick testing, emergency fallback, minimal-setup environments

Which Backend Should I Choose?

Just getting started? The CPU backend works out of the box; no configuration needed.
Want local models? Install Ollama for easy local inference with model flexibility.
Have a team server? Connect to vLLM for centralized, high-performance inference.
On Apple Silicon? MLX support is coming soon for ultra-fast local inference.

Installation Guides

Get Caro running on your preferred platform with step-by-step instructions

๐ŸŽ

macOS

macOS 12+ (Monterey, Ventura, Sonoma)

Apple Silicon (M1/M2/M3), Intel x86_64

Installation

One-Line Install (Recommended)
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)
Using Cargo
cargo install caro
# Add alias to ~/.zshrc or ~/.bashrc:
alias ai='caro'
Build from Source
git clone https://github.com/wildcard/caro.git
cd caro
. "$HOME/.cargo/env"   # ensure cargo is on PATH (if installed via rustup)
cargo build --release

Post-Installation

  1. Verify: caro --version (or ai --version if alias set)
  2. Try it: caro "list files in current directory"
  3. Configure backends in ~/.config/caro/config.toml (optional)

Platform-Specific Tips

  • MLX backend coming soon for Apple Silicon performance
  • Use Ollama for local model flexibility
  • CPU backend works immediately with zero configuration
๐Ÿง

Linux

Ubuntu 20.04+, Debian 11+, Fedora 35+, Arch, RHEL 8+

x86_64, ARM64

Installation

One-Line Install (Recommended)
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)
Using Cargo
cargo install caro
alias ai='caro' # Add to ~/.bashrc
Build from Source
git clone https://github.com/wildcard/caro.git
cd caro
cargo build --release

Post-Installation

  1. Verify: caro --version
  2. Try: caro "show disk usage"
  3. Optional: Install Ollama for local models

Platform-Specific Tips

  • CPU backend works out of the box
  • Connect to Ollama or vLLM for more features
  • Configure backends in ~/.config/caro/config.toml
🪟

Windows

Windows 10/11 (WSL2 recommended)

x86_64

Installation

WSL2 (Recommended)
# In PowerShell (Admin):
wsl --install
# After restart, in WSL Ubuntu:
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)
Using Cargo in WSL
cargo install caro
alias ai='caro'

Post-Installation

  1. Verify: caro --version
  2. Try: caro "list files"
  3. Works with POSIX commands in WSL environment

Platform-Specific Tips

  • WSL2 provides full POSIX compatibility
  • Native Windows support planned for future releases
  • Use Windows Terminal for best experience
😈

BSD

FreeBSD 13+, OpenBSD 7+, NetBSD 9+

x86_64, ARM64

Installation

Using Cargo
# Install Rust if needed
pkg install rust cargo
cargo install caro
Build from Source
git clone https://github.com/wildcard/caro.git
cd caro
cargo build --release

Post-Installation

  1. Verify: caro --version
  2. Try: caro "list open ports"
  3. POSIX compliance ensures BSD compatibility

Platform-Specific Tips

  • CPU backend works reliably on BSD systems
  • Connect to Ollama or vLLM as needed
  • Excellent POSIX command compatibility

Universal Configuration

Once installed, Caro's configuration is consistent across all platforms:

caro config set backend [mlx|ollama|vllm|embedded]
caro config set safety.level [strict|moderate|permissive]
caro config show

Join the Pack

Caro is open source and built with the community. Contribute, report issues, or join the conversation.