Generate shell commands that work the first time—with pre-execution validation that catches what AI gets wrong. 100% local. Privacy-first. Your commands never leave your machine.
--dangerously-skip-permissions didn't help. AI hallucinations are inevitable.
Natural language in. Safe, working commands out.
Caro stops what permission flags can't
Caro warns you before you wipe critical logs
Platform-aware commands that work the first time
rm -rf /
rm -rf ~
:(){ :|:& };:
dd if=/dev/zero of=/dev/sda
chmod -R 777 /
mkfs.ext4 /dev/sda1

52+ dangerous patterns detected and blocked automatically
--dangerously-skip-permissions flag didn't help
— HN, Dec 2025
AI confidently made up paths that didn't exist
— HN, Jul 2025
These aren't edge cases. LLMs are probabilistic systems—failures are inevitable at scale.
Defense in depth for AI-powered shell tools
Never run AI tools with sudo or as root. Create a dedicated user with minimal permissions.
useradd --no-create-home --shell /bin/false caro-agent

Confine AI agents to specific directories. Protect /home, /etc, and system paths.

# Caro warns on operations outside working directory

Run AI tools in containers with no access to important data or the host filesystem.

docker run --rm -v $(pwd):/workspace caro-sandbox

LLMs are probabilistic. Even with 99% accuracy, 1 in 100 commands could be dangerous.

# AI may fabricate paths: rm -rf /imaginary/but/destructive

Each layer catches what the others miss. Caro is your last line of defense—not your only one.
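The working-directory confinement in layer two can be sketched as a simple path-prefix check. This is illustrative only: the function name `in_workdir` is ours, and any real guard (including Caro's, which lives in the binary) must also resolve symlinks and ".." segments before comparing.

```shell
# Naive sketch of a working-directory guard: allow paths under the
# sandbox root, warn on anything else.
in_workdir() {
  target="$1"
  workdir="$2"
  case "$target" in
    "$workdir"/*) return 0 ;;  # inside the sandbox
    *)            return 1 ;;  # outside -- warn before running
  esac
}

in_workdir /workspace/build/out.log /workspace && echo "allowed"
in_workdir /etc/passwd /workspace || echo "warn: outside working directory"
```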
Even at 99.9% AI accuracy, a team running 100,000 commands a day faces 100 potentially dangerous ones. One bad hallucination without Caro = catastrophe.
See the specific risks Caro catches for your workflow
rm -rf /var/log/*
docker system prune -a --volumes -f
chmod -R 777 /var/www
systemctl restart *
find / -size +100M -delete
kubectl delete pods --all -n production
rm -rf ~/backups/db*
pkill -9 -f java
git push --force origin main
rm -rf /data/tmp/*
DROP TABLE users_backup;
kafka-consumer-groups --reset-offsets --to-earliest --all-topics --execute

Real concerns from the community that shaped Caro
"Claude deleted my entire home directory. The --dangerously-skip-permissions flag didn't help. It just... ran rm -rf ~/"
"Gemini hallucinated file paths and confidently deleted files that I never asked it to touch. LLMs are stochastic—they WILL fail."
"1 in 10 times it fails, nearly always because the demo gods got involved. LLMs are probabilistic—unreliability is fundamental to the architecture."
"Never assume that flags are sufficient. Run AI tools as unprivileged users in sandboxed environments."
Caro doesn't just block dangerous commands—it explains why they're dangerous and suggests safer alternatives. You stay in control while learning from every interaction.
Specific experiences, not generic praise
"Caught a recursive delete pattern I would have missed at 2 AM during an incident. The warning was specific enough that I understood WHY it was dangerous."
"We use it for onboarding. New engineers learn shell safety while being productive. No more scary 'don't touch production' lectures—Caro teaches in context."
"Compliance asked if our AI tools send data externally. Showed them Caro's source code—100% local. Approved same day. That never happens."
Don't take our word for it—
Read the source code, verify the claims

Built for engineers who can't afford to get it wrong
Blocks rm -rf /, fork bombs, and 50+ other career-ending commands BEFORE you can run them. Your 2 AM self will thank you.
Privacy-first design. No cloud API calls. Run in air-gapped networks. Pass any compliance audit. Your commands never leave your machine.
Warns when running as root. Blocks sudo escalation patterns. Flags operations on /home, ~, /etc. Defense in depth for AI agents that can't be trusted with full access.
Generates commands that work on your Mac, your Linux server, and your coworker's BSD box. First time. Every time.
Sub-2s inference on Apple Silicon. No waiting for cloud APIs. No wondering if the server is down. Just answers.
LLMs are probabilistic—they will hallucinate commands. Caro doesn't care about the source. Pattern matching catches dangerous commands whether they came from you or a confused AI.
AI can't be held accountable—but you can. Caro bridges the decision responsibility gap: explicit warnings give you the information to make informed decisions, not dice rolls.
See exactly what Caro blocks and why
View safety patterns →

The differences that matter
Your production commands, server names, and file paths never leave your machine. Ever.
52+ safety patterns including rm -rf, fork bombs, and disk wipes. Pre-execution, not post-mortem.
Flags like --dangerously-skip-permissions still let AI delete your home directory. Caro's validation is deterministic.
Detects your OS, knows BSD vs GNU, and adjusts syntax automatically. No more Stack Overflow.
Real questions from skeptical engineers (we get it)
You're right—it happened with both Claude Code and Gemini CLI in 2025. The tools had safety flags but still ran destructive commands. Caro's safety is pattern-based, not permission-based. We block destructive patterns at the command level. Flags can be bypassed. Pattern matching can't.
Exactly—and that's why Caro doesn't trust the source. Whether a command comes from you, an AI, or a hallucinating LLM, Caro validates the command itself. If an AI hallucinates 'rm -rf /nonexistent/but/dangerous/path', the pattern is still blocked. Deterministic validation beats probabilistic generation.
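A minimal sketch of this source-agnostic, pattern-based validation. Everything here is illustrative: the function `is_dangerous` and the four-entry denylist are ours, substring matching stands in for Caro's stricter rules, and the real 52+ patterns are compiled into the binary.

```shell
# Check a candidate command against a fixed denylist before execution.
is_dangerous() {
  cmd="$1"
  while IFS= read -r pattern; do
    case "$cmd" in
      *"$pattern"*) return 0 ;;  # matched a dangerous pattern
    esac
  done <<'EOF'
rm -rf /
:(){ :|:& };:
chmod -R 777 /
mkfs.ext4 /dev/sd
EOF
  return 1
}

# Same verdict whether a human or a hallucinating AI wrote the command:
is_dangerous 'rm -rf /nonexistent/but/dangerous/path' && echo "BLOCKED"
is_dangerous 'ls -la' || echo "ok to run"
```

The check never asks where the command came from—only what it says, which is what makes it deterministic.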
Defense in depth: (1) Run as unprivileged user without sudo, (2) Sandbox to specific directories, (3) Use container isolation, (4) Let Caro validate commands. Each layer catches what others miss. See our Best Practices section for detailed setup.
Caro's safety patterns are baked into the binary—no network needed. When you update Caro (cargo install caro --force), you get the latest patterns. The core dangerous commands (rm -rf /, fork bombs, disk wipers) don't change. We also accept pattern contributions via GitHub.
No. Caro adds <100ms to command generation. The safety check is instant (pattern matching, not AI inference). In a real incident, that's 100ms that might save you from making things 10x worse. The validation is synchronous—you see the warning immediately.
Yes. Caro is designed for teams running hundreds of developer accounts. Each user gets local validation with no shared state. No cloud dependencies means no data leaks between accounts. Deploy via your package manager or container registry.
Caro detects your OS and shell at runtime. On macOS, it knows you're using BSD tools. On Linux, it adjusts for GNU syntax. It reads your $SHELL and adjusts accordingly. No configuration needed—it just works.
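The runtime detection described above can be sketched like this (illustrative only; Caro's actual detection is internal to the binary):

```shell
# Detect the OS kernel and infer which userland syntax to target.
os="$(uname -s)"
case "$os" in
  Darwin)                 userland="BSD" ;;  # macOS ships BSD-flavored tools
  Linux)                  userland="GNU" ;;
  FreeBSD|OpenBSD|NetBSD) userland="BSD" ;;
  *)                      userland="unknown" ;;
esac

# $SHELL hints which shell's quoting and builtins to generate for.
echo "os=$os userland=$userland shell=${SHELL:-unknown}"
```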
Caro warns, it doesn't jail. When you see a warning, you can still proceed—we just make sure you're doing it intentionally. For truly destructive commands (rm -rf /), you'll need to confirm. This is your seatbelt, not a straitjacket.
No. Caro runs 100% locally. Your commands, file paths, server names, and directory structures never leave your machine. The inference happens on your hardware. We collect minimal, anonymous usage metrics to improve the product—see our telemetry page for details. Check the source code—it's AGPL-3.0 licensed.
You shouldn't trust them blindly—that's the point. Caro generates commands AND validates them before you run them. It's not 'trust the AI'—it's 'trust the pattern-based safety layer that catches what the AI might get wrong.' The validation is deterministic, not probabilistic.
Yes—this is called LLM sycophancy. AI tools are trained to agree with users and appear confident, even when they're wrong. Gemini told a user they were 'overqualified and underpaid'—completely fabricated career advice. In shell commands, this means the AI will confidently generate commands that look right but are subtly destructive. Caro doesn't care about confidence. It validates the actual command.
Exactly the problem. The machine cannot be held responsible, but the decision is yours. That's the 'decision responsibility gap'—you're accountable for commands an AI suggested but can't fully verify. Caro bridges this gap: you make informed decisions with explicit warnings about dangerous patterns. No more dice-roll decision making.
Still skeptical? Good—you should be.
Read the source code →

No account. No API key. No data collection. Just safer shell commands.
bash <(curl --proto '=https' --tlsv1.2 -sSfL https://setup.caro.sh)

Then run:

caro "find files modified in the last 7 days"

Prefer to build from source? See all installation options →