IronClaw is a free, open-source personal AI assistant written in Rust that runs AI agents with full local data control.
It keeps your secrets encrypted in a local PostgreSQL database and never exposes credentials to the underlying language model.
Most OpenClaw-like AI assistants that can actually do things (browse the web, call APIs, manage files) handle credentials by passing them directly into the LLM context. That's a real problem if you're feeding them API keys, OAuth tokens, or personal data.
IronClaw solves this at the architecture level: secrets are injected at the host boundary after the LLM has already decided what to do, so the model never sees them.
Features
- WASM Sandbox: Untrusted tools run in isolated WebAssembly containers with explicit capability-based permissions for HTTP access, secret access, and tool invocation.
- Credential Protection: Secrets are injected at the host boundary and are never passed to WASM tool code, with active leak detection that scans both outgoing requests and incoming responses.
- Prompt Injection Defense: External content passes through pattern detection, content sanitization, and policy enforcement with severity levels (Block, Warn, Review, Sanitize).
- Endpoint Allowlisting: HTTP requests from tools are restricted to explicitly approved hosts and URL paths.
- Multi-Channel Access: IronClaw accepts input via REPL, HTTP webhooks, WASM-compiled channels (Telegram, Slack), and a browser-based web gateway with SSE/WebSocket streaming.
- Routines Engine: Background tasks run on cron schedules, event triggers, or webhook handlers without requiring an active user session.
- Heartbeat System: The agent executes proactively in the background for monitoring and maintenance tasks.
- Parallel Jobs: Multiple requests run concurrently in isolated contexts managed by a priority-aware scheduler.
- Self-Repair: Stuck operations are detected automatically and recovered without manual intervention.
- Dynamic Tool Building: You describe a capability in plain language and IronClaw builds it as a WASM tool on the fly.
- MCP Protocol: IronClaw connects to Model Context Protocol servers to extend its capabilities.
- Plugin Architecture: New WASM tools and channels can be added without restarting the agent.
- Hybrid Memory Search: The workspace uses full-text and vector search combined via Reciprocal Rank Fusion for accurate context retrieval.
- Workspace Filesystem: Notes, logs, and context are stored in a flexible path-based structure inside your local PostgreSQL database.
- Identity Files: Personality and preference settings persist consistently across sessions.
- AES-256-GCM Encryption: All secrets are encrypted at rest using AES-256-GCM with no telemetry or external data sharing.
- Full Audit Log: Every tool execution is logged for review.
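The hybrid memory search above merges a full-text result list and a vector-similarity result list with Reciprocal Rank Fusion (RRF). As an illustrative sketch (not IronClaw's actual implementation), each document scores 1 / (k + rank) in every list it appears in, with k = 60 as the conventional constant:

```rust
use std::collections::HashMap;

/// Fuse multiple ranked lists with Reciprocal Rank Fusion:
/// each document accumulates 1 / (k + rank) per list (ranks are 1-based),
/// and the fused output is sorted by total score, highest first.
fn rrf_fuse(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in rankings {
        for (rank, doc) in list.iter().enumerate() {
            *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    // Hypothetical result lists from the two search backends.
    let fulltext = vec!["note_a", "note_b", "note_c"];
    let vector = vec!["note_a", "note_d", "note_b"];
    let fused = rrf_fuse(&[fulltext, vector], 60.0);
    // note_a ranks first: it is near the top of both lists.
    assert_eq!(fused[0].0, "note_a");
    println!("{:?}", fused);
}
```

A document that appears high in both lists outranks one that dominates only a single list, which is why RRF works well for combining lexical and semantic retrieval.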
OpenClaw vs. IronClaw
| Feature | OpenClaw | IronClaw |
|---|---|---|
| Language | TypeScript | Rust |
| Memory Safety | Runtime GC | Compile-time |
| Secret Handling | LLM sees secrets | Encrypted vault |
| Tool Isolation | Shared process | Per-tool Wasm |
| Prompt Injection | “Please don’t leak” | Architectural |
| Network Control | Unrestricted | Allowlist |
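The "Allowlist" row in the table reduces to a simple host-and-path-prefix check before any tool-originated request leaves the sandbox. A minimal sketch, assuming a (hypothetical) flat allowlist shape rather than IronClaw's actual config format:

```rust
/// Illustrative allowlist check: a request is permitted only if its host
/// matches exactly and its path starts with an approved prefix.
fn is_allowed(allowlist: &[(&str, &str)], host: &str, path: &str) -> bool {
    allowlist
        .iter()
        .any(|(h, prefix)| *h == host && path.starts_with(prefix))
}

fn main() {
    let allowlist = [("api.github.com", "/repos"), ("api.openai.com", "/v1")];
    assert!(is_allowed(&allowlist, "api.github.com", "/repos/nearai/ironclaw"));
    assert!(!is_allowed(&allowlist, "evil.example.com", "/repos"));
    assert!(!is_allowed(&allowlist, "api.github.com", "/user"));
}
```

Because the check runs in the host, a compromised or prompt-injected tool cannot widen its own network access.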
How to Use It
Prerequisites
Before installing, you’ll need Rust 1.85 or higher, PostgreSQL 15+ with the pgvector extension, and a NEAR AI account (the setup wizard handles authentication).
Installation
Windows (installer): Download the .msi directly from the releases page and run it.
Windows (PowerShell):
```
irm https://github.com/nearai/ironclaw/releases/latest/download/ironclaw-installer.ps1 | iex
```

macOS / Linux / WSL:

```
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/nearai/ironclaw/releases/latest/download/ironclaw-installer.sh | sh
```

Homebrew (macOS/Linux):

```
brew install ironclaw
```

Build from source:

```
git clone https://github.com/nearai/ironclaw.git
cd ironclaw
cargo build --release
cargo test
```

If you've modified any channel sources (e.g., the Telegram WASM channel), rebuild them first:

```
./scripts/build-all.sh
```

Database Setup

```
createdb ironclaw
psql ironclaw -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

First-Time Configuration
The wizard walks you through the database connection, NEAR AI authentication via browser OAuth, and secrets encryption using your system keychain. Bootstrap variables like DATABASE_URL and LLM_BACKEND are written to ~/.ironclaw/.env.
```
ironclaw onboard
```

Connecting an Alternative LLM Provider
Select “OpenAI-compatible” in the wizard, or set these environment variables directly:
```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-or-...
LLM_MODEL=anthropic/claude-sonnet-4
```

Compatible providers include OpenRouter (300+ models), Together AI, Fireworks AI, Ollama (local), vLLM, and LiteLLM.
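For a fully local setup, the same variables can point at Ollama's OpenAI-compatible endpoint, which listens on port 11434 by default. The model name below is only an example; use whichever model you have pulled, and note that Ollama ignores the API key value (a placeholder is fine):

```
LLM_BACKEND=openai_compatible
LLM_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=ollama
LLM_MODEL=llama3.1
```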
Running the Agent
```
# Interactive REPL
cargo run

# With debug logging
RUST_LOG=ironclaw=debug cargo run
```

CLI Command Reference
| Command | Purpose |
|---|---|
| `ironclaw onboard` | Run the interactive setup wizard |
| `cargo run` | Start the interactive REPL |
| `RUST_LOG=ironclaw=debug cargo run` | Start the REPL with debug output |
| `cargo build --release` | Build the release binary |
| `cargo test` | Run the test suite |
| `cargo test <test_name>` | Run a specific test |
| `cargo fmt` | Format source code |
| `cargo clippy --all --benches --tests --examples --all-features` | Run the linter |
| `createdb ironclaw_test` | Create the test database |
| `./scripts/build-all.sh` | Rebuild all WASM channel bundles |
| `./channels-src/telegram/build.sh` | Rebuild only the Telegram WASM channel |
Pros
- Memory Safety: The Rust codebase prevents runtime memory errors through compile-time checks.
- Data Privacy: The encrypted vault keeps all secrets completely hidden from the external LLM.
- Provider Independence: The configuration system supports any OpenAI-compatible API endpoint.
- High Concurrency: The scheduler manages multiple parallel jobs with isolated execution contexts.
Cons
- Strict Prerequisites: The installation requires a specific PostgreSQL version and the pgvector extension.
- Account Requirement: The setup wizard forces users to authenticate through a NEAR AI account initially.
- Compilation Time: Building from source takes a while; Rust release builds are compile-intensive, especially on first build when all dependencies are compiled.
Related Resources
- pgvector: The PostgreSQL extension that powers IronClaw’s vector search.
- OpenClaw: The TypeScript project that inspired IronClaw, useful for understanding the feature lineage.
- NEAR AI: The official hosted deployment environment for IronClaw with NEAR AI Inference.
- Model Context Protocol (MCP): The open protocol IronClaw uses for connecting external tool servers.
- OpenRouter: A proxy layer that gives IronClaw access to 300+ models via a single API key.
- Ollama: Run local LLMs that IronClaw can use as a drop-in backend for fully offline operation.
FAQs
Q: Does IronClaw require a NEAR AI account to function?
A: The onboarding wizard uses NEAR AI authentication by default, but IronClaw works with any OpenAI-compatible backend. You can set LLM_BACKEND=openai_compatible and point it at Ollama or any other provider. The NEAR AI account handles initial authentication; the LLM backend is a separate configuration.
Q: What does “secrets never touch the LLM” actually mean in practice?
A: When a WASM tool needs to make an authenticated HTTP request, it signals the host with a placeholder. The host process injects the actual credential into the request just before execution. The LLM’s reasoning and the credential value are never in the same context. IronClaw also scans both the outgoing request and the incoming response for any text that resembles stored secrets before the response reaches the LLM context.
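The placeholder mechanism can be sketched in a few lines. This is an illustrative shape, not IronClaw's actual API: the tool emits a token like `{{secret:name}}`, and the host swaps in the real value at the boundary, erroring out if any placeholder is left unresolved:

```rust
use std::collections::HashMap;

const PLACEHOLDER_PREFIX: &str = "{{secret:";

/// Replace `{{secret:<name>}}` placeholders with real values from the vault.
/// The LLM and the WASM tool only ever see the placeholder form.
fn inject_secrets(
    request_header: &str,
    vault: &HashMap<&str, &str>,
) -> Result<String, String> {
    let mut out = request_header.to_string();
    for (name, value) in vault {
        let token = format!("{{{{secret:{}}}}}", name);
        out = out.replace(&token, value);
    }
    // Refuse to send a request that still contains an unresolved placeholder.
    if out.contains(PLACEHOLDER_PREFIX) {
        return Err("unresolved secret placeholder".into());
    }
    Ok(out)
}

fn main() {
    let mut vault = HashMap::new();
    vault.insert("github_token", "ghp_realvalue");
    let header = "Authorization: Bearer {{secret:github_token}}";
    let injected = inject_secrets(header, &vault).unwrap();
    assert_eq!(injected, "Authorization: Bearer ghp_realvalue");
}
```

The key property is that substitution happens after the model has finished reasoning, so the credential value and the LLM context never coexist.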
Q: How does IronClaw compare to OpenClaw?
A: OpenClaw is the TypeScript predecessor that IronClaw reimplements in Rust. The functional gap is narrowing but the architectural differences are significant. IronClaw uses per-tool WASM sandboxes instead of a shared Docker process, PostgreSQL instead of SQLite, and compile-time memory safety instead of a garbage-collected runtime. The credential-handling model is also completely different: OpenClaw exposes secrets to the LLM context; IronClaw does not.
Q: Can IronClaw run entirely offline?
A: Yes. Point LLM_BASE_URL at a local Ollama instance, run PostgreSQL locally, and IronClaw has no external dependencies. The NEAR AI cloud deployment at ironclaw.com is an option, not a requirement.
Q: What is the WASM sandbox performance overhead?
A: The WASM sandbox adds per-call overhead compared to native tool execution, but IronClaw compensates with parallel job execution and a priority-aware scheduler.
Q: How do routines differ from a regular cron job?
A: A standard cron job runs a fixed script. IronClaw routines are AI-mediated: the agent receives the trigger, reasons about the current state (using its persistent memory and any tools it has access to), and takes action accordingly. You can also define event-driven routines and webhook handlers in the same framework.
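The three trigger kinds described above can be modeled as a small sum type. This is a hypothetical sketch of the shape, not IronClaw's actual routine definition format:

```rust
/// Illustrative model of the three routine trigger kinds:
/// a cron schedule, a named event, or an inbound webhook path.
#[derive(Debug, PartialEq)]
enum Trigger {
    Cron(String),    // e.g. "0 9 * * *"
    Event(String),   // e.g. "inbox.new_message"
    Webhook(String), // e.g. "/hooks/deploy"
}

fn describe(t: &Trigger) -> String {
    match t {
        Trigger::Cron(expr) => format!("runs on schedule {expr}"),
        Trigger::Event(name) => format!("fires when event {name} occurs"),
        Trigger::Webhook(path) => format!("handles HTTP POSTs to {path}"),
    }
}

fn main() {
    let routines = [
        Trigger::Cron("0 9 * * *".into()),
        Trigger::Event("inbox.new_message".into()),
        Trigger::Webhook("/hooks/deploy".into()),
    ];
    for r in &routines {
        println!("{}", describe(r));
    }
}
```

Whatever the trigger, the routine hands control to the agent rather than to a fixed script, which is what separates it from plain cron.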
Q: What license does IronClaw use?
A: IronClaw is dual-licensed under MIT and Apache 2.0. You can choose either license for your use case.