Multi-Model AI Coding Agent CLI – OpenClaude

A free, open-source Claude Code alternative: an AI coding agent CLI for OpenAI, Gemini, GitHub Models, Codex, and Ollama.

OpenClaude is an open source coding agent CLI that runs in the terminal and connects one workflow to cloud APIs and local model backends.

The coding agent executes prompts, file operations, and multi-step agent tasks directly from the command line. You can use it to maintain a single coding environment while switching between providers like OpenAI, Gemini, DeepSeek, and local Ollama models.

It supports OpenAI-compatible endpoints, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and several environment-based enterprise providers. You can run local models on your own hardware or connect to enterprise cloud APIs, depending on your project requirements.

Features

  • Executes bash commands, file edits, grep searches, and glob patterns directly in the terminal.
  • Streams real-time token output and tool progress during execution.
  • Processes multi-step tool loops with model calls and follow-up responses.
  • Accepts URL and base64 image inputs for vision-capable models.
  • Saves provider configurations in a local profile file for quick switching.
  • Directs specific agent tasks to different models based on user settings.
  • Searches the internet using DuckDuckGo or Firecrawl to retrieve external information.
  • Runs as a headless gRPC service for integration into CI/CD pipelines or custom interfaces.
  • Integrates with VS Code for launch control and theme support.

How to Use It

Getting Started

1. Install OpenClaude as a global CLI package.

npm install -g @gitlawb/openclaude

2. The package expects ripgrep to exist on the system path. Check that first if the CLI reports a missing dependency.

rg --version

3. Start the CLI with the default entry command.

openclaude

4. Use the built-in provider flow after launch if you want guided setup and saved profiles.

/provider

5. Use the GitHub Models onboarding flow if that is your target backend.

/onboard-github

6. OpenAI setup uses OpenAI-compatible environment variables plus a provider switch.

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openclaude

7. Windows PowerShell uses the same variables in PowerShell syntax.

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude

8. Local Ollama setup points the same OpenAI-compatible interface at the Ollama server.

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude

9. PowerShell syntax for the same local setup looks like this.

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
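The cloud and local setups above differ only in which endpoint the same variables point at. A minimal sketch of a helper that flips between them (the key and model names are placeholders, and the helper itself is an illustration, not part of OpenClaude):

```shell
# Sketch: switch the same OpenAI-compatible variables between a cloud
# backend and a local Ollama server. Key and model names are placeholders.
use_backend() {
  export CLAUDE_CODE_USE_OPENAI=1
  if [ "$1" = "local" ]; then
    export OPENAI_BASE_URL=http://localhost:11434/v1
    export OPENAI_MODEL=qwen2.5-coder:7b
    unset OPENAI_API_KEY   # Ollama needs no key
  else
    unset OPENAI_BASE_URL  # fall back to the default OpenAI endpoint
    export OPENAI_API_KEY=sk-your-key-here
    export OPENAI_MODEL=gpt-4o
  fi
}

use_backend local
echo "$OPENAI_MODEL"   # qwen2.5-coder:7b
```

Run `use_backend local` or `use_backend cloud` in the shell before launching `openclaude`; the CLI picks up whichever environment is active.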

Provider setup

| Provider | Setup path | Notes |
|---|---|---|
| OpenAI compatible | `/provider` or environment variables | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other `/v1`-compatible servers. |
| Gemini | `/provider` or environment variables | Supports API key, access token, or local ADC workflow on current main. |
| GitHub Models | `/onboard-github` | Uses an interactive onboarding flow with saved credentials. |
| Codex | `/provider` | Reuses existing Codex credentials when present. |
| Ollama | `/provider` or environment variables | Runs local inference and does not require an API key. |
| Atomic Chat | Advanced setup | Targets local Apple Silicon use. |
| Bedrock, Vertex, Foundry | Environment variables | Fits supported enterprise environments. |
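The OpenAI-compatible path covers any `/v1` server, so providers such as DeepSeek need only a different base URL. A sketch, reusing the same variables (the key is a placeholder):

```shell
# Any /v1-compatible server works through the same variables.
# Example: DeepSeek's OpenAI-compatible endpoint (key is a placeholder).
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_API_KEY=sk-your-key
export OPENAI_MODEL=deepseek-chat
# openclaude   # launch the CLI against DeepSeek
```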

Available Commands

| Command | What it does |
|---|---|
| `openclaude` | Starts the interactive CLI. |
| `/provider` | Opens guided provider setup and saved profile management. |
| `/onboard-github` | Starts GitHub Models onboarding. |
| `/model` | Selects models, with local OpenAI-compatible model discovery added in v0.1.8. |
| `npm install -g @gitlawb/openclaude` | Installs the package globally. |
| `npm install -g @gitlawb/openclaude@latest` | Updates to the latest published package. |
| `rg --version` | Verifies that ripgrep is installed and visible in the current shell. |
| `npm run dev:grpc` | Starts the headless gRPC server on the local machine. |
| `npm run dev:grpc:cli` | Starts the test CLI client that talks to the gRPC server. |
| `bun install` | Installs project dependencies for source builds. |
| `bun run build` | Builds the project from source. |
| `node dist/cli.mjs` | Starts the built CLI from the output directory. |
| `bun run dev` | Runs the development workflow. |
| `bun test` | Runs the full unit test suite. |
| `bun run test:coverage` | Generates unit coverage output. |
| `open coverage/index.html` | Opens the visual coverage report. |
| `bun run test:coverage:ui` | Rebuilds the coverage UI from existing coverage data. |
| `bun run test:provider` | Runs provider-focused tests. |
| `bun run test:provider-recommendation` | Runs provider recommendation tests. |
| `bun run smoke` | Runs the smoke test flow. |
| `bun run doctor:runtime` | Checks runtime health. |
| `bun run verify:privacy` | Runs privacy verification checks. |

Environment variables

| Variable | Purpose |
|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Turns on the OpenAI-compatible provider path. |
| `OPENAI_API_KEY` | Supplies the API key for OpenAI-compatible providers. |
| `OPENAI_MODEL` | Selects the target model name for OpenAI-compatible backends. |
| `OPENAI_BASE_URL` | Points the CLI at a custom OpenAI-compatible endpoint such as a local Ollama server. |
| `FIRECRAWL_API_KEY` | Turns on Firecrawl-powered search and fetch behavior. |
| `GRPC_PORT` | Sets the gRPC server port. The default is 50051. |
| `GRPC_HOST` | Sets the bind address for the gRPC server. The default is localhost. |
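The two gRPC variables combine to move the headless server off its default `localhost:50051` address. A sketch (the chosen port is arbitrary):

```shell
# Bind the headless gRPC server to a non-default address (values are examples).
export GRPC_HOST=127.0.0.1
export GRPC_PORT=50100
# npm run dev:grpc   # would now listen on 127.0.0.1:50100
echo "${GRPC_HOST}:${GRPC_PORT}"   # 127.0.0.1:50100
```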

Agent routing example

OpenClaude can map different agent roles to different backends through ~/.claude/settings.json.

{
  "agentModels": {
    "deepseek-chat": {
      "base_url": "https://api.deepseek.com/v1",
      "api_key": "sk-your-key"
    },
    "gpt-4o": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "agentRouting": {
    "Explore": "deepseek-chat",
    "Plan": "gpt-4o",
    "general-purpose": "gpt-4o",
    "frontend-dev": "deepseek-chat",
    "default": "gpt-4o"
  }
}
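The `agentRouting` table is a plain role-to-model lookup with a `default` fallback. The mapping in the config above behaves like this sketch (an illustration of the lookup, not OpenClaude's actual resolution code):

```shell
# Illustration of the role-to-model lookup implied by the config above;
# not OpenClaude's actual implementation.
route_agent() {
  case "$1" in
    Explore|frontend-dev) echo "deepseek-chat" ;;
    Plan|general-purpose) echo "gpt-4o" ;;
    *)                    echo "gpt-4o" ;;  # the "default" entry
  esac
}

route_agent Explore   # deepseek-chat
route_agent Plan      # gpt-4o
```

Each resolved model name is then looked up in `agentModels` to find its base URL and key.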

Web tool behavior

WebSearch uses DuckDuckGo by default on non-Anthropic models. WebFetch uses a basic HTTP fetch plus HTML-to-Markdown conversion unless Firecrawl is configured.

# Set a Firecrawl API key 
export FIRECRAWL_API_KEY=your-key-here

Headless server mode

The gRPC server listens on localhost:50051 by default and streams text chunks, tool calls, and permission requests over a bidirectional connection. The repository also includes a CLI client and a Protocol Buffers definition file for generating clients in other languages.

# Start the gRPC Server 
npm run dev:grpc

# Run the Test CLI Client
npm run dev:grpc:cli

Pros

  • Works with over 200 models through OpenAI-compatible endpoints plus native provider integrations.
  • Runs local inference for free via Ollama or Atomic Chat.
  • Routes different agents to different models to optimize cost and performance.
  • Provides a headless gRPC mode for programmatic integration into CI/CD and external applications.
  • Includes a VS Code extension for launch integration and theme support.

Cons

  • The application stores API keys in plaintext within the settings file.
  • Smaller local models struggle with long multi-step tool flows.
  • Non-Anthropic providers lack Anthropic-specific features.
  • The default DuckDuckGo search faces potential rate limits or blocks.

Related Resources

  • ripgrep: Install the required search tool.
  • Ollama: Run local models through an OpenAI-compatible endpoint.
  • Firecrawl: Add stronger search and fetch behavior for pages that need JavaScript rendering.
  • Best CLI AI Coding Agents: Discover more CLI AI coding agents.
