DeepSeek TUI: Free AI Coding Agent for DeepSeek (Claude Code Alternative)

Yet another alternative to Claude Code. Edit files, run shell commands, and manage git with approval gates or full auto mode. Powered by DeepSeek.

DeepSeek TUI is a free, open-source AI coding agent built on the DeepSeek model family.

Designed as an open Claude Code alternative, the coding agent runs from the deepseek command, reads and edits local files, executes shell commands, searches the web, manages git, and coordinates sub-agents from a keyboard-driven interface in your terminal.

It suits developers who want Claude Code or Codex-style capabilities built specifically around DeepSeek, with pricing that skews toward cheap Flash inference for routine tasks and Pro with extended thinking for harder problems. Auto mode handles that routing for you.


Features

  • Streams DeepSeek reasoning blocks in real time as the model works through each turn.
  • Auto mode sends a lightweight pre-turn routing call to select the model and thinking level before the main request runs.
  • Three agent modes: Plan (read-only exploration), Agent (interactive with approval gates), and YOLO (auto-approves all tool calls in trusted workspaces).
  • Tool registry: file operations, shell execution, git, web search and browsing, apply-patch, sub-agents, and MCP servers.
  • Tracks per-turn and session-level token usage with cache hit/miss breakdown and live cost estimates.
  • Sessions checkpoint to disk and resume, fork at a chosen turn, or roll back file changes.
  • LSP integration surfaces inline errors and warnings from rust-analyzer, pyright, typescript-language-server, gopls, and clangd after each file edit.
  • Reasoning effort cycles through off → high → max with Shift+Tab.
  • Skills system loads composable instruction packs from the workspace or global directories.
  • Supports NVIDIA NIM, Fireworks, SGLang, vLLM, and Ollama as provider alternatives to the DeepSeek API.
  • HTTP/SSE runtime API exposes the agent for headless and CI-driven workflows.
  • User memory persists preferences across sessions in a flat file injected into the system prompt.
  • Durable task queue survives restarts for long-running background work.
  • Native RLM runs batched analysis through cheap Flash children in parallel.

Use Cases

  • Review a large codebase in Plan mode before any shell or patch action runs.
  • Fix a bug from the terminal after attaching the relevant files through @path.
  • Run architecture, security, or release work through Auto mode so the agent selects the model and thinking level.
  • Connect MCP servers to extend the terminal agent with extra local tools.
  • Expose the DeepSeek engine to a local app through the HTTP and SSE runtime API.

How to Use It

Installation

Pick the install method that fits your existing toolchain:

# npm — easiest if Node is already installed; downloads prebuilt Rust binaries from GitHub Releases
npm install -g deepseek-tui

# Cargo — no Node required
cargo install deepseek-tui-cli --locked
cargo install deepseek-tui --locked

# Homebrew (macOS)
brew tap Hmbown/deepseek-tui
brew install deepseek-tui

# Direct binary download
# Prebuilt for Linux x64/ARM64, macOS x64/ARM64, Windows x64
# https://github.com/Hmbown/DeepSeek-TUI/releases

The npm package is an installer wrapper that downloads the matching Rust binaries and places them on your PATH. Users in mainland China can point npm at https://registry.npmmirror.com or configure a Cargo registry mirror for faster downloads:
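
# npm mirror (mainland China)
npm config set registry https://registry.npmmirror.com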

# ~/.cargo/config.toml
[source.crates-io]
replace-with = "tuna"
[source.tuna]
registry = "sparse+https://mirrors.tuna.tsinghua.edu.cn/crates.io-index/"

Linux ARM64 support (Raspberry Pi, Asahi, Graviton) is available from v0.8.8 onward via npm or the GitHub Releases page. For non-standard targets (musl, riscv64, FreeBSD), build from source using Rust 1.88+.
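
For those targets, a source build works with the same crate; a minimal sketch, assuming the repository path from the releases link above:

# Build from source with Rust 1.88+ (crate name matches the Cargo install step)
cargo install deepseek-tui-cli --locked --git https://github.com/Hmbown/DeepSeek-TUI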

Authentication

Run the auth command on first setup:

deepseek auth set --provider deepseek

The key is saved to ~/.deepseek/config.toml and applies from any directory. The DEEPSEEK_API_KEY environment variable also works, but the saved config key takes precedence and is easier to rotate. Check which credential source is active:

deepseek auth status

Remove a saved key:

deepseek auth clear --provider deepseek

Verify the full setup and API connectivity:

deepseek doctor
deepseek doctor --json    # machine-readable output for CI

Starting a Session

deepseek                                         # interactive TUI
deepseek "explain this function"                 # one-shot prompt
deepseek --model deepseek-v4-flash "summarize"   # explicit model
deepseek --model auto "fix this bug"             # auto-select model + thinking
deepseek --yolo                                  # auto-approve all tools

Auto Mode

--model auto sends a small pre-turn call to deepseek-v4-flash before the main request. The router reads the current prompt and recent context, then selects a concrete model (deepseek-v4-flash or deepseek-v4-pro) and thinking level (off, high, or max).

Simple or conversational turns stay on Flash with reasoning off. Coding, debugging, architecture, or ambiguous multi-step work can route to Pro with higher thinking.

The TUI shows the selected route, and cost tracking charges against the model that actually ran. Use a fixed model when you need repeatable benchmarks or a strict cost ceiling.
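
Both routing styles use flags already shown above; for example:

deepseek --model auto "refactor the auth middleware"              # router picks model and thinking
deepseek --model deepseek-v4-pro "refactor the auth middleware"   # fixed route for benchmarking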

Agent Modes

Mode    Behavior
Plan    Read-only exploration; the agent outlines a plan before touching any files
Agent   Default interactive mode with approval gates on each tool call
YOLO    Auto-approves all tool calls in a trusted workspace

Cycle modes with Tab or switch at any time with /model.

Reasoning Effort

Press Shift+Tab to cycle: off → high → max. Off skips extended reasoning entirely. High and max are appropriate for debugging, security review, or tasks where the model needs to reason through ambiguous multi-step problems.

Session Management

deepseek sessions                # list saved sessions
deepseek resume --last           # resume the most recent session
deepseek resume <SESSION_ID>     # resume a specific session by UUID
deepseek fork <SESSION_ID>       # fork a session at a chosen turn

The /restore command and revert_turn roll back workspace file changes using side-git snapshots stored under ~/.deepseek/snapshots/. These snapshots never touch your project’s own .git directory.
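
A typical recovery sequence, using only commands documented on this page (the snapshot directory layout is an internal detail and may change):

deepseek sessions                 # find the session ID
deepseek fork <SESSION_ID>        # branch before the problematic turn
ls ~/.deepseek/snapshots/         # snapshots live outside the project's .git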

Full Command Reference

Command                                     Purpose
deepseek                                    Launch the interactive TUI
deepseek "prompt"                           One-shot prompt
deepseek --model auto                       Auto-select model and thinking level
deepseek --yolo                             Launch in YOLO mode
deepseek auth set --provider deepseek       Save API key
deepseek auth status                        Show active credential source
deepseek auth clear --provider deepseek     Remove saved key
deepseek doctor                             Verify setup and API connectivity
deepseek doctor --json                      Machine-readable diagnostics
deepseek setup --status                     Read-only setup status
deepseek setup --tools --plugins            Scaffold tool and plugin directories
deepseek models                             List available API models
deepseek sessions                           List saved sessions
deepseek resume --last                      Resume the most recent session
deepseek resume <SESSION_ID>                Resume a specific session
deepseek fork <SESSION_ID>                  Fork a session at a chosen turn
deepseek serve --http                       Start HTTP/SSE API server
deepseek serve --acp                        ACP stdio adapter for Zed and custom agents
deepseek pr <N>                             Fetch a pull request and pre-seed review prompt
deepseek mcp list                           List configured MCP servers
deepseek mcp validate                       Validate MCP config and connectivity
deepseek mcp-server                         Run the dispatcher MCP stdio server
deepseek update                             Check for and apply binary updates
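
For headless workflows, deepseek serve --http starts the runtime API. The port and route below are assumptions for illustration only; check the project documentation for the actual HTTP/SSE schema:

deepseek serve --http

# Hypothetical endpoint and payload -- real route names may differ.
curl -N -X POST http://127.0.0.1:8080/v1/turns \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "summarize the open TODOs"}'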

Keyboard Shortcuts

Key         Action
Tab         Complete / or @ entries; cycle mode when idle
Shift+Tab   Cycle reasoning effort: off → high → max
F1          Searchable help overlay
Esc         Back / dismiss
Ctrl+K      Command palette
Ctrl+R      Resume an earlier session
Alt+R       Search prompt history and recover cleared drafts
Ctrl+S      Stash current draft
@path       Attach file or directory context in the composer

Provider Configuration

# NVIDIA NIM
deepseek auth set --provider nvidia-nim --api-key "YOUR_NVIDIA_API_KEY"
deepseek --provider nvidia-nim

# Fireworks
deepseek auth set --provider fireworks --api-key "YOUR_FIREWORKS_API_KEY"
deepseek --provider fireworks --model deepseek-v4-pro

# Self-hosted SGLang
SGLANG_BASE_URL="http://localhost:30000/v1" deepseek --provider sglang --model deepseek-v4-flash

# Self-hosted vLLM
VLLM_BASE_URL="http://localhost:8000/v1" deepseek --provider vllm --model deepseek-v4-flash

# Ollama
ollama pull deepseek-coder:1.3b
deepseek --provider ollama --model deepseek-coder:1.3b

Key Environment Variables

Variable                  Purpose
DEEPSEEK_API_KEY          API key
DEEPSEEK_BASE_URL         API base URL override
DEEPSEEK_MODEL            Default model override
DEEPSEEK_PROVIDER         Provider selection (deepseek, nvidia-nim, fireworks, sglang, vllm, ollama)
DEEPSEEK_PROFILE          Config profile name
DEEPSEEK_MEMORY           Set to on to enable user memory
DEEPSEEK_MAX_SUBAGENTS    Max concurrent sub-agents (clamped to 1–20)
DEEPSEEK_SANDBOX_MODE     read-only, workspace-write, danger-full-access, or external-sandbox
SGLANG_BASE_URL           Self-hosted SGLang endpoint
VLLM_BASE_URL             Self-hosted vLLM endpoint
OLLAMA_BASE_URL           Self-hosted Ollama endpoint
NO_ANIMATIONS=1           Force accessibility mode at startup
SSL_CERT_FILE             Custom CA bundle for corporate proxies
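
These compose cleanly for scripted runs; the example below uses only variables and commands documented on this page:

# Headless CI check: strict sandbox, capped sub-agents, machine-readable output
export DEEPSEEK_API_KEY="sk-..."           # a saved config key takes precedence if present
export DEEPSEEK_SANDBOX_MODE="read-only"
export DEEPSEEK_MAX_SUBAGENTS=4
deepseek doctor --json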

Config File

User config lives at ~/.deepseek/config.toml. A project-level overlay at <workspace>/.deepseek/config.toml can override non-sensitive settings. The overlay blocks api_key, base_url, provider, and mcp_config_path. Persistent UI preferences such as theme, auto_compact, show_thinking, and locale live in ~/.config/deepseek/settings.toml. Edit them from the TUI with /config or inspect the current state with /settings.

The same config file supports multiple provider profiles. Select a profile with --profile <name> or by setting DEEPSEEK_PROFILE.
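
A sketch of a multi-profile config.toml. The [profiles.<name>] layout and the Fireworks URL are assumptions for illustration, not a verified schema:

# ~/.deepseek/config.toml (illustrative; exact keys may differ)
api_key  = "sk-..."          # blocked from project-level overlays
provider = "deepseek"

[profiles.fireworks]         # select with --profile fireworks or DEEPSEEK_PROFILE
provider = "fireworks"
base_url = "https://api.fireworks.ai/inference/v1"   # assumed endpoint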

Skills

Each skill is a directory with a SKILL.md file. DeepSeek TUI checks workspace directories (.agents/skills, ./skills, .opencode/skills, .claude/skills, .cursor/skills) and global directories (~/.agents/skills, ~/.claude/skills, ~/.deepseek/skills). Install a community skill from GitHub:

/skill install github:<owner>/<repo>

Manage skills with /skills (list), /skill <name> (activate), /skill new (scaffold), /skill update, /skill uninstall, and /skill trust.
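
A minimal sketch of ./skills/commit-style/SKILL.md, assuming a conventional frontmatter-plus-instructions layout; the skill name is hypothetical and the exact fields DeepSeek TUI parses are not specified here:

---
name: commit-style
description: Enforce conventional commit messages when staging changes
---
When committing, write messages as <type>(<scope>): <summary> and keep
the subject line under 72 characters.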

MCP Integration

Configure MCP servers in ~/.deepseek/mcp.json. List or validate connected servers at any time:

deepseek mcp list
deepseek mcp validate
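
A minimal ~/.deepseek/mcp.json sketch, assuming the common MCP client schema of named servers with a command and args (the keys DeepSeek TUI expects may differ):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  }
}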

Zed / ACP Integration

Add DeepSeek TUI as a custom agent server in Zed:

{
  "agent_servers": {
    "DeepSeek": {
      "type": "custom",
      "command": "deepseek",
      "args": ["serve", "--acp"],
      "env": {}
    }
  }
}

The first ACP slice supports new sessions and prompt responses. Tool-backed editing and checkpoint replay are not yet exposed through ACP.

Pros

  • Free and open source under the MIT license.
  • Runs on Linux, macOS, and Windows.
  • Four install options: npm, Cargo, Homebrew, and direct binary download.
  • ARM64 Linux support.
  • 1M-token context window on both models.
  • Real-time reasoning block streaming.
  • Auto mode selects model and thinking level per turn.
  • Per-turn cost tracking with cache hit/miss breakdown.
  • Session save, resume, and fork by turn.
  • Workspace rollback with side-git snapshots.
  • LSP diagnostics after every file edit.
  • MCP server support for extended tooling.
  • Works with NVIDIA NIM, Fireworks, SGLang, vLLM, and Ollama.
  • Skills system for composable, installable instruction packs.
  • HTTP/SSE runtime API for headless agent workflows.

Cons

  • Requires a DeepSeek API key (or a supported third-party provider key).
  • Terminal-only; no web or GUI interface.
  • Prebuilt Windows binaries cover x64 only.

FAQs

Q: Is DeepSeek TUI free to use?
A: The TUI is free and open source under the MIT license. Running the agent costs money only on the API side. DeepSeek charges per token, with deepseek-v4-flash significantly cheaper than Pro. Prefix caching further reduces input costs for sessions with long stable context.

Q: Does DeepSeek TUI work on Windows?
A: Prebuilt binaries are available for Windows x64 via GitHub Releases, npm, and Scoop. The Scoop manifest updates independently and can lag behind the GitHub and npm releases. Download directly from GitHub Releases or use npm when you need the newest version.

Q: What is the difference between Plan, Agent, and YOLO modes?
A: Plan mode is read-only. The agent explores the workspace, reads relevant files, and proposes a plan before making any changes. Agent mode is the default: the agent runs multi-step tool use and requests approval at each tool call. YOLO mode auto-approves all tool calls in a trusted workspace and skips approval prompts entirely.

Q: What does auto mode actually do?
A: Auto mode sends a small pre-turn call to deepseek-v4-flash before the main request. The router examines the current prompt and recent context, then picks both the model (Flash or Pro) and the reasoning effort level (off, high, or max). Simple turns stay on Flash with no reasoning overhead. Coding tasks, debugging, architecture review, and ambiguous multi-step work can route to Pro with higher thinking. The TUI displays which route ran, and cost tracking charges against the actual model used.

Q: Can I use DeepSeek TUI with a self-hosted model?
A: Yes. DeepSeek TUI supports self-hosted endpoints through SGLang, vLLM, and Ollama. Set the base URL for your local inference server via the corresponding environment variable or config entry, then pass the provider flag at launch. SGLang and vLLM accept any OpenAI-compatible /v1 endpoint.

Q: How does workspace rollback work?
A: DeepSeek TUI creates side-git snapshots before and after each turn. These live under ~/.deepseek/snapshots/ and never interact with your project’s own .git directory. Run /restore or revert_turn to roll back file changes from a specific turn. Sessions can be forked at a chosen turn with deepseek fork <SESSION_ID>.
