Sora

The Sora MCP Server integrates with OpenAI’s latest Sora 2 API to help you generate and remix videos directly from an MCP-compatible client.

You can create new video clips from text prompts, remix existing videos, and track generation progress without leaving your workflow.

Features

  • 🎥 Create Videos: Generate video clips directly from text descriptions using Sora 2.
  • 🔀 Remix Videos: Take an existing video and alter it with a new prompt to create variations.
  • 📊 Track Job Status: Check the real-time progress of your video generation tasks.
  • 💾 Auto-Download Videos: A handy tool automatically saves completed videos to your local machine.
  • 🔌 Dual Server Architecture: Includes separate implementations for local (stdio) and remote (HTTP) connections, so you can pick the transport that fits your client.

How To Use It

1. Clone the repository and install the necessary dependencies:

git clone https://github.com/Doriandarko/sora-mcp
cd sora-mcp
npm install

2. Build the project.

npm run build

3. This project contains two different MCP server implementations for distinct situations:

  • stdio-server.ts: Designed for local clients like Claude Desktop. It communicates via standard input/output (stdio), which is fast, secure, and doesn’t require a network connection.
  • server.ts: Built for remote access from web-based tools or clients like VS Code and Cursor. It runs as an HTTP server on port 3000.

4. Connecting to your MCP clients

For Claude Desktop:
You need to create a configuration file to tell the Claude app how to run the server.

  1. Copy the example config file claude_desktop_config.example.json to ~/Library/Application Support/Claude/claude_desktop_config.json.
  2. Open the new file and update the args path to the absolute path of the sora-mcp/dist/stdio-server.js file on your machine.
  3. Add your OpenAI API key to the OPENAI_API_KEY field.
  4. Restart the Claude Desktop application. The Sora tools will now be available.
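Assuming the example file follows Claude Desktop's standard mcpServers format, the finished config should look roughly like this (the "sora" key name, the placeholder path, and the placeholder key are illustrative, not literal values):

```json
{
  "mcpServers": {
    "sora": {
      "command": "node",
      "args": ["/absolute/path/to/sora-mcp/dist/stdio-server.js"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```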

For HTTP Clients (VS Code, MCP Inspector, etc.):
Run the server in development mode.

npm run dev

Or run it in production mode.

npm run build
npm start

You can then connect your client to http://localhost:3000/mcp.
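For example, in VS Code you can point an entry in .vscode/mcp.json at the running server. The "sora" name is arbitrary, and the exact file format follows VS Code's MCP configuration and may differ in other clients:

```json
{
  "servers": {
    "sora": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```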

5. All available tools

  • create-video: Generates a video from a prompt. You can specify the model, duration, and resolution.
  • get-video-status: Checks the progress of a video generation job using its video_id.
  • list-videos: Retrieves a paginated list of all your video generation jobs.
  • download-video: Returns a curl command to download a completed video file manually.
  • save-video: Automatically downloads and saves a completed video to your computer. This is often more convenient than using download-video.
  • remix-video: Creates a new video based on an existing one with a new text prompt.
  • delete-video: Removes a video job and its associated files.
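A typical end-to-end flow with these tools is create → poll → save: call create-video, poll get-video-status until the job completes, then call save-video. The sketch below models that polling loop in TypeScript against a simulated status function; the `VideoStatus` values and `pollUntilDone` helper are illustrative, not the server's actual API:

```typescript
// Illustrative sketch of the create -> poll -> save workflow.
// `fetchStatus` stands in for a real `get-video-status` call.
type VideoStatus = "queued" | "in_progress" | "completed" | "failed";

async function pollUntilDone(
  fetchStatus: () => Promise<VideoStatus>,
  intervalMs = 5000,
  maxAttempts = 60,
): Promise<VideoStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    // Stop polling once the job reaches a terminal state.
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for video generation");
}

// Simulated job that completes on the third poll.
const statuses: VideoStatus[] = ["queued", "in_progress", "completed"];
let call = 0;
pollUntilDone(async () => statuses[Math.min(call++, statuses.length - 1)], 10)
  .then((final) => console.log(`final status: ${final}`)); // prints "final status: completed"
```

Once `pollUntilDone` resolves with "completed", it is safe to invoke save-video (or the curl command from download-video) for that job.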

FAQs

Q: Do I need to manually download the video after it’s generated?
A: You can, using the download-video tool that gives you a curl command. However, a more direct workflow is to use get-video-status to confirm the video is complete, and then use the save-video tool to automatically download it to your specified directory.

Q: How do I update the server path in the Claude Desktop configuration if I move the project folder?
A: You must edit the claude_desktop_config.json file located in Claude’s Application Support directory. Update the absolute path in the args field to point to the new location of dist/stdio-server.js and restart Claude.

Q: What do I need to start using the Sora MCP Server?
A: You’ll need Node.js 18+, an OpenAI API key with Sora API access, and an MCP client like Claude Desktop or Cursor. The Sora API requires separate approval from OpenAI beyond a standard API account.

Q: How much does it cost to generate videos?
A: You pay standard OpenAI API rates for Sora usage based on video duration and resolution. The MCP server itself is open-source and free, with pricing determined by your OpenAI API consumption.


