Memo
Memo MCP is an MCP server that saves and retrieves AI conversation context. You can use it to hand off conversations between different AI agents like Claude Code, Gemini CLI, and Codex.
You can start a session on a laptop and continue it on a desktop machine. When you issue the memo set command, the agent saves structured context such as goals, completed tasks, pending tasks, decisions, and relevant files, and the system returns a short ID. You can use this ID anywhere to restore the exact context.
Features
- Saves AI conversation context into Upstash Redis.
- Retrieves previous conversation states using a unique short ID.
- Stores structured snapshots containing goals, completed tasks, pending tasks, key decisions, and relevant file paths.
- Encrypts stored data at rest and in transit.
- Expires data automatically after 24 hours by default.
- Accepts custom expiration times via a command-line flag.
- Supports self-hosting on custom infrastructure via Vercel.
Use Cases
- Transfer an active debugging session from Claude to Cursor to use a different AI model.
- Pause a complex refactoring task at the end of the workday and resume it the next morning.
- Migrate an ongoing development conversation from a local laptop to a remote desktop workstation.
- Share a specific project state with another developer by passing the generated context ID.
- Host the API and storage on private infrastructure to maintain strict data ownership.
Installation
Claude Code
claude mcp add memo -- npx -y @upstash/memo
The -y flag tells npx to accept the installation prompt automatically.
OpenCode
{
"mcp": {
"memo": {
"type": "local",
"command": ["npx", "-y", "@upstash/memo"]
}
}
}
Claude Desktop and Cursor
{
"mcpServers": {
"memo": {
"command": "npx",
"args": ["-y", "@upstash/memo"]
}
}
}
Basic Usage
Type memo set in your AI chat interface to save the current context. The AI summarizes the conversation and returns an ID string like 4tJ630XqhCV5gQelx98pu. The agent stores a structured snapshot containing the goal, completed tasks, pending tasks, key decisions, and relevant file paths. This keeps restored conversations focused and avoids reloading raw chat history.
Type memo get <id> to restore a previous context. Replace <id> with your specific string. The AI loads the previous context and continues the conversation.
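The snapshot the agent stores can be pictured as a small structured object. Here is a hedged TypeScript sketch; the field names are illustrative and not taken from the actual @upstash/memo source:

```typescript
// Hypothetical shape of a saved context snapshot (illustrative field names,
// not the actual @upstash/memo schema).
interface ContextSnapshot {
  goal: string;
  completedTasks: string[];
  pendingTasks: string[];
  decisions: string[];
  relevantFiles: string[];
}

// Example of the kind of state that survives a handoff between agents.
const snapshot: ContextSnapshot = {
  goal: "Fix the flaky login test",
  completedTasks: ["Reproduced the failure locally"],
  pendingTasks: ["Stub the OAuth callback", "Re-run CI"],
  decisions: ["Mock the token endpoint instead of hitting staging"],
  relevantFiles: ["tests/login.spec.ts", "src/auth/client.ts"],
};

console.log(Object.keys(snapshot).length); // 5 fields
```

A restored agent reads this summary instead of the raw chat transcript, which is why handoffs stay small and focused.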
Self-Hosting Instructions
You can self-host the API and storage on your own infrastructure. The repository includes both the MCP server and the API.
Create a .env file in the project root to set up your environment. Add your Upstash Redis credentials to this file:
UPSTASH_REDIS_REST_URL=your-redis-url
UPSTASH_REDIS_REST_TOKEN=your-redis-token
Run this command to start the local development server:
npx vercel dev
Run this command to deploy the API to Vercel:
vercel
You must set the same environment variables in your Vercel project settings.
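Conceptually, the two API routes do little more than store a snapshot under a freshly generated short ID with a time-to-live, and look it up again. The in-memory sketch below shows that flow and is illustrative only; the real API persists to Upstash Redis, and none of these function names come from the actual source:

```typescript
// Minimal in-memory sketch of the set/get flow behind the two API routes.
// A Map stands in for Upstash Redis so this runs without credentials.
import { randomBytes } from "node:crypto";

const DEFAULT_TTL_MINS = 1440; // 24 hours, matching the documented default
const store = new Map<string, { value: string; expiresAt: number }>();

// "set": store the snapshot JSON under a generated short ID with a TTL.
function saveContext(json: string, ttlMins = DEFAULT_TTL_MINS): string {
  const id = randomBytes(16).toString("base64url").slice(0, 21);
  store.set(id, { value: json, expiresAt: Date.now() + ttlMins * 60_000 });
  return id;
}

// "get": return the snapshot if the ID exists and has not expired.
function getContext(id: string): string | null {
  const entry = store.get(id);
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.value;
}

const id = saveContext(JSON.stringify({ goal: "demo" }));
console.log(getContext(id) !== null); // true
```

In the hosted version, the Map operations map onto Redis set-with-expiry and get calls, which is why expired contexts simply disappear.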
Modify the index.ts file to point the MCP server to your custom API. The default server uses https://memo-upstash.vercel.app. Update these variables in index.ts:
const GET_URL = "https://your-api.vercel.app/api/get";
const SET_URL = "https://your-api.vercel.app/api/set";
Configuration Options
The server accepts arguments that modify its behavior. The project is released under the MIT License.
--ttl-mins: Sets the expiration time for stored context data in minutes. The default value is 1440 (24 hours). You can pass this argument in your configuration file. Example usage:
"args": ["-y", "@upstash/memo", "--ttl-mins", "4320"]
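One way such a flag could be handled, shown here as a hypothetical sketch rather than the actual @upstash/memo implementation, is to read it from the process arguments and convert minutes into the seconds value a Redis expiry expects:

```typescript
// Hypothetical parsing of a --ttl-mins flag from argv (illustrative only;
// the real @upstash/memo argument handling may differ).
function parseTtlMins(argv: string[], fallback = 1440): number {
  const i = argv.indexOf("--ttl-mins");
  if (i === -1 || i + 1 >= argv.length) return fallback;
  const parsed = Number(argv[i + 1]);
  // Fall back to the default on missing, non-numeric, or non-positive values.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}

const ttlMins = parseTtlMins(["-y", "@upstash/memo", "--ttl-mins", "4320"]);
console.log(ttlMins * 60); // seconds passed as the expiry: 259200
```

Falling back to the default on malformed input keeps a typo in the config from silently disabling expiration.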
FAQs
Q: How long are my saved contexts stored?
A: By default, contexts expire after 24 hours. You can change this with the --ttl-mins option when starting the server.
Q: Can I use my own Redis instance?
A: Yes, self‑hosting allows you to use your own Upstash Redis instance by setting the appropriate environment variables.
Q: What happens if I lose the ID?
A: The ID is the only way to retrieve a saved context. If you lose it, the context cannot be restored. Make sure to copy the ID after running memo set.
Q: Is my conversation data encrypted?
A: Yes, data stored in Upstash Redis is encrypted at rest and in transit. If you self‑host, encryption depends on your infrastructure.
Q: Can I save and restore context across different AI agents?
A: Yes, as long as both agents have the Memo MCP server installed. Save in one, restore in another.