NotebookLM MCP
The NotebookLM MCP Server connects MCP-compatible AI agents, like Claude Code or Cursor, directly to Google’s NotebookLM. It forces the AI to get answers from a knowledge base that you create and control.
You can use this MCP server to stop your AI from hallucinating. Instead of inventing plausible-sounding but non-existent functions when it doesn’t know an answer, the AI agent queries your private NotebookLM notebooks.
This means the code it writes is based on ground truth, not a guess. It’s useful for working with internal tools, new libraries, or any documentation that the AI’s training data wouldn’t include.
Key Features
- 🔍 Zero Hallucination Guarantee – NotebookLM refuses to answer questions not covered in your uploaded documents.
- 🤖 Autonomous Research – AI agents automatically ask follow-up questions to build comprehensive understanding.
- 📚 Smart Library Management – Save and tag multiple notebooks for automatic context selection.
- 🔄 Cross-Tool Compatibility – Works with Claude Code, Cursor, Codex, and other MCP clients.
- ⚡ Minimal Token Usage – Avoids expensive repeated document reading by leveraging pre-processed knowledge.
- 🔒 Local Authentication – Chrome automation runs locally with your credentials staying on your machine.
Use Cases
- Building with New APIs – Upload fresh documentation for libraries with rapidly changing APIs and get current, accurate implementation details without outdated examples or hallucinated functions.
- Internal Codebase Navigation – Create notebooks from your company’s internal documentation and codebase explanations, allowing agents to understand proprietary systems and conventions.
- Multi-Source Research Projects – Combine documentation from PDFs, websites, GitHub repositories, and YouTube tutorials into single notebooks for comprehensive project research.
- Legacy System Maintenance – Document older systems with sparse documentation and enable AI assistants to work effectively with outdated but critical codebases.
How To Use It
1. Install the MCP server for your AI agent.
For Claude Code:
claude mcp add notebooklm npx notebooklm-mcp@latest
For Codex:
codex mcp add notebooklm -- npx notebooklm-mcp@latest
For Cursor:
You’ll need to add the configuration to your ~/.cursor/mcp.json file.
{
  "mcpServers": {
    "notebooklm": {
      "command": "npx",
      "args": ["-y", "notebooklm-mcp@latest"]
    }
  }
}
For Gemini:
gemini mcp add notebooklm npx notebooklm-mcp@latest
For amp:
amp mcp add notebooklm -- npx notebooklm-mcp@latest
For VS Code:
code --add-mcp '{"name":"notebooklm","command":"npx","args":["notebooklm-mcp@latest"]}'
2. After the server is added to your MCP client, authenticate it with your Google account. This is a one-time step. Just type this into your agent’s chat:
"Log me in to NotebookLM"
A Chrome window will pop up for you to log in.
3. With authentication done, you can build your knowledge base. Go to the NotebookLM website, create a new notebook, and upload your source material like PDFs, markdown files, websites, or even YouTube videos. Once your sources are added, click the share button and copy the “Anyone with link” URL.
4. Now, you can direct your AI agent to use it. Tell your agent something like:
"I'm building with [library]. Here's my NotebookLM: [link]"
From that point on, the agent will use that notebook as its source of truth.
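If you manage editor configuration from scripts, the Cursor entry from step 1 can be merged into an existing `mcp.json` without clobbering other servers you have registered. A minimal sketch (the helper function and local file path here are illustrative, not part of the MCP server itself):

```python
import json
from pathlib import Path

def add_notebooklm_server(config_path: Path) -> dict:
    """Merge the notebooklm entry into an mcp.json, keeping existing servers."""
    config = {}
    if config_path.exists():
        config = json.loads(config_path.read_text())
    servers = config.setdefault("mcpServers", {})
    # Same entry as the manual Cursor setup above.
    servers["notebooklm"] = {
        "command": "npx",
        "args": ["-y", "notebooklm-mcp@latest"],
    }
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(config, indent=2) + "\n")
    return config

# For the real file, point this at Path.home() / ".cursor" / "mcp.json".
merged = add_notebooklm_server(Path("mcp.json"))
print(sorted(merged["mcpServers"]))
```

Because the helper reads the file first and only adds one key under `mcpServers`, re-running it is safe and any other servers you have configured are preserved.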
5. Available commands
- Authenticate: Use “Open NotebookLM auth setup” or “Log me in to NotebookLM” to open a Chrome window for Google login.
- Add Notebook: Say “Add [link] to library” to save a new notebook with its metadata.
- List Notebooks: Use “Show our notebooks” to see a list of all your saved notebooks.
- Research First: Tell the agent “Research this in NotebookLM before coding” to initiate a multi-question research session.
- Select Notebook: Say “Use the React notebook” to set the active notebook for the current conversation.
- Update Notebook: Use “Update notebook tags” to modify the metadata of a saved notebook.
- Remove Notebook: Say “Remove [notebook] from library” to delete a notebook from your local library.
- View Browser: Use “Show me the browser” to open a Chrome window and watch the live conversation between your agent and NotebookLM.
- Fix Authentication: If you have login issues, use “Repair NotebookLM authentication” to clear credentials and re-authenticate.
- Switch Account: Say “Re-authenticate with different Google account” to log out and log in with a new Google account.
- Clean Restart: Use “Run NotebookLM cleanup” to remove all server data for a fresh start.
- Keep Library on Cleanup: Add “Cleanup but keep my library” to preserve your saved notebooks during the cleanup process.
- Delete All Data: The command “Delete all NotebookLM data” performs a complete removal of all server-related information.
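Under the hood, your agent translates each of these natural-language commands into an MCP `tools/call` JSON-RPC request sent to the server over stdio. A sketch of what such a message looks like (the tool name `add_notebook` and its arguments are hypothetical; the server's actual tool names may differ):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and arguments for "Add [link] to library".
msg = make_tool_call(1, "add_notebook", {
    "url": "https://notebooklm.google.com/notebook/abc123",
    "tags": ["react", "frontend"],
})
print(msg)
```

This is why the same server works across Claude Code, Cursor, Codex, and the other clients listed above: they all speak the same wire format.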
FAQs
Q: Is it really zero hallucinations?
A: Yes. NotebookLM is built to answer questions based only on the documents you upload. If the information isn’t present in the source material, it will state that it cannot answer the question.
Q: How secure is this process?
A: The automation and all interactions happen locally on your machine. Your Google credentials are not transmitted outside of your local Chrome instance. For extra security, you could use a dedicated Google account just for this purpose.
Q: Can I see what the agent is doing?
A: You can. Just send the command "Show me the browser". This will open the Chrome window so you can watch the conversation between your CLI agent and NotebookLM in real time.
Q: How does this differ from local RAG setups?
A: NotebookLM uses Gemini’s processing to create semantic understanding across all documents instantly, without vector database setup, embedding configuration, or chunking strategy tuning.
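For a sense of the setup work a local RAG stack involves, here is a toy retrieval pipeline (bag-of-words vectors stand in for real embeddings; this sketches the chunk-embed-rank loop you would otherwise tune, and makes no claim about NotebookLM's internals):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Trivial bag-of-words 'embedding' -- a real setup would use a model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the document chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = [
    "useEffect runs after render and can return a cleanup function",
    "useState returns the current state and a setter function",
]
print(retrieve("how does useState work", chunks))
```

Every knob here (chunk size, embedding model, similarity metric, index) is something a local pipeline makes you choose; the trade-off with NotebookLM is giving up that control in exchange for skipping the tuning.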
Q: Can I use multiple notebooks for different projects?
A: Yes, the library system supports saving multiple notebooks with tags. Your AI agent automatically selects the relevant notebook based on your current task context.
Q: What happens when I hit NotebookLM’s rate limits?
A: The server supports quick account switching, allowing you to continue research by authenticating with an alternative Google account when limits are reached.