Terraform MCP Server

The Terraform MCP Server enables your AI assistant to integrate seamlessly with Terraform Registry APIs. It enhances Infrastructure as Code (IaC) development by providing advanced automation and interaction capabilities.

Features

  • 🔍 Automated provider and module discovery
  • 📊 Data extraction and analysis from Terraform Registry
  • 🛠️ Detailed information retrieval for provider resources and data sources
  • 📚 Comprehensive exploration of Terraform modules
  • 🐳 Docker-ready for easy deployment
  • 🔧 Configurable for various development environments

Use Cases

  • A DevOps engineer quickly searches for and compares multiple Terraform modules to find the best fit for a new cloud infrastructure project.
  • A developer automates the process of keeping Terraform provider documentation up-to-date in their internal wiki by regularly fetching the latest information.
  • A team lead analyzes usage patterns and popularity of various Terraform resources across their organization to make informed decisions about standardization.
  • An IaC consultant uses the server to rapidly gather detailed information about specific provider resources, streamlining the process of crafting custom Terraform configurations for clients.

How to Use It

1. Installation:

    • Ensure Docker is installed and running on your system.
    • Add the following configuration to your VS Code User Settings (JSON) file:
    {
      "mcp": {
        "servers": {
          "terraform": {
            "command": "docker",
            "args": [
              "run",
              "-i",
              "--rm",
              "hashicorp/terraform-mcp-server"
            ]
          }
        }
      }
    }
    • Alternatively, add a similar configuration (without the mcp key) to .vscode/mcp.json in your workspace.
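For the workspace variant, the same server entry sits at the top level (no `mcp` wrapper). A minimal sketch of `.vscode/mcp.json`, following the configuration shown above:

```json
{
  "servers": {
    "terraform": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "hashicorp/terraform-mcp-server"]
    }
  }
}
```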

2. Available tools. The server provides four main tools across two toolsets:

    Providers Toolset:

    • resolveProviderDocID: Finds available documentation for a specific provider.
    • getProviderDocs: Fetches complete documentation for a provider resource, data source, or function.

    Modules Toolset:

    • searchModules: Searches for Terraform modules based on specified queries.
    • moduleDetails: Retrieves detailed documentation for a specific module.
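Under the hood, an MCP client invokes these tools via JSON-RPC `tools/call` requests, per the MCP specification. A sketch of what a `searchModules` call might look like on the wire (the argument name `moduleQuery` is an illustrative assumption, not confirmed from the server's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "searchModules",
    "arguments": { "moduleQuery": "vpc" }
  }
}
```

Your AI assistant issues these calls for you; you normally never write them by hand.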

3. Building from source:

    If you prefer not to use Docker:

    • Clone the repository: git clone https://github.com/hashicorp/terraform-mcp-server.git
    • Navigate to the directory: cd terraform-mcp-server
    • Build the binary: make build
    • Update your server configuration to use the built executable.

    Local Docker image build:

    • Clone the repository as above.
    • Build the Docker image: make docker-build
    • Update your server configuration to use the local image.
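When pointing the configuration at a locally built binary, the `command` becomes the path to the executable instead of `docker`. A hedged sketch (the path is a hypothetical placeholder; substitute wherever `make build` put the binary on your machine):

```json
{
  "mcp": {
    "servers": {
      "terraform": {
        "command": "/path/to/terraform-mcp-server/terraform-mcp-server"
      }
    }
  }
}
```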

FAQs

Q: How does the Terraform MCP Server differ from the standard Terraform Registry?
A: The Terraform MCP Server provides programmatic access to Terraform Registry data, allowing for automation and integration into development workflows. It's designed for querying and retrieving information rather than hosting modules or providers.

Q: Can I use this server to publish my own Terraform modules?
A: No, this server is focused on retrieving and analyzing existing data from the Terraform Registry. For publishing modules, you should use the standard Terraform Registry interfaces.

Q: Is there a limit to how many queries I can make to the server?
A: The server itself doesn't impose limits, but it's subject to the Terraform Registry's rate limits. Be mindful of your usage, especially in automated scenarios.

Q: How frequently is the data updated?
A: The server fetches data in real-time from the Terraform Registry, so the information is as current as the registry itself.


