How MCP Connects AI Tools to Your Dev Environment

Understand Model Context Protocol (MCP), how it connects AI tools to your development environment, and why it matters for AI-assisted coding workflows.

FramingUI Team · 5 min read

Every AI coding session starts from zero. Paste in your schema. Describe your folder structure. Copy-paste the error. Explain the auth flow again. It works, but it's friction you're adding manually every single time.

Model Context Protocol (MCP) removes that friction. It's an open standard that lets AI assistants connect to your actual development environment—files, databases, APIs—and read them directly, rather than waiting for you to relay information through the chat.

The Architecture

MCP uses a client-server model with three parts. An MCP server runs locally and exposes resources through a defined interface. An MCP client (Claude Code, Cursor, or any compatible tool) requests those resources. The protocol itself is a set of standardized JSON-RPC messages that define how requests and responses are structured.

┌──────────────┐        MCP Protocol         ┌──────────────┐
│              │◄───────────────────────────►│              │
│  AI Client   │  Request: "read routes.ts"  │  MCP Server  │
│ (Claude Code)│  Response: file contents    │ (filesystem) │
│              │                             │              │
└──────────────┘                             └──────────────┘

The key property is standardization. Before MCP, every AI tool that wanted to read your filesystem needed a custom integration. Every tool that wanted to query a database needed its own plugin. MCP makes the interface generic—any MCP server works with any MCP client. Write the server once, use it from any compatible tool.
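On the wire, those standardized messages are plain JSON-RPC 2.0. A rough sketch of the request a client sends to read one resource (the shape is simplified; the full schema, including initialization and capability negotiation, lives in the MCP specification):

```typescript
// Sketch of the JSON-RPC 2.0 framing MCP messages use. Simplified:
// the real protocol also covers tools, prompts, and notifications.
interface McpRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Build the resources/read request a client would send for one file
function readResourceRequest(id: number, uri: string): McpRequest {
  return { jsonrpc: "2.0", id, method: "resources/read", params: { uri } };
}

console.log(JSON.stringify(readResourceRequest(1, "file:///src/routes.ts")));
```

The server replies with a response object carrying the same `id` and the resource contents, which is what lets any client and any server interoperate without custom glue.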

Why This Matters in Practice

The difference shows up clearly in a concrete example. You ask an AI to add authentication middleware to your API.

Without MCP, the AI generates something generic—standard Express.js JWT middleware, because that's what most training data looks like. Your project uses Fastify with a custom session plugin. The generated code doesn't fit. You explain your setup, the AI regenerates, you iterate.

With MCP, the AI reads your route definitions before generating anything. It sees your Fastify setup, your existing plugin structure, and the places authentication needs to be applied. The first draft fits your architecture.

The quality improvement compounds across a session. Every follow-up question gets a more accurate answer because the AI isn't working from a description—it's working from the actual files.

Connecting Claude Code to Your Filesystem

Claude Code supports MCP natively. The configuration lives in .mcp.json at your project root (check this file into version control so teammates get the same setup automatically):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}

After restarting Claude Code, it can list your files, read their contents, and use that context when generating or modifying code. No more manual paste-and-explain.

Connecting to a Database

Schema explanations are one of the biggest sources of context overhead in AI-assisted development. With a database MCP server, the AI queries your schema directly:

npm install -g @modelcontextprotocol/server-postgres

{
  "mcpServers": {
    "database": {
      "command": "mcp-server-postgres",
      "args": ["--connection", "postgresql://localhost/mydb"]
    }
  }
}
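Behind a question like "what foreign keys does the orders table have?", the server issues read-only SQL against the information schema. A sketch of one such query for PostgreSQL (the server decides the exact SQL it runs):

```sql
-- One way to list foreign keys on the orders table (PostgreSQL)
SELECT
  tc.constraint_name,
  kcu.column_name,
  ccu.table_name  AS references_table,
  ccu.column_name AS references_column
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage ccu
  ON tc.constraint_name = ccu.constraint_name
WHERE tc.table_name = 'orders'
  AND tc.constraint_type = 'FOREIGN KEY';
```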

"What foreign keys does the orders table have?" becomes a direct query, not a description task. The AI runs read-only SQL and gives answers grounded in your actual schema.

Multiple Servers Simultaneously

An AI client can hold connections to multiple MCP servers at once. A typical setup might include a filesystem server for code, a database server for schema queries, and a Git server for repository context. Each server stays focused on one domain; the AI orchestrates them when answering complex questions that span multiple concerns.
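A combined .mcp.json for that setup might look like the following sketch. The filesystem and Postgres entries mirror the configs shown earlier; the Git server entry assumes the reference mcp-server-git package run via uvx, so check the package you actually use:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "database": {
      "command": "mcp-server-postgres",
      "args": ["--connection", "postgresql://localhost/mydb"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/path/to/project"]
    }
  }
}
```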

Building a Custom Server

The real power of MCP is extensibility. If you have internal APIs, proprietary tooling, or documentation that lives in non-standard formats, you can expose them through a custom server.

A minimal example exposing project documentation:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { readFile } from 'node:fs/promises';

const server = new McpServer({ name: 'docs-server', version: '1.0.0' });

// Expose one static resource at the docs://architecture URI
server.resource('architecture', 'docs://architecture', async (uri) => ({
  contents: [{
    uri: uri.href,
    mimeType: 'text/markdown',
    text: await readFile('./docs/architecture.md', 'utf8'),
  }],
}));

// Communicate with the client over stdio
await server.connect(new StdioServerTransport());

Configure this the same way as any other server. Now the AI can request docs://architecture directly, rather than waiting for you to paste the relevant section.
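For example, assuming the server above is bundled to ./tools/docs-server.js (an illustrative path), the .mcp.json entry follows the same pattern as the others:

```json
{
  "mcpServers": {
    "docs": {
      "command": "node",
      "args": ["./tools/docs-server.js"]
    }
  }
}
```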

MCP for Design Systems

For frontend developers, a design system MCP server closes the gap between token definitions and generated code. FramingUI ships @framingui/mcp-server for exactly this purpose.

Setup takes one command:

npx -y @framingui/mcp-server@latest init

This adds the server to your project's .mcp.json with the correct configuration:

{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}

Once connected, Claude Code queries your design tokens before generating UI components. Generated code uses your color tokens, spacing scale, and typography definitions rather than Tailwind defaults. The first draft of a component matches your design system without manual correction.

Security Considerations

MCP servers run with your user account's permissions. A few practices keep things safe: scope filesystem access to specific project directories, use read-only database connections for AI query access, and review the code of any MCP server you run rather than trusting it blindly. The official @modelcontextprotocol servers are open source and well-audited. Third-party servers deserve scrutiny before you grant them file or database access.
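In config terms, scoping looks like this sketch: the filesystem server sees only the src/ directory, and the database connection uses a dedicated read-only role (the ai_readonly user is an assumption; you would create and grant it yourself):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project/src"]
    },
    "database": {
      "command": "mcp-server-postgres",
      "args": ["--connection", "postgresql://ai_readonly@localhost/mydb"]
    }
  }
}
```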

MCP vs Other Context Approaches

Copy-pasting context into chat is manual, and there is no limit to what you can forget to include. RAG (retrieval-augmented generation) works well for large, static knowledge bases but requires indexing infrastructure and can serve stale content. MCP provides real-time, structured access: the AI sees the current state of your files and database, not a snapshot from last week's index run.

The tradeoff is setup overhead. MCP requires running servers. For occasional AI use, copy-paste is fine. For daily AI-assisted development work, the setup cost pays back within hours.

