AI generates UI fast. But generated code that ignores your design tokens costs you hours of manual cleanup after every session.
Model Context Protocol (MCP) solves this by letting AI tools query structured context from external systems — including your design system — instead of guessing from whatever examples appear in the training data.
What Changes When You Connect a Design System Over MCP
Without MCP, you paste your token guidelines into each prompt and hope the AI follows them consistently. With MCP, the AI queries your actual token data on demand.
The shift is concrete. Generated components reference your semantic color names instead of raw Tailwind palette values. Spacing uses your scale. Component variants match the contracts your primitives actually expose. You stop patching bg-blue-500 back to bg-primary after each generation session.
This is most valuable for teams that use Claude or Cursor daily for frontend work. The overhead of repeated prompt-based context injection accumulates fast.
What the Architecture Looks Like
The setup has four parts:
- Token data lives in machine-readable files inside your repo
- An MCP server exposes those tokens and component metadata as queryable resources
- AI clients (Claude, Cursor) call the MCP server during generation tasks
- Generated code uses retrieved design system values instead of hallucinated defaults
Your prompts stay shorter because context lives in the server, not the message.
The one prerequisite that matters most: your tokens need semantic naming before MCP can help. If your token library is just blue-500 with no semantic layer on top, the AI will still have to guess intent. Clean up the semantics first, then connect.
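As a sketch of what that semantic layer looks like (the structure and token names here are illustrative, not FramingUI's actual schema):

```json
{
  "palette": {
    "blue-500": "#3b82f6"
  },
  "semantic": {
    "primary": "{palette.blue-500}",
    "primary-foreground": "#ffffff"
  }
}
```

The raw palette still exists, but the AI-facing layer is the semantic one: `primary` carries intent, `blue-500` does not.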
Setting Up the MCP Server for Claude
Create an MCP configuration file in your project root. For Claude Code, the file is .mcp.json:
```json
{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}
```
Restart Claude Code after saving. The framingui key is the server name AI tools use to route requests. The -y flag tells npx to skip confirmation prompts.
If you use FramingUI, the init command wires this automatically:
npx -y @framingui/mcp-server@latest init
This creates .mcp.json, adds the provider bootstrap, and generates a local guide — without requiring you to configure paths manually.
Setting Up the MCP Server for Cursor
Cursor uses a similar registration format. In .cursor/mcp.json or the global Cursor config:
```json
{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}
```
Restart Cursor after saving so it discovers the server on startup.
Verifying the Connection
Run a quick capability check in each client before relying on the integration in production.
Test prompts to use:
- "List the available design tokens"
- "Show me the semantic color tokens for buttons"
- "Build a settings form using existing primitives and semantic tokens only"
If the AI cannot list tokens, check three things: the server command path, environment variable configuration, and whether your token files are valid JSON. Most connection failures trace to one of these.
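The third check is easy to automate. A minimal pre-flight sketch, assuming a flat `{ semantic: { name: value } }` token layout like the illustrative schema above (adapt the key names to your actual format):

```typescript
// Returns a list of problems with a token file's contents.
// Hypothetical helper for illustration, not part of any MCP server.
function checkTokenFile(raw: string): string[] {
  let data: unknown;
  try {
    data = JSON.parse(raw); // invalid JSON is the most common failure
  } catch (e) {
    return [`invalid JSON: ${(e as Error).message}`];
  }
  const semantic = (data as { semantic?: Record<string, unknown> }).semantic;
  if (!semantic || Object.keys(semantic).length === 0) {
    // Valid JSON but no semantic layer: the AI will guess intent anyway.
    return ["no semantic tokens found"];
  }
  return [];
}
```

Run it against every token source the MCP server reads before blaming the connection.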
Prompting with MCP Context
MCP delivers context, but you still need to constrain the output through the prompt. The most effective pattern:
- Use existing primitives only
- Use semantic tokens only — no raw palette values
- Do not invent new variants
- Include loading, empty, and error states
These four constraints, combined with live token retrieval, produce consistent generation across sessions.
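Put together, a shared team template might read like this (the wording is one workable phrasing, not a canonical prompt):

```text
Build <component> using the design system exposed over MCP.
Constraints:
- Use existing primitives only; do not create new components.
- Use semantic tokens only; no raw palette values (no bg-blue-500).
- Do not invent new variants; use only the variants the primitives expose.
- Include loading, empty, and error states.
```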
Three Pitfalls Worth Avoiding
Exposing raw palette values without semantic mapping. If your token context only exposes blue-500, AI still has to infer what "primary" means. Semantic mapping is where consistency comes from.
No component contracts. Token data alone is not enough. If your primitive API is vague — props that accept any string, no variant enum — AI will invent unsupported configurations.
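Tightening a contract is mostly a matter of narrowing prop types. A sketch of the difference (the `Button` names and variants are illustrative, not a real FramingUI API):

```typescript
// Loose contract: anything compiles, so the AI invents configurations.
// interface ButtonProps { variant?: string; size?: string; }

// Tight contract: the type enumerates what actually exists, so a
// generated call site with an invented variant fails to compile.
type ButtonVariant = "primary" | "secondary" | "ghost";
type ButtonSize = "sm" | "md" | "lg";

interface ButtonProps {
  variant: ButtonVariant;
  size?: ButtonSize; // component defaults to "md"
  disabled?: boolean;
}

// A generated call site is now mechanically checkable:
const ok: ButtonProps = { variant: "primary", size: "sm" };
// const bad: ButtonProps = { variant: "brand-blue" }; // compile error
```

The enum is also what the MCP server can expose as component metadata, so the AI queries the allowed variants instead of inferring them.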
Stale token source files. If your MCP server reads outdated files, generated code will diverge from live UI. Pin the server to the same token source your components build against.
What a Team Rollout Looks Like
The fastest path from zero to working integration is five steps:
- Audit token schema quality — confirm semantic naming exists
- Add .mcp.json with the server entry to your repo
- Run three representative component generation tests
- Add a lint rule that catches hardcoded visual values
- Publish one shared prompt template the team uses for UI generation
Each step produces a measurable checkpoint. The full sequence fits inside a single sprint.
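The lint rule in step four can start as a plain regex scan before you invest in a full ESLint plugin. A minimal sketch, assuming Tailwind-style class names (the palette list is illustrative and incomplete):

```typescript
// Flags raw palette classes like "bg-blue-500" that should be
// semantic tokens like "bg-primary". A starting point, not a
// substitute for a proper lint rule.
const RAW_PALETTE = /\b(?:bg|text|border)-(?:red|blue|green|gray|slate)-\d{2,3}\b/g;

function findHardcodedValues(source: string): string[] {
  return source.match(RAW_PALETTE) ?? [];
}
```

Wire it into CI so regressions from a generation session fail the build instead of reaching review.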
MCP design system integration is not a future optimization. For teams that use Claude or Cursor for UI work, it is what separates fast-and-noisy generation from fast-and-reviewable output.