The average front-end developer spends hours per feature translating Figma specs into code, then more hours fixing the parts the translation got wrong. The bottleneck isn't skill—it's that Figma speaks in raw values while code speaks in tokens, and nothing connects the two automatically.
Design tokens are that connection. When a designer works with Figma Variables and those variables export as structured JSON, an AI assistant can read the full design system and generate components that match—without any manual pixel-checking on either side.
Why the Old Workflow Breaks
Opening Figma's Inspect panel gives you raw CSS values: #3B82F6, 24px, Inter SemiBold 16px. Those values have no context. The hex code doesn't tell a developer whether it's the primary brand color, an info badge tint, or a one-off choice from three months ago.
The developer has to guess. They pick the closest Tailwind class, estimate the spacing, and hope the result passes design review. It rarely does on the first pass. The designer responds: "That blue is wrong and the spacing is off." The cycle repeats.
The deeper problem is structural. Design lives in Figma. Code lives in the repository. Without a shared data format, the two drift apart the moment anyone makes a change on either side.
Tokens as the Bridge
Figma Variables map directly to design token structure. A color variable named color/primary/500 becomes a JSON entry with a value, a type, and optionally a description. A spacing variable named spacing/md becomes a dimension token. When you export those variables and commit the JSON, you have a single source of truth that both platforms can reference.
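For concreteness, here is a sketch of what such an export might contain. This assumes the W3C Design Tokens Community Group draft format (the $value/$type keys); the exact shape depends on your export tool:

```json
{
  "color": {
    "primary": {
      "500": {
        "$value": "#3B82F6",
        "$type": "color",
        "$description": "Primary brand color"
      }
    }
  },
  "spacing": {
    "md": {
      "$value": "16px",
      "$type": "dimension"
    }
  }
}
```

The nesting mirrors the slash-separated variable names in Figma: color/primary/500 becomes the path color → primary → 500.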
The workflow becomes:
- Designer maintains Figma Variables for all color, spacing, and typography decisions
- Export tokens to tokens.json via the Figma REST API or Tokens Studio
- Commit tokens.json to the repository
- AI reads tokens via FramingUI's MCP server, with no copy-pasting into prompts
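To make the token-to-code step concrete, here is a minimal sketch of a helper (hypothetical, not part of FramingUI) that flattens a tokens.json-style object into CSS custom properties. It assumes the $value convention used by Tokens Studio-style exports:

```typescript
// Minimal token shape assumed for this sketch: leaf nodes carry a $value.
type TokenNode = { $value?: string } & {
  [key: string]: TokenNode | string | undefined;
};

// Recursively flatten { color: { primary: { "500": { $value: "#3B82F6" } } } }
// into pairs like ["--color-primary-500", "#3B82F6"].
function flattenTokens(node: TokenNode, path: string[] = []): [string, string][] {
  if (typeof node.$value === "string") {
    return [[`--${path.join("-")}`, node.$value]];
  }
  return Object.entries(node)
    .filter(([key]) => !key.startsWith("$"))
    .flatMap(([key, child]) => flattenTokens(child as TokenNode, [...path, key]));
}

const tokens: TokenNode = {
  color: { primary: { "500": { $value: "#3B82F6" } } },
  spacing: { md: { $value: "16px" } },
};

const cssVars = Object.fromEntries(flattenTokens(tokens));
console.log(cssVars);
// { "--color-primary-500": "#3B82F6", "--spacing-md": "16px" }
```

Variables on :root generated this way give both hand-written CSS and a Tailwind theme a single place to read token values from.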
When you ask Claude Code to build a card component, it queries the MCP server, sees your actual token names, and generates code using those names:
<div className="bg-surface-primary p-6 rounded-md shadow-sm">
  <h3 className="text-lg font-semibold text-gray-900">Title</h3>
  <p className="text-sm text-gray-600 mt-4">Content</p>
</div>
Every class maps to a token you defined. No hardcoded #3B82F6 drifting into the codebase.
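One common way to wire exported tokens into those Tailwind classes is to point the theme at token-backed CSS variables. A sketch of a tailwind.config fragment, assuming a build step emits the variables from tokens.json (the specific names here are illustrative):

```typescript
// tailwind.config.ts -- sketch: map Tailwind utility names to CSS custom
// properties so bg-surface-primary resolves to your exported token value.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // bg-surface-primary -> var(--color-surface-primary)
        "surface-primary": "var(--color-surface-primary)",
      },
      spacing: {
        // p-md -> var(--spacing-md), if you prefer token-named spacing
        md: "var(--spacing-md)",
      },
    },
  },
};

export default config;
```

Because the config references variables rather than literal values, a re-export from Figma updates the rendered styles without touching the config.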
Setting Up the MCP Connection
FramingUI's MCP server is the runtime layer that connects AI tools to your design system. Setup runs once per project:
npx -y @framingui/mcp-server@latest init
This command detects your framework, installs dependencies, and writes .mcp.json at the project root:
{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}
Restart Claude Code. The AI can now query your tokens on demand. When your designer updates a color in Figma and exports new tokens, the next AI-generated component automatically uses the updated value—no re-configuration needed.
What Tokens Don't Cover
Design tokens capture values: colors, spacing, border radii, font families. They don't capture behavior: Figma's Auto Layout rules, component variant logic, prototype interactions, or responsive constraints.
You still need to describe layout intent to the AI. A prompt like "create a card with a fixed-width image on the left and flex-grow content on the right" gives the AI the structural context that tokens can't provide. Think of tokens as the vocabulary and your prompt as the grammar.
Text styles, effect styles, and grid styles in Figma also need extra tooling to export as structured tokens, since a plain Figma Variables export doesn't include them. Tokens Studio handles most of these, but it's worth auditing what your export actually contains before relying on it.
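A quick way to audit an export is to list which token types it actually contains. A sketch using jq, assuming each token carries a $type field (the sample file here stands in for your real export):

```shell
# Sample export (stand-in for your real tokens.json)
cat > tokens.json <<'EOF'
{"color":{"primary":{"500":{"$value":"#3B82F6","$type":"color"}}},
 "spacing":{"md":{"$value":"16px","$type":"dimension"}}}
EOF

# List every $type present, to see which categories actually exported
jq -c '[.. | objects | .["$type"]? // empty] | unique' tokens.json
```

If typography or shadow types are missing from the list, those styles never made it into the export and the AI can't use them.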
The Practical Shift
For designers, the change is minimal: use Figma Variables consistently instead of one-off style overrides. Export tokens when design decisions are finalized. Commit the file.
For developers, the change is more significant. Component work becomes prompt-driven. You describe what you need, the AI generates code using your tokens, and you review for architecture and accessibility rather than spending time on color lookups and spacing math.
The review step still matters. AI generates structurally correct code but needs human judgment on layout edge cases, complex state management, and accessibility details beyond basic ARIA attributes. The bottleneck shifts from translation to review—which is where developer time is actually valuable.