Scroll through any showcase of AI-generated SaaS products and you'll notice the same interface appearing over and over: rounded cards, blue primary buttons, gray body text on white backgrounds. It's competent UI. It's also completely indistinguishable from the product next to it.
This isn't an aesthetic problem. It's a structural one—and understanding why it happens points directly to the fix.
AI Generates by Frequency, Not Intention
When you ask an AI to create a user dashboard, the model generates based on what appears most often in its training data. That means rounded corners, because nearly every modern UI uses them. Blue primary colors, because they dominate SaaS products in open-source repositories. Tailwind utility classes, because they're ubiquitous in publicly available code.
The model has no access to your brand personality, your color palette, your spacing philosophy, or your component library. It's averaging across thousands of codebases and producing the statistical center of that distribution. That center is generic by definition.
Consider a hardcoded color in a Tailwind class like text-gray-700. A human designer knows what that color represents in the brand system, when it's appropriate, and how it relates to nearby elements. The AI sees a color value that correlates statistically with body text. It applies the token mechanically rather than semantically. The visual result is correct but meaningless.
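The gap between mechanical and semantic application can be sketched in a few lines. The theme map and token names below are illustrative assumptions, not FramingUI's actual API:

```typescript
// Hypothetical semantic token map -- names and values are illustrative only.
const theme: Record<string, string> = {
  "foreground-body": "#374151", // what text-gray-700 happens to resolve to
  "foreground-muted": "#6b7280",
  "background-page": "#ffffff",
};

// Semantic lookup: the caller states intent ("body text"),
// and the theme decides the concrete value.
function tokenVar(name: string): string {
  if (!(name in theme)) throw new Error(`Unknown token: ${name}`);
  return `var(--${name})`;
}

// Mechanical: the value is frozen forever, regardless of brand.
const mechanical = "text-gray-700";

// Semantic: the same visual result today, but it re-themes automatically.
const semantic = tokenVar("foreground-body"); // "var(--foreground-body)"
```

The mechanical version encodes a value; the semantic version encodes a decision, which is exactly the layer the model cannot reconstruct from training data alone.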
Why Pasting Your Design System Doesn't Work
The intuitive fix is to paste your design tokens or component documentation into the prompt. This helps, but it has serious limits.
Design token documentation consumes substantial context. A thorough token set with semantic names, usage guidelines, and constraints can consume thousands of context tokens before you've written a single line of your actual request. And as conversations grow longer, the model's attention to material from earlier in the context decreases.
More critically, this approach requires manual maintenance. Every new conversation starts fresh. If your design system evolves, you need to update every prompt template. And even with perfectly documented tokens, the model must infer how to apply them to each specific generation decision—an inference that introduces error.
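To see why the context cost adds up, consider what even two well-documented tokens look like when pasted into a prompt. This fragment is a hypothetical example of the genre, not FramingUI's schema:

```json
{
  "foreground-body": {
    "value": "#374151",
    "usage": "Default body text on light surfaces",
    "constraints": "Never place on inverse backgrounds; pair with background-page"
  },
  "foreground-accent": {
    "value": "#4f46e5",
    "usage": "Links, primary buttons, focus rings",
    "constraints": "Reserve for interactive elements only"
  }
}
```

Multiply this by dozens of tokens, plus component documentation, and the design system dominates the prompt before the request itself begins.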
What's needed isn't more text in the prompt. It's a real-time, programmatic connection to the design system.
The MCP Solution
FramingUI connects your design system to Claude Code through the Model Context Protocol. The setup is a single entry in .mcp.json:
```json
{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}
```
With this connection active, Claude can query your design system as part of the generation process. It asks which themes are available, what components exist, what variants a component supports, and what CSS variables those variants use. The answers come from the actual design system catalog—not from training data interpolation.
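Conceptually, each of those questions is a tool call against the server. The tool name and response shape below are hypothetical, included only to illustrate the kind of structured answer the model receives:

```json
{
  "tool": "get_component",
  "arguments": { "component": "card" },
  "result": {
    "variants": ["default", "outlined", "elevated"],
    "cssVariables": ["--background-surface", "--border-default", "--radius-card"]
  }
}
```

The important property is that the answer is data from the catalog, not prose the model must interpret or remember across a long conversation.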
When a model with MCP access generates a profile card, it doesn't guess that rounded corners and blue buttons are appropriate. It queries the component catalog, retrieves the available card variants, and generates code that references the semantic CSS variables your theme defines. var(--background-page) instead of #ffffff. var(--foreground-accent) instead of bg-blue-500.
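As a sketch of what that output looks like, here is a generated profile card that stays entirely inside the token system. The component structure and any variable names beyond those mentioned above are illustrative assumptions:

```typescript
// Illustrative generated output: every visual value is a CSS variable,
// so the theme -- not the generator -- decides the final appearance.
function profileCard(name: string, role: string): string {
  return `
<div style="
  background: var(--background-page);
  color: var(--foreground-body);
  border: 1px solid var(--border-default);
  border-radius: var(--radius-card);
  padding: var(--spacing-md);
">
  <h3 style="color: var(--foreground-accent);">${name}</h3>
  <p>${role}</p>
</div>`.trim();
}

// Grep the output for hex codes or Tailwind color utilities: there are none.
const html = profileCard("Ada Lovelace", "Engineer");
```

Swap the theme's variable definitions and this card re-renders in the new aesthetic without touching the generated markup.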
What Changes in Practice
The most immediate change is the elimination of hardcoded values. AI-generated components stop containing hex color codes or arbitrary Tailwind utilities that have no relationship to your actual design system. Every value traces back to a CSS variable that your theme controls.
The second change is variant accuracy. Because the model queries actual component definitions rather than inferring from training data, it references variants that exist. A button component in FramingUI has documented variants with documented props. The model generates those variants correctly because it read the specification rather than guessing from patterns.
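The difference between guessing and reading a specification can be made concrete. Assuming a catalog entry with the hypothetical shape below (not FramingUI's actual schema), a generated variant can be checked against what the system actually defines:

```typescript
// Hypothetical machine-readable catalog entry for a button component.
interface VariantSpec {
  variants: string[];
  props: Record<string, string[]>; // prop name -> allowed values
}

const buttonSpec: VariantSpec = {
  variants: ["primary", "secondary", "ghost"],
  props: { size: ["sm", "md", "lg"] },
};

// A model with catalog access emits only variants that exist.
// A model without it may emit "outline" or "danger" because those are
// common in training data, not because this design system defines them.
function isValidVariant(spec: VariantSpec, variant: string): boolean {
  return spec.variants.includes(variant);
}
```

With MCP access the check passes by construction; without it, the check is where hallucinated variants get caught, if anyone thinks to run it.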
The third change is cross-session consistency. Because the design system lives in the MCP server rather than in prompt text, every conversation starts with the same reliable source. Session one and session ten produce components that reference the same token set. Visual drift between generated components decreases substantially.
The Prerequisite: Your Design System in Machine-Readable Form
None of this works if your design system only exists as a Figma file or a documentation website. The design system needs to be accessible to the MCP server.
FramingUI approaches this by providing six production-ready themes—classic-magazine, dark-boldness, minimal-workspace, neutral-workspace, pebble, and square-minimalism—each fully structured for MCP consumption. These themes include semantic token definitions, component catalogs, and variant metadata that the AI can query at generation time.
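To illustrate what "structured for MCP consumption" means, a theme might expose something like the following. The exact schema and values here are hypothetical; the point is that tokens and variants are queryable data rather than prose documentation:

```json
{
  "theme": "minimal-workspace",
  "tokens": {
    "--background-page": "#fafafa",
    "--foreground-body": "#1f2937",
    "--foreground-accent": "#0f766e"
  },
  "components": {
    "card": { "variants": ["default", "outlined"] },
    "button": { "variants": ["primary", "secondary", "ghost"] }
  }
}
```

A Figma file or a documentation site holds the same information, but not in a form a tool call can return at generation time.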
The practical path is to pick the theme that most closely matches your visual direction, authenticate (framingui-mcp login), and let the MCP server connect your editor to that structured design system. The AI then generates components that reflect your chosen aesthetic rather than the statistical average of all aesthetics.
Generic UI isn't an inevitable product of AI generation. It's a product of generation without context. Provide the context through a machine-readable design system, and the output starts looking like it belongs to your product specifically.