Deep Dive

Agentic Design Systems Let AI Generate On-Brand UI

Agentic design systems combine design tokens with AI to generate on-brand UI—eliminating generic AI interfaces.

FramingUI Team · 5 min read

A designer-built design system is a reference document. Developers read it, extract values, and apply them to code by hand. This works when humans are the only ones writing UI. It breaks down when AI starts writing substantial portions of that UI code.

An agentic design system is something different. It's a design system that AI can query programmatically—where the design decisions are encoded in machine-readable structures rather than human-readable documentation.

What Traditional Design Systems Give AI

When you tell Claude to generate a component "using your design system," the model has to infer what that means. If you've pasted token values into the prompt, it has numbers to work with. If you've linked to documentation, it may have read it during training. Either way, it's applying design decisions indirectly—interpreting human-readable text and mapping it to code.

The gap shows up in the output. The model applies tokens mechanically rather than semantically. It knows that --color-primary-500 maps to a blue value, but not that this specific blue represents trust in your brand language, should appear at most once per screen, and should pair with neutral grays rather than saturated secondary colors. Those constraints live in designer knowledge, not in the token file.

Without access to that structured knowledge, AI generates components that are visually plausible but semantically wrong for your brand.

What Makes a Design System Agentic

An agentic design system encodes three things that traditional systems leave implicit:

Token semantics. Rather than just defining that --foreground-accent maps to a specific OKLCH value, the system records what that token means: primary action color, maximum one per viewport, intended for high-emphasis interactive elements. The AI can query this semantic layer and apply tokens with intent rather than mechanically.
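A semantic token layer can be sketched as a typed record. This is an illustrative shape, not FramingUI's actual schema; the field names and the helper are hypothetical, though the token name and constraints come from the example above.

```typescript
// Hypothetical shape for a semantic token entry. Field names are
// illustrative -- they show what a queryable semantic layer might encode.
interface TokenSemantics {
  token: string;           // CSS variable name
  value: string;           // resolved OKLCH value
  meaning: string;         // what the token represents in the brand language
  maxPerViewport?: number; // usage constraint an AI can enforce
  pairsWith: string[];     // tokens it is intended to combine with
}

const accent: TokenSemantics = {
  token: "--foreground-accent",
  value: "oklch(0.55 0.2 250)", // placeholder value
  meaning: "primary action color; high-emphasis interactive elements",
  maxPerViewport: 1,
  pairsWith: ["--foreground-neutral", "--background-page"],
};
```

With a structure like this, the model can answer "which token marks the primary action, and how often may it appear?" instead of pattern-matching on a variable name.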

Component knowledge. Rather than listing available components and their props, the system records when and why to use each component—what visual impact it carries, where in a layout it belongs, which variants are appropriate for which contexts. A button component doesn't just have variant="primary" and variant="ghost". It has knowledge that primary buttons belong in modal footers and card action areas, while ghost buttons serve secondary actions that shouldn't compete visually with the primary.
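The button example above could be encoded along these lines. Again, this is a sketch under assumed field names, not the real catalog format:

```typescript
// Hypothetical component-knowledge entry: each variant carries usage
// intent and placement guidance, not just a prop value.
interface ComponentKnowledge {
  component: string;
  variants: Record<string, { role: string; placements: string[] }>;
}

const button: ComponentKnowledge = {
  component: "Button",
  variants: {
    primary: {
      role: "the single high-emphasis action on a surface",
      placements: ["modal footer", "card action area"],
    },
    ghost: {
      role: "secondary action that should not compete with the primary",
      placements: ["toolbars", "inline row actions"],
    },
  },
};
```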

Queryable access. Rather than requiring the design system to be included in every prompt, the system is exposed through a protocol that AI tools can query at generation time. The design system lives outside the conversation context and gets retrieved on demand.

The Model Context Protocol (MCP) is the technical mechanism FramingUI uses for this third layer.

How the MCP Connection Works

The FramingUI MCP server exposes your design system as a set of queryable tools. Claude Code connects to it through a single configuration entry:

{
  "mcpServers": {
    "framingui": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@framingui/mcp-server@latest"]
    }
  }
}

Once connected, Claude can ask the server which themes are available and what each theme contains, which components exist and what their variants are, what CSS variables a specific variant uses, and whether a given screen definition is valid before generating code from it.

The server responds with structured data drawn from the FramingUI component catalog and your configured theme. The model uses this data to generate code that references real CSS variables—var(--background-page), var(--foreground-primary), var(--border-default)—rather than hardcoded values it inferred from training data.
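The difference is checkable. A minimal sketch, assuming a lint-style rule a tool could apply to generated style values (the CSS variable names are the ones named above; the helper is illustrative):

```typescript
// Generated output should reference theme CSS variables, not literals.
const generatedStyle: Record<string, string> = {
  background: "var(--background-page)",
  color: "var(--foreground-primary)",
  border: "1px solid var(--border-default)",
};

// Toy check: every value must reference a CSS custom property
// rather than a hardcoded color inferred from training data.
function usesThemeVariables(style: Record<string, string>): boolean {
  return Object.values(style).every((v) => v.includes("var(--"));
}
```

A hardcoded value like `#3366ff` fails this check; the variable-based output passes regardless of which theme is active.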

The Four-Step Generation Workflow

The MCP server's screen generation workflow is designed to catch errors before they reach the code output stage. The steps are:

  1. get-screen-generation-context — given a natural language description like "analytics dashboard with user metrics," the server returns matching templates, relevant components, and the schema structure required for a valid screen definition.

  2. validate-screen-definition — the model creates a structured JSON definition and submits it for validation. The server checks that referenced components exist, variants are valid, and required fields are present. Validation failures return specific error messages and correction suggestions.

  3. generate_screen — with a validated definition, the server generates React component code that uses your theme's CSS variables and the components from the catalog.

  4. validate-environment — optionally, the server checks that your project has the required packages installed and the runtime contract configured correctly (global stylesheet import, provider mount, theme module).
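Step 2 is the one worth dwelling on. A toy version of that validation, with an invented screen-definition shape and a two-component catalog (the real schema and rules belong to the FramingUI server), looks like this:

```typescript
// Hypothetical screen definition, loosely mirroring step 2 above.
interface ScreenDefinition {
  name: string;
  sections: { component: string; variant: string }[];
}

// Stand-in component catalog: component name -> allowed variants.
const catalog: Record<string, string[]> = {
  Card: ["default", "elevated"],
  Button: ["primary", "ghost"],
};

// Check that referenced components exist and variants are valid,
// returning specific error messages the model can act on.
function validate(def: ScreenDefinition): string[] {
  const errors: string[] = [];
  for (const s of def.sections) {
    const variants = catalog[s.component];
    if (!variants) {
      errors.push(`unknown component: ${s.component}`);
    } else if (!variants.includes(s.variant)) {
      errors.push(`invalid variant '${s.variant}' for ${s.component}`);
    }
  }
  return errors;
}

const dashboard: ScreenDefinition = {
  name: "analytics-dashboard",
  sections: [
    { component: "Card", variant: "elevated" },
    { component: "Button", variant: "primary" },
  ],
};
```

An empty error list means the definition is safe to hand to the generation step; a non-empty one goes back to the model with corrections before any code is written.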

This four-step flow is what separates agentic generation from prompt-based generation. The design system participates actively in the generation process, not just as a reference document.

What This Changes for Teams

For solo developers, the primary benefit is speed without drift. AI generates components that match your chosen theme on the first attempt. You're not spending time correcting hallucinated utility classes or hunting for hardcoded color values that don't match your palette.

For teams, the benefit is consistency across contributors. Whether a component is generated by a junior developer using AI assistance or by a senior developer writing by hand, the same CSS variables appear in the output. The design system isn't a social contract enforced through code review—it's a technical constraint enforced at generation time.

The agentic design system doesn't replace designer judgment. Designers still make the decisions: which theme captures the brand, which component variants best serve each interaction pattern, how the visual hierarchy should be weighted. What changes is how those decisions propagate into code. They flow through a machine-readable system rather than through documentation that AI has to interpret.

That's the practical difference between a design system built for humans and one built for AI agents.

Ready to build with FramingUI?

Build consistent UI with AI-ready design tokens. No more hallucinated colors or spacing.

Try FramingUI