Vercel JSON-Render: The Complete Guide to Building AI-Generated UIs in 2026
Learn how to use Vercel's open-source JSON-Render framework to let AI models generate type-safe, streaming React UIs from natural language prompts. Includes practical code examples, Zod schemas, and production patterns.
What if your AI could generate entire user interfaces — not just text, but real, interactive React components — streamed progressively as the model thinks? That's exactly what Vercel's JSON-Render does, and it's changing how we think about frontend development.
Since its open-source launch in January 2026, JSON-Render has accumulated over 13,000 GitHub stars and 200+ releases. It ships with renderers for React, Vue, Svelte, Solid, and even React Native. And unlike previous attempts at "generative UI," it solves the fundamental trust problem: the AI never writes raw code — it generates constrained JSON that maps to your pre-approved component catalog.
In this guide, I'll walk you through everything you need to build AI-generated interfaces with JSON-Render: from understanding the architecture, to defining Zod schemas, to streaming UIs in a Next.js app. Let's dive in.
Why JSON-Render Matters for Frontend Developers
We've all seen AI generate code. But code generation has a trust problem: an LLM can output anything, including malicious scripts, broken imports, or components that crash your app. JSON-Render takes a fundamentally different approach:
- You define the rules. A catalog of permitted components and actions, described with Zod schemas, constrains what the AI can generate.
- The AI generates JSON, not code. A flat JSON tree of typed elements that reference only your catalog entries.
- The Renderer maps JSON to real components. Your React (or Vue/Svelte/Solid) components handle the actual rendering.
- Streaming is built-in. The UI renders progressively as the model streams its response — no waiting for the full output.
As Vercel CEO Guillermo Rauch put it, the approach "plugs the AI directly into the rendering layer." It's a constraint-based system that turns the unpredictability of LLMs into deterministic, type-safe UI composition.
Architecture Overview: How JSON-Render Works
The pipeline is straightforward:
- Define your component catalog — Zod schemas describing every component the AI is allowed to use, along with their props.
- Send a prompt to your LLM — Include the catalog schema in your system prompt so the model knows what it can generate.
- LLM outputs constrained JSON — A flat tree of typed elements referencing catalog entries.
- Renderer maps JSON to React components — The `<Renderer>` component progressively renders the output.
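To make the "flat tree" idea concrete, here is a sketch of what a generated spec could look like. The field names (`root`, `elements`, `children`) and the ID scheme are my assumptions for illustration, not the library's documented wire format:

```typescript
// Hypothetical shape of a generated spec: a flat map of typed elements.
// Field names (root, elements, type, props, children) are illustrative only.
interface SpecElement {
  type: string;                   // must match a catalog entry
  props: Record<string, unknown>; // validated against that entry's schema
  children?: string[];            // IDs of child elements (flat, not nested)
}

interface UISpec {
  root: string;
  elements: Record<string, SpecElement>;
}

const spec: UISpec = {
  root: "card_1",
  elements: {
    card_1: {
      type: "Card",
      props: { title: "Welcome" },
      children: ["heading_1", "button_1"],
    },
    heading_1: { type: "Heading", props: { level: "h2", text: "Hello!" } },
    button_1: { type: "Button", props: { label: "Get started", action: "navigate-home" } },
  },
};

// Depth-first walk: a renderer resolves child IDs into a component tree.
function flatten(s: UISpec, id = s.root): string[] {
  const el = s.elements[id];
  if (!el) return [];
  return [el.type, ...(el.children ?? []).flatMap((c) => flatten(s, c))];
}
```

Because the tree is flat and every node carries its type, the renderer can start drawing elements as soon as their entries arrive in the stream, without waiting for the full nesting to close. Here, `flatten(spec)` resolves to `["Card", "Heading", "Button"]`.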
Here's a visual of the flow:
User Prompt → LLM + Catalog Schema → JSON Spec → Renderer → React Components
        ↓
  Streamed progressively

Getting Started: Installation and Setup
JSON-Render uses a monorepo structure published under the @json-render scope on npm. Here's how to set up a Next.js project:
# Create a new Next.js app (or use an existing one)
npx create-next-app@latest my-generative-ui --typescript --app
cd my-generative-ui
# Install JSON-Render core + React renderer
pnpm add @json-render/core @json-render/react
# Install the shadcn/ui preset (optional — 36 pre-built components)
pnpm add @json-render/shadcn
# Install Zod for schema definitions
pnpm add zod
# Install Vercel AI SDK for streaming
pnpm add ai @ai-sdk/openai

Step 1: Define Your Component Catalog
The catalog is the heart of JSON-Render. Each entry defines a component type, its props (as a Zod schema), and whether it accepts children. Here's a practical example:
// lib/catalog.ts
import { z } from "zod";
import { createCatalog } from "@json-render/core";
export const catalog = createCatalog({
Heading: {
props: z.object({
level: z.enum(["h1", "h2", "h3"]),
text: z.string(),
}),
},
Paragraph: {
props: z.object({
text: z.string(),
}),
},
Button: {
props: z.object({
label: z.string(),
variant: z.enum(["primary", "secondary", "destructive"]).default("primary"),
action: z.string().describe("Action identifier to trigger on click"),
}),
},
Card: {
props: z.object({
title: z.string(),
description: z.string().optional(),
}),
children: true,
},
CodeBlock: {
props: z.object({
language: z.string(),
code: z.string(),
}),
},
DataTable: {
props: z.object({
columns: z.array(z.object({
key: z.string(),
label: z.string(),
})),
rows: z.array(z.record(z.string())),
}),
},
Alert: {
props: z.object({
type: z.enum(["info", "warning", "error", "success"]),
message: z.string(),
}),
},
});

Notice how each component's props are fully typed with Zod. The LLM receives this schema and can only generate JSON that conforms to it. If the model tries to output a <script> tag or an unknown component, the renderer simply ignores it.
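The enforcement itself is simple in principle. Here is a dependency-free sketch of the idea (not JSON-Render's internals): anything whose type is not a catalog key gets dropped before it can reach the DOM:

```typescript
// Minimal sketch of catalog enforcement (not JSON-Render's actual code):
// elements whose type is not in the allow-list are filtered out.
const allowedTypes = new Set([
  "Heading", "Paragraph", "Button", "Card", "CodeBlock", "DataTable", "Alert",
]);

interface GeneratedElement {
  type: string;
  props: Record<string, unknown>;
}

function sanitize(elements: GeneratedElement[]): GeneratedElement[] {
  return elements.filter((el) => allowedTypes.has(el.type));
}

const modelOutput: GeneratedElement[] = [
  { type: "Heading", props: { level: "h1", text: "Report" } },
  { type: "script", props: { src: "https://evil.example/x.js" } }, // hallucinated, dropped
  { type: "Alert", props: { type: "info", message: "Done." } },
];

const safe = sanitize(modelOutput); // only Heading and Alert survive
```

In the real library the Zod schemas also validate each element's props, so a well-typed component name with malformed props would be rejected too; this sketch only shows the type-level gate.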
Step 2: Map Catalog Entries to React Components
Next, create the actual React components that the Renderer will use:
// components/catalog-components.tsx
"use client";
import type { ComponentMap } from "@json-render/react";
import type { catalog } from "@/lib/catalog";
export const components: ComponentMap<typeof catalog> = {
Heading: ({ level, text }) => {
const Tag = level; // union of "h1" | "h2" | "h3", valid as a JSX tag name
return <Tag className="font-bold tracking-tight">{text}</Tag>;
},
Paragraph: ({ text }) => (
<p className="text-gray-700 leading-relaxed">{text}</p>
),
Button: ({ label, variant, action }) => (
<button
className={`px-4 py-2 rounded-lg font-medium ${
variant === "primary"
? "bg-blue-600 text-white hover:bg-blue-700"
: variant === "destructive"
? "bg-red-600 text-white hover:bg-red-700"
: "bg-gray-200 text-gray-800 hover:bg-gray-300"
}`}
onClick={() => console.log("Action:", action)}
>
{label}
</button>
),
Card: ({ title, description, children }) => (
<div className="rounded-xl border bg-white p-6 shadow-sm">
<h3 className="text-lg font-semibold">{title}</h3>
{description && <p className="text-sm text-gray-500 mt-1">{description}</p>}
<div className="mt-4">{children}</div>
</div>
),
CodeBlock: ({ language, code }) => (
<pre className="bg-gray-900 text-gray-100 rounded-lg p-4 overflow-x-auto">
<code className={`language-${language}`}>{code}</code>
</pre>
),
DataTable: ({ columns, rows }) => (
<div className="overflow-x-auto">
<table className="min-w-full divide-y divide-gray-200">
<thead className="bg-gray-50">
<tr>
{columns.map((col) => (
<th key={col.key} className="px-4 py-3 text-left text-sm font-medium">
{col.label}
</th>
))}
</tr>
</thead>
<tbody className="divide-y divide-gray-200">
{rows.map((row, i) => (
<tr key={i}>
{columns.map((col) => (
<td key={col.key} className="px-4 py-3 text-sm">{row[col.key]}</td>
))}
</tr>
))}
</tbody>
</table>
</div>
),
Alert: ({ type, message }) => {
const styles = {
info: "bg-blue-50 border-blue-200 text-blue-800",
warning: "bg-yellow-50 border-yellow-200 text-yellow-800",
error: "bg-red-50 border-red-200 text-red-800",
success: "bg-green-50 border-green-200 text-green-800",
};
return (
<div className={`rounded-lg border p-4 ${styles[type]}`}>{message}</div>
);
},
};

Step 3: Build the API Route for AI Generation
Now wire up a Next.js API route that sends the catalog schema to your LLM and streams the response:
// app/api/generate-ui/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { catalogToPrompt } from "@json-render/core";
import { catalog } from "@/lib/catalog";
export async function POST(req: Request) {
const { prompt } = await req.json();
const catalogPrompt = catalogToPrompt(catalog);
const result = streamText({
model: openai("gpt-4o"),
system: `You are a UI generator. Given a user request, generate a JSON
UI specification using ONLY the following component catalog.
Output valid JSON matching the schema — nothing else.
${catalogPrompt}`,
prompt,
});
return result.toDataStreamResponse();
}

The catalogToPrompt() utility from @json-render/core serializes your Zod schemas into a format the LLM can understand. This is critical: it means the model sees your exact component API, including prop types, defaults, and descriptions.
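You can picture roughly what such a serialization produces with a hand-rolled equivalent. The real catalogToPrompt() output format is the library's own; this is only the idea, with a made-up prop-spec shape:

```typescript
// Hand-rolled illustration of turning a catalog into prompt text.
// The real catalogToPrompt() output will differ; the point is that the model
// is told each component's name, its props, their types, and descriptions.
interface PropSpec {
  type: string;
  description?: string;
}
type CatalogSketch = Record<string, Record<string, PropSpec>>;

const catalogSketch: CatalogSketch = {
  Button: {
    label: { type: "string" },
    variant: { type: '"primary" | "secondary" | "destructive"' },
    action: { type: "string", description: "Action identifier to trigger on click" },
  },
};

function toPrompt(catalog: CatalogSketch): string {
  return Object.entries(catalog)
    .map(([name, props]) => {
      const lines = Object.entries(props).map(
        ([prop, spec]) =>
          `  - ${prop}: ${spec.type}${spec.description ? ` (${spec.description})` : ""}`
      );
      return `Component ${name}:\n${lines.join("\n")}`;
    })
    .join("\n\n");
}
```

This is also why the `.describe()` calls in your Zod schemas matter: whatever you write there ends up in the system prompt, steering the model toward correct usage.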
Step 4: Build the Frontend with Streaming Renderer
Finally, create the page that ties everything together:
// app/page.tsx
"use client";
import { useState } from "react";
import { useChat } from "ai/react";
import { Renderer } from "@json-render/react";
import { catalog } from "@/lib/catalog";
import { components } from "@/components/catalog-components";
export default function GenerativeUIPage() {
const [uiSpec, setUiSpec] = useState<any>(null);
const { messages, input, handleInputChange, handleSubmit, isLoading } =
useChat({
api: "/api/generate-ui",
onFinish: (message) => {
try {
const parsed = JSON.parse(message.content);
setUiSpec(parsed);
} catch (e) {
console.error("Failed to parse UI spec", e);
}
},
});
return (
<div className="max-w-4xl mx-auto p-8">
<h1 className="text-3xl font-bold mb-6">AI UI Generator</h1>
<form onSubmit={handleSubmit} className="flex gap-3 mb-8">
<input
value={input}
onChange={handleInputChange}
placeholder="Describe the UI you want to build..."
className="flex-1 rounded-lg border px-4 py-2"
/>
<button
type="submit"
disabled={isLoading}
className="bg-black text-white px-6 py-2 rounded-lg
disabled:opacity-50 transition-opacity"
>
{isLoading ? "Generating..." : "Generate"}
</button>
</form>
{uiSpec && (
<div className="rounded-xl border p-6 bg-gray-50">
<Renderer
catalog={catalog}
components={components}
spec={uiSpec}
fallback={({ type }) => (
<div className="text-red-500">Unknown: {type}</div>
)}
/>
</div>
)}
</div>
);
}

Production Patterns: Beyond the Basics
Pattern 1: Action Handlers with a Registry
Your Button component accepts an action string, but how do you execute those actions? Create an action registry:
// lib/action-registry.ts
type ActionHandler = (params?: Record<string, unknown>) => void | Promise<void>;
const registry = new Map<string, ActionHandler>();
export function registerAction(name: string, handler: ActionHandler) {
registry.set(name, handler);
}
export function executeAction(name: string, params?: Record<string, unknown>) {
const handler = registry.get(name);
if (!handler) {
console.warn(`No handler registered for action: ${name}`);
return;
}
return handler(params);
}
registerAction("navigate-home", () => window.location.href = "/");
registerAction("open-settings", () => window.location.href = "/settings");
registerAction("submit-feedback", async (params) => {
await fetch("/api/feedback", {
method: "POST",
body: JSON.stringify(params)
});
});

Pattern 2: Using the shadcn/ui Preset
Don't want to build components from scratch? JSON-Render ships with 36 pre-built shadcn/ui components:
// Using the pre-built preset instead of custom components
import { shadcnCatalog, shadcnComponents } from "@json-render/shadcn";
// In your Renderer:
<Renderer
catalog={shadcnCatalog}
components={shadcnComponents}
spec={uiSpec}
/>

This gives you instant access to Cards, Tables, Forms, Dialogs, Tabs, and more, all styled with shadcn/ui's Tailwind CSS + Radix UI design system. It's the fastest path to a working prototype.
Pattern 3: Combining with Server Components
For pages where the AI-generated UI is part of a larger layout, use React Server Components for the static shell and client components for the dynamic AI portion:
// app/dashboard/page.tsx (Server Component)
import { DynamicUIPanel } from "./dynamic-ui-panel";
export default async function DashboardPage() {
const stats = await fetchDashboardStats();
return (
<div className="grid grid-cols-12 gap-6">
<aside className="col-span-3">
<nav>{/* Navigation items */}</nav>
</aside>
<main className="col-span-9">
<DynamicUIPanel initialContext={stats} />
</main>
</div>
);
}

JSON-Render vs Google A2UI: Which Should You Choose?
Google released a comparable project called A2UI (Agent-to-User Interface) in late 2025. While both follow the same high-level pipeline — AI → JSON → Component Catalog → UI — they solve different problems:
- JSON-Render is a tool tightly coupled to a specific application's component set. It's ideal when you control the full stack and want maximum type-safety within your app.
- A2UI positions itself as a protocol for cross-agent interoperability. Think of it as the REST of generative UI — better for multi-agent systems where different services need to compose UIs together.
For most Next.js developers building a single application, JSON-Render is the pragmatic choice. If you're building a platform where third-party agents need to generate UIs for your system, A2UI's protocol approach may be more appropriate. You can also read our guide on building AI chatbots with the Vercel AI SDK for more context on AI-powered streaming interfaces.
Performance Considerations
A few things to keep in mind when deploying JSON-Render in production:
- Bundle size: The core renderer adds roughly 8KB gzipped. The shadcn preset is larger (~45KB) since it includes 36 components — tree-shake what you don't need.
- Streaming latency: First meaningful paint depends on your LLM's time-to-first-token. Edge-deployed API routes on Vercel help minimize this. See our Next.js performance optimization guide for more tips.
- Validation overhead: Zod validation runs on each streamed chunk. For complex catalogs with 50+ components, consider using z.lazy() for recursive structures.
- Caching: If the same prompts generate similar UIs, cache the JSON specs in Redis or Vercel KV to avoid redundant LLM calls.
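The caching point is easy to sketch with an in-memory Map standing in for Redis or Vercel KV. The key derivation and the placeholder generateSpec() are my assumptions; only the cache-aside pattern is the point:

```typescript
import { createHash } from "node:crypto";

// In-memory stand-in for Redis / Vercel KV.
const specCache = new Map<string, string>();

// Stand-in for the LLM call; in a real app this would hit your generate-ui route.
async function generateSpec(prompt: string): Promise<string> {
  return JSON.stringify({ root: "el_1", prompt }); // placeholder spec
}

async function cachedSpec(prompt: string): Promise<string> {
  // Normalize before hashing so trivially different prompts share an entry.
  const key = createHash("sha256").update(prompt.trim().toLowerCase()).digest("hex");
  const hit = specCache.get(key);
  if (hit !== undefined) return hit;       // cache hit: skip the LLM entirely
  const spec = await generateSpec(prompt); // cache miss: generate once, store
  specCache.set(key, spec);
  return spec;
}
```

Whether to normalize prompts this aggressively is a product decision; for a search-like UI generator it saves real money, while for open-ended prompts exact-match keys may be safer.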
Common Pitfalls and How to Avoid Them
After building several projects with JSON-Render, here are the mistakes I've seen most often:
- Catalog too large: If your catalog has 100+ components, the LLM's system prompt becomes enormous, eating into your context window and slowing generation. Keep it focused — 15-30 components is the sweet spot.
- Missing prop descriptions: Zod's .describe() method is your best friend. Without descriptions, the LLM has to guess what variant or action mean. Be explicit.
- Not handling partial JSON: During streaming, the JSON is incomplete. The Renderer handles this gracefully, but if you're parsing manually, use a streaming JSON parser like @streamparser/json.
- Ignoring the fallback prop: Always provide a fallback renderer. LLMs occasionally hallucinate component types not in your catalog. A graceful fallback prevents the entire UI from breaking.
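For the manual-parsing case, the crudest workable approach is to buffer chunks and re-attempt a full parse on each one. A proper streaming parser like @streamparser/json parses incrementally instead, which is what you want for large specs; this sketch only shows the fallback behavior:

```typescript
// Naive buffered parsing of a streamed JSON spec: accumulate chunks and
// re-attempt JSON.parse after each one. Returns null while still partial.
function createSpecBuffer() {
  let buffer = "";
  return {
    push(chunk: string): unknown | null {
      buffer += chunk;
      try {
        return JSON.parse(buffer); // succeeds only once the JSON is complete
      } catch {
        return null;               // still partial; keep waiting
      }
    },
  };
}

const specBuffer = createSpecBuffer();
const chunks = ['{"root":"el_1","elem', 'ents":{"el_1":{"type":', '"Alert"}}}'];
const results = chunks.map((c) => specBuffer.push(c));
// results[0] and results[1] are null; results[2] is the parsed spec
```

The obvious cost is that you render nothing until the very last chunk, which defeats the progressive-rendering selling point; that is exactly why the Renderer and @streamparser/json work incrementally.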
What This Means for Frontend Development
JSON-Render represents a genuine shift in how we think about UI development. As one Reddit commenter observed, "we've been moving toward constraint-based systems for years — design tokens, component libraries, Storybook configs. This just pushes that boundary further into runtime composition instead of build-time authoring."
The role of the frontend developer isn't disappearing — it's evolving. Instead of hand-coding every screen, you're designing the possibility space: the catalog of components, their variants, their constraints. The AI handles composition. You handle quality, design systems, and user experience.
That's not less work. It's different work. And arguably more interesting.
Getting Started Today
If you want to experiment with JSON-Render:
- Check out the official documentation and interactive playground.
- Browse the GitHub repository for examples.
- Start with the shadcn/ui preset for the fastest path to a working prototype.
- Define a small, focused catalog (5-10 components) and iterate from there.
The future of frontend is AI-composed, developer-constrained interfaces. JSON-Render is the best tool we have today to build that future responsibly.