Optimizing Core Web Vitals from 60 to 95: A Real Next.js Case Study
A real-world deep dive into optimizing Core Web Vitals on a production Next.js 14 app — from a PageSpeed score of 60 to 95. Covers LCP, INP, CLS diagnostics, next/image, next/font, bundle splitting, ISR vs SSG, and React 18 performance patterns with before/after metrics.
The Starting Point: A Score of 60 and a Frustrated Team
It started with a complaint from marketing: "Our landing page feels slow." PageSpeed Insights confirmed it — Performance score: 60. That's not catastrophic, but for a Next.js app that was supposed to be fast by default, it stung.
This article documents the real journey of diagnosing and fixing Core Web Vitals on a production Next.js 14 app — a content-heavy blog platform with dynamic routes, third-party scripts, custom fonts, and server-side data fetching. By the end, we hit a 95 on mobile and 98 on desktop. Here's exactly how.
Understanding the Three Metrics That Actually Matter
Before touching a single line of code, you need to understand what Google's Core Web Vitals are actually measuring — and why a bad score in each one feels different to users.
LCP — Largest Contentful Paint
LCP measures how long it takes for the largest visible element (usually a hero image or an <h1>) to render. Google's thresholds: good = under 2.5s, needs improvement = 2.5s to 4s, poor = above 4s.
Our initial LCP was 4.1 seconds. The culprit? A 1.2MB hero image loaded with a plain <img> tag, no preloading, no sizing hints.
INP — Interaction to Next Paint
INP (which replaced FID in 2024) measures the latency between a user interaction and the next frame paint. Good = under 200ms. Our worst INP was 480ms on the article listing page — caused by a filter component re-rendering the entire list on every keystroke.
CLS — Cumulative Layout Shift
CLS measures visual stability. A score above 0.1 is "needs improvement." We had 0.28 — visible layout jumps caused by fonts loading late and images without explicit dimensions.
The key insight: each metric has a different root cause and requires a different fix. Treating them all as "make the site faster" is too vague to be useful.
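Those thresholds are easy to misremember, so it can help to encode them once. The helper below is purely illustrative (the name rateMetric is ours, not part of any library); something like it is useful in a custom field-data reporting pipeline:

```javascript
// Google's published Core Web Vitals thresholds.
// LCP and INP are in milliseconds; CLS is a unitless score.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Rate a metric value the way Google does: at or below the "good"
// boundary is good, above the "poor" boundary is poor, and
// everything in between needs improvement.
function rateMetric(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`Unknown metric: ${name}`);
  if (value <= t.good) return 'good';
  if (value > t.poor) return 'poor';
  return 'needs-improvement';
}
```

By this rubric, our starting numbers (LCP 4100ms, INP 480ms, CLS 0.28) rate poor, needs-improvement, and poor respectively: three different problems needing three different fixes.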
Step 1: Diagnosing with Real Tools
Don't guess. Before touching code, we ran a proper diagnostic pass using three tools:
- PageSpeed Insights (field data from CrUX + lab data from Lighthouse)
- Chrome DevTools Performance panel (flame charts, long tasks, layout shift regions)
- Next.js Bundle Analyzer (@next/bundle-analyzer) for JavaScript weight
The bundle analyzer revealed something immediately: moment.js (67KB gzipped) was being imported by a date formatting utility used in exactly two places. We also had lodash imported as a full bundle instead of individual functions.
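For reference, wiring up the analyzer is a one-time config change. This is the standard @next/bundle-analyzer setup (a sketch; merge it with your existing next.config.js rather than replacing it):

```javascript
// next.config.js
// Run the analyzer with: ANALYZE=true npm run build
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config (images, etc.)
});
```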
Step 2: Fixing LCP — Image Optimization That Actually Works
The hero image fix was the single biggest win. Here's what we changed:
Before
// Old: plain img tag, no optimization
<img src="/hero.jpg" alt="Hero" />
After
import Image from 'next/image';
// next.config.js
const nextConfig = {
images: {
formats: ['image/avif', 'image/webp'],
deviceSizes: [640, 750, 828, 1080, 1200, 1920],
minimumCacheTTL: 60 * 60 * 24 * 30, // 30 days
},
};
// Component
<Image
src="/hero.jpg"
alt="Hero"
width={1200}
height={630}
priority // <-- preloads this image
quality={85}
placeholder="blur"
blurDataURL={heroBlurDataURL}
/>
Three things matter here:
- priority adds a <link rel="preload"> in the document <head>, telling the browser to fetch this image before parsing the rest of the page.
- Setting explicit width and height reserves layout space, which directly fixes CLS.
- The placeholder="blur" shows a low-quality preview immediately, improving perceived performance.
Result: LCP dropped from 4.1s to 1.8s. That single change moved LCP from "poor" to "good."
Generating Blur Placeholders at Build Time
// lib/imageUtils.ts
import { getPlaiceholder } from 'plaiceholder';
import fs from 'fs';
import path from 'path';
export async function getBlurDataURL(imagePath: string) {
const buffer = fs.readFileSync(path.join(process.cwd(), 'public', imagePath));
const { base64 } = await getPlaiceholder(buffer);
return base64;
}
// In a Server Component page (App Router), fetch the placeholder with the post:
export default async function BlogPost({ params }) {
  const post = await getPost(params.slug);
  const blurDataURL = await getBlurDataURL(post.coverImage);
  return <Article post={post} heroBlurDataURL={blurDataURL} />;
}
Step 3: Font Loading — The Hidden CLS Killer
Custom fonts are a CLS trap. When the browser loads a page, it renders text with a fallback system font, then swaps to the custom font when it loads. That swap shifts layout. Our app used Google Fonts loaded via a <link> tag in _document.tsx.
The Fix: next/font
// Before (in _document.tsx) — WRONG
<link
href="https://fonts.googleapis.com/css2?family=Inter:wght@400;600;700&display=swap"
rel="stylesheet"
/>
// After (in app/layout.tsx) — CORRECT
import { Inter } from 'next/font/google';
const inter = Inter({
subsets: ['latin'],
display: 'swap',
variable: '--font-inter',
preload: true,
});
export default function RootLayout({ children }) {
return (
<html lang="en" className={inter.variable}>
<body>{children}</body>
</html>
);
}
next/font downloads the font at build time and self-hosts it, so there is no runtime request to Google. It also generates a fallback @font-face with size-adjust, ascent-override, and descent-override metrics tuned to match the custom font, which all but eliminates layout shift on font swap.
CLS improvement from fonts alone: 0.28 → 0.09.
Step 4: Bundle Splitting and Removing Dead Weight
The bundle analyzer showed our initial JavaScript parse cost was ~420KB gzipped. Here's what we cut:
Replacing moment.js
// Before
import moment from 'moment';
const formatted = moment(date).format('MMMM D, YYYY');
// After — date-fns with tree shaking
import { format } from 'date-fns';
const formatted = format(new Date(date), 'MMMM d, yyyy');
// Saves ~67KB gzipped
Replacing lodash with native methods or cherry-picks
// Before
import _ from 'lodash';
const grouped = _.groupBy(posts, 'category');
// After — native Object.groupBy (ES2024; check your target runtimes)
const grouped = Object.groupBy(posts, ({ category }) => category);
// Or for older targets, cherry-pick:
import groupBy from 'lodash/groupBy'; // ~2KB vs 24KB for the full bundle
Dynamic Imports for Heavy Components
import dynamic from 'next/dynamic';
// Heavy markdown editor — only needed on /admin routes
const MarkdownEditor = dynamic(() => import('@/components/MarkdownEditor'), {
ssr: false,
loading: () => <div className="editor-skeleton" />,
});
// Heavy syntax highlighter — defer until visible
const CodeBlock = dynamic(() => import('@/components/CodeBlock'), {
ssr: false,
});
Final bundle reduction: 420KB → 218KB gzipped (48% reduction). JavaScript parse time dropped from 1.8s to 0.9s on a mid-range mobile device.
Step 5: Fixing INP — The React Re-render Problem
Our worst INP came from the article filter component. On every keystroke in the search input, it re-rendered a list of 200+ article cards. Here's the pattern that killed us:
Before (naive implementation)
// ArticleList.tsx — SLOW
export function ArticleList({ articles }) {
const [query, setQuery] = useState('');
const filtered = articles.filter(a =>
a.title.toLowerCase().includes(query.toLowerCase())
);
return (
<>
<input value={query} onChange={e => setQuery(e.target.value)} />
{filtered.map(article => (
<ArticleCard key={article.id} article={article} />
))}
</>
);
}
After (deferred + memoized)
import { useState, useDeferredValue, useMemo, memo } from 'react';
// Memoize expensive card renders
const ArticleCard = memo(function ArticleCard({ article }) {
return <div>...</div>;
});
export function ArticleList({ articles }) {
const [query, setQuery] = useState('');
const deferredQuery = useDeferredValue(query); // React 18 concurrent feature
const filtered = useMemo(() =>
articles.filter(a =>
a.title.toLowerCase().includes(deferredQuery.toLowerCase())
),
[articles, deferredQuery]
);
const isStale = query !== deferredQuery;
return (
<>
<input value={query} onChange={e => setQuery(e.target.value)} />
<div style={{ opacity: isStale ? 0.7 : 1, transition: 'opacity 0.2s' }}>
{filtered.map(article => (
<ArticleCard key={article.id} article={article} />
))}
</div>
</>
);
}
useDeferredValue lets React prioritize the input update (immediate) and defer the expensive filter/render (background). The visual stale state gives users feedback that filtering is in progress.
INP result: 480ms → 140ms — from the high end of "needs improvement" to solidly "good."
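Memoization and deferral fixed our specific case, but one more general INP tactic is worth knowing: breaking a long task into chunks so the browser can handle pending input between them. A minimal sketch (processInChunks and its parameters are our own hypothetical names, not a library API):

```javascript
// Process a large array in small chunks, yielding to the event loop
// between chunks so pending interactions can be handled and painted.
async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      processItem(item);
    }
    // Yield control; in newer browsers, scheduler.yield() is an alternative.
    await new Promise(resolve => setTimeout(resolve, 0));
  }
}
```

In the browser you would call this from an event handler instead of synchronously churning through all 200+ cards' worth of work in one long task.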
Step 6: SSG vs ISR — Choosing the Right Rendering Strategy
Not all pages have the same data freshness requirements. We audited every route type and picked the right strategy:
Static Site Generation (SSG) — for evergreen content
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
const posts = await getAllPosts();
return posts.map(post => ({ slug: post.slug }));
}
export default async function BlogPost({ params }) {
const post = await getPost(params.slug);
return <Article post={post} />;
}
Pure SSG gives the fastest possible TTFB (Time to First Byte) because Next.js serves pre-built HTML from CDN edge nodes. No database query at request time.
Incremental Static Regeneration (ISR) — for frequently updated content
// app/blog/page.tsx — article listing, updated when new posts published
export const revalidate = 3600; // Rebuild at most every hour
export default async function BlogIndex() {
const posts = await getLatestPosts();
return <ArticleList articles={posts} />;
}
ISR gives you the speed of static with the freshness of server rendering. The first request after the revalidation window triggers a background rebuild. Visitors always get a cached response — never a loading spinner.
Key decision framework: If content changes less than once a day → SSG. Changes hourly → ISR with revalidate = 3600. Changes per-request with user-specific data → Server Components + streaming.
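That decision framework is simple enough to encode directly. The helper below is purely illustrative (our own naming, not a Next.js API), but it is roughly the rubric we applied to each route:

```javascript
// Pick a rendering strategy from data-freshness requirements.
// changesPerDay: rough content update frequency.
// userSpecific: true when the page needs per-request, per-user data.
function chooseRenderingStrategy({ changesPerDay = 0, userSpecific = false }) {
  if (userSpecific) return 'server-components-streaming';
  if (changesPerDay <= 1) return 'ssg';
  if (changesPerDay <= 24) return 'isr'; // e.g. revalidate = 3600
  return 'ssr';
}
```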
Step 7: Third-Party Scripts Without the Performance Tax
Analytics, chat widgets, and ad scripts are CWV killers. Next.js's Script component gives you fine-grained control over when they load:
import Script from 'next/script';
// Google Analytics — load after page is interactive
<Script
src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXX"
strategy="afterInteractive"
/>
// Chat widget — load only when browser is idle
<Script
src="https://widget.intercom.io/widget/xxxx"
strategy="lazyOnload"
/>
// Critical inline script (e.g., theme detection) — run before hydration.
// Note: beforeInteractive scripts must live in the root layout (app/layout.tsx).
<Script id="theme-init" strategy="beforeInteractive">
{`
const theme = localStorage.getItem('theme') || 'light';
document.documentElement.setAttribute('data-theme', theme);
`}
</Script>
Moving our analytics from beforeInteractive to afterInteractive alone shaved 340ms of blocking time off our TBT (Total Blocking Time).
The Final Numbers: Before vs After
After all optimizations, measured on PageSpeed Insights (mobile, median of 5 runs):
- Performance Score: 60 → 95
- LCP: 4.1s → 1.8s (good)
- INP: 480ms → 140ms (good)
- CLS: 0.28 → 0.04 (good)
- TBT: 890ms → 210ms
- FCP: 2.8s → 1.1s
- Speed Index: 3.4s → 1.6s
- JS Bundle (gzipped): 420KB → 218KB
The Priority Order That Matters
If you're starting from scratch on your own Next.js optimization journey, here's the order of impact from our experience:
1. Fix images first: use next/image with priority on LCP elements. Biggest single win.
2. Switch to next/font: eliminates font-related CLS with one import change.
3. Audit your bundle: run ANALYZE=true npm run build and kill bloated dependencies.
4. Choose the right rendering strategy: for raw performance, SSG > ISR > SSR > CSR. Use the most static strategy your data freshness allows.
5. Defer third-party scripts: strategy="afterInteractive" for analytics, lazyOnload for non-critical widgets.
6. Fix INP with React 18 patterns: useDeferredValue, memo, and startTransition for expensive renders.
Conclusion
A 60 → 95 improvement isn't magic. It's methodical diagnosis followed by targeted fixes. The biggest lesson: Next.js gives you the tools — next/image, next/font, next/script, ISR, Server Components — but you have to actually use them correctly. The defaults are good. The intentional configurations are great.
Start with PageSpeed Insights to identify which metric is hurting you most. Fix that one first. Measure again. Repeat. Core Web Vitals are a marathon, not a sprint — but with Next.js, you've already got a head start.