Why Lighthouse Scores Lie (And How to Make Them Tell the Truth)
A 100/100 in a lab environment means nothing if your real users are on a mid-range Android device on a 4G connection in a busy city. The gap between lab scores and field data is where most performance work gets lost.
We start every performance engagement by looking at CrUX data — real user measurements from Chrome — before touching a single line of code. The lab is for iteration. The field is for truth.
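The field data is available programmatically. A minimal sketch of pulling the p75 values (the numbers Core Web Vitals assessments are based on) from the public Chrome UX Report API — the endpoint and response shape follow the published API, but `apiKey` and the origin are placeholders you'd supply yourself:

```javascript
// Query the Chrome UX Report API for real-user field data.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fetchFieldData(origin, apiKey) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin }),
  });
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  return res.json();
}

// Pull the 75th-percentile value for each metric in a CrUX record --
// p75 is the number Google grades against, not the average.
function extractP75(record) {
  const out = {};
  for (const [name, data] of Object.entries(record.metrics)) {
    if (data.percentiles) out[name] = data.percentiles.p75;
  }
  return out;
}
```

Starting from these numbers keeps the lab work honest: you iterate in Lighthouse, but you declare victory only when the field p75 moves.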
LCP: The One That Moves the Needle Most
Largest Contentful Paint is almost always the hero image or the above-the-fold heading. The fix is almost always the same: preload the LCP resource, use next/image with the priority prop if you're on Next.js, and keep your server response time (TTFB) under 200ms.
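In plain HTML, that preload step looks like the following sketch — /hero.webp is a placeholder path, and fetchpriority="high" is the standard attribute for bumping the image in the browser's fetch queue (next/image's priority prop does the equivalent for you):

```html
<!-- In the <head>: tell the browser about the LCP image
     before it discovers it while parsing the body. -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high">

<!-- The LCP element itself, with high fetch priority. -->
<img src="/hero.webp" width="1200" height="600"
     fetchpriority="high" alt="Hero">
```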
The subtle killer is render-blocking resources. A single third-party script loaded synchronously in the head can add 400ms to your LCP. Audit your script loading strategy before anything else.
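One way to make that audit concrete is to classify each script by whether it can block the parser. A sketch under simplified rules — `isRenderBlocking` and the plain descriptor objects are our own illustration, not a browser API:

```javascript
// A <script> in the <head> blocks HTML parsing (and therefore delays
// LCP) unless it opts out with async, defer, or type="module".
function isRenderBlocking(script) {
  if (script.async || script.defer) return false;
  if (script.type === 'module') return false; // modules defer by default
  return script.inHead === true;
}

// Audit a list of script descriptors and report the blockers.
function auditScripts(scripts) {
  return scripts.filter(isRenderBlocking).map((s) => s.src);
}

// Example: a synchronous third-party tag vs. a deferred first-party bundle.
const blockers = auditScripts([
  { src: 'https://third-party.example/tag.js', inHead: true },
  { src: '/bundle.js', inHead: true, defer: true },
]);
// blockers -> ['https://third-party.example/tag.js']
```

The usual remediation is equally mechanical: move the tag out of the head, or add defer (or async, if execution order doesn't matter).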
“Every millisecond of LCP improvement is a direct conversion rate improvement. This is not a vanity metric.”
CLS: The Silent UX Killer
Cumulative Layout Shift is the metric that makes users feel like the page is broken even when it loads fast. The culprits are almost always images without explicit dimensions, dynamically injected content above the fold, and web fonts that cause text reflow.
The fix for fonts is font-display: optional or preloading your font files. The fix for images is always to specify explicit width and height. The fix for dynamic content is reserving space with skeleton loaders or min-height constraints.
