PageSpeed Insights: Lab Data vs Field Data

FindMyTeam April 11, 2026

Understand why Lighthouse lab results and real-user field data can disagree, what each one is good for, and how to use both without talking yourself into the wrong fix.

One of the easiest ways to waste time in performance work is to treat every PageSpeed number as if it came from the same place.

It does not.

PageSpeed Insights mixes lab data and field data, and they answer different questions.

Lab data is a controlled test

Lab data in PageSpeed Insights comes from Lighthouse.

It is a simulated run. That makes it great for debugging because:

  • you can rerun it
  • it gives diagnostic audits
  • it shows how a page behaves under a fixed test setup

That is useful when you want to ask, "what is likely slowing this page down right now?"

Field data is real user experience

Field data comes from the Chrome UX Report, often called CrUX.

That is aggregated real-world experience data from opted-in users. It reflects what happened across real devices, real networks, and real sessions over a trailing 28-day period.

That is useful when you want to ask, "what are actual users experiencing over time?"
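A single PageSpeed Insights API response actually carries both halves. Here is a minimal sketch, assuming a response shaped like the real v5 API's top-level keys (`lighthouseResult` for lab, `loadingExperience` for field); the sample payload itself is hypothetical and heavily trimmed:

```python
# Hypothetical, trimmed payload mirroring the real PSI v5 response shape:
# "lighthouseResult" holds the lab run, "loadingExperience" holds CrUX field data.
sample_response = {
    "lighthouseResult": {
        "audits": {
            "largest-contentful-paint": {"numericValue": 2100.0}  # lab LCP, ms
        }
    },
    "loadingExperience": {
        "metrics": {
            "LARGEST_CONTENTFUL_PAINT_MS": {
                "percentile": 3400,  # p75 across real users, ms
                "category": "AVERAGE",
            }
        }
    },
}

def split_lab_and_field(psi: dict) -> tuple:
    """Pull LCP out of both halves of a PSI-style response. They need not agree."""
    lab = (psi.get("lighthouseResult", {})
              .get("audits", {})
              .get("largest-contentful-paint", {})
              .get("numericValue"))
    field = (psi.get("loadingExperience", {})
                .get("metrics", {})
                .get("LARGEST_CONTENTFUL_PAINT_MS", {})
                .get("percentile"))
    return lab, field

lab_lcp, field_lcp = split_lab_and_field(sample_response)
print(lab_lcp, field_lcp)  # the two numbers routinely disagree, and that is fine
```

The point of the sketch: one request, two answers, and nothing forces them to match.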

Why the numbers disagree

Because they are not measuring the same thing in the same way.

Lab data:

  • uses a simulated environment
  • runs on a fixed device and network profile
  • is good for debugging

Field data:

  • reflects real user traffic
  • varies across devices, networks, and geographies
  • is better for understanding actual experience

So yes, it is perfectly possible to see a decent Lighthouse result and still have weak field data, or the other way around.

A common example

Imagine a site with:

  • a warm CDN cache during the Lighthouse run
  • decent desktop hardware in the test environment
  • mostly mobile users on patchy networks in the real world

The lab result can look respectable while field data stays ugly for weeks.

That does not mean PageSpeed is broken. It means the environment changed.

What field data is especially good at

Field data is useful for questions like:

  • do real users see poor LCP?
  • are interactions still slow in the wild?
  • is the site unstable on actual devices?
  • is the experience at the 75th percentile still rough?

It is the better signal when you care about lived experience rather than a one-off technical snapshot.
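That 75th-percentile question has published answers for Core Web Vitals. For LCP, "good" is at or under 2.5 seconds and "poor" is over 4 seconds; a small sketch of that classification:

```python
# Core Web Vitals are assessed at the 75th percentile of field data.
# These are the published LCP thresholds; other metrics have their own.
def classify_lcp_p75(p75_ms: float) -> str:
    if p75_ms <= 2500:
        return "good"
    if p75_ms <= 4000:
        return "needs improvement"
    return "poor"

print(classify_lcp_p75(3400))  # "needs improvement"
```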

What lab data is especially good at

Lab data is better when you want to:

  • reproduce a problem
  • inspect audits
  • spot render-blocking resources
  • see which page element is hurting LCP
  • debug why a single page looks wrong today

That is why a good workflow uses both instead of arguing about which one is "real."
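Those diagnostics live in the Lighthouse report as named audits. The audit IDs below (`render-blocking-resources`, `largest-contentful-paint-element`) are real Lighthouse audits; the sample report is hypothetical and trimmed, so treat this as a sketch of the shape rather than a complete parser:

```python
# Hypothetical, trimmed Lighthouse-style report. Real reports carry
# these audit IDs with much larger "details" payloads.
sample_report = {
    "audits": {
        "render-blocking-resources": {
            "details": {"items": [{"url": "https://example.com/big.css"}]}
        },
        "largest-contentful-paint-element": {
            "details": {"items": [{"node": {"selector": "img.hero"}}]}
        },
    }
}

def blocking_urls(report: dict) -> list:
    """List the URLs the render-blocking audit flagged."""
    items = (report.get("audits", {})
                   .get("render-blocking-resources", {})
                   .get("details", {})
                   .get("items", []))
    return [i["url"] for i in items if "url" in i]

def lcp_selector(report: dict):
    """CSS selector of the element Lighthouse blamed for LCP, if any."""
    items = (report.get("audits", {})
                   .get("largest-contentful-paint-element", {})
                   .get("details", {})
                   .get("items", []))
    node = items[0].get("node", {}) if items else {}
    return node.get("selector")

print(blocking_urls(sample_report))  # which files to defer or inline
print(lcp_selector(sample_report))   # which element to optimise first
```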

Why field data can be missing

This confuses people constantly.

Field data is not guaranteed for every page or origin. If there are not enough eligible samples, you may see little or nothing.

That does not mean the page is fast. It usually means there is not enough public data to show a representative result yet.
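In an API response this shows up in a specific way: the field block can be absent entirely, or flagged as an origin-level fallback when page-level samples were too thin. A sketch of that check, assuming a PSI-style payload with the real `loadingExperience` and `origin_fallback` fields (sample inputs are hypothetical):

```python
# Three cases worth distinguishing before you trust a field number:
# no field data at all, origin-level data substituted for the page,
# or genuine page-level data.
def field_data_status(psi: dict) -> str:
    fe = psi.get("loadingExperience")
    if not fe or not fe.get("metrics"):
        return "no field data"
    if fe.get("origin_fallback"):
        return "origin-level only"
    return "page-level"

print(field_data_status({}))  # "no field data" -- not the same as "fast"
print(field_data_status({"loadingExperience": {
    "metrics": {"LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 3000}},
    "origin_fallback": True,
}}))  # "origin-level only" -- the whole site's data, not this page's
```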

Why lab scores move from run to run

That is also normal.

Lab runs can vary because of:

  • datacenter location
  • network conditions
  • resource contention
  • cache state
  • third-party scripts that behave differently from one run to the next

If the score wiggles a bit, do not overreact. Look for consistent patterns, not a single lucky or unlucky run.
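A cheap way to act on that advice: run the test several times and look at the median instead of any single run. The scores below are made-up example values:

```python
# Several lab runs of the same page; one run caught an unlucky moment.
import statistics

runs = [71, 68, 74, 69, 83]

print(statistics.median(runs))  # 71 -- the outlier barely moves the median
print(max(runs) - min(runs))    # 15 -- the spread a single run hides from you
```

If the median holds steady while individual runs bounce around, the page did not change; the environment did.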

How to use both without getting lost

Here is the workflow that holds up best:

  1. Use field data to see whether users actually have a problem
  2. Use lab data to reproduce and debug the likely cause
  3. Ship the fix
  4. Recheck field data after enough time has passed for the rolling window to catch up

That last step matters. Field data is not instant feedback.
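If it helps to make step 4 concrete: since the field window trails 28 days, the earliest date on which field data reflects only post-fix traffic is roughly the ship date plus 28 days. A trivial sketch, with a made-up deploy date:

```python
# Hypothetical deploy date; field data reflects a trailing 28-day window,
# so only after that window passes is the number fully post-fix.
from datetime import date, timedelta

ship_date = date(2026, 4, 11)
fully_post_fix = ship_date + timedelta(days=28)
print(f"Field data fully post-fix from: {fully_post_fix}")
```

Checking field data the day after a deploy and declaring the fix a failure is the most common way this step goes wrong.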

How this maps to FindMyIP.uk

On the Website Performance page, the synthetic timings and any available PageSpeed snapshots help you answer the immediate operational question.

But if you are making product decisions, especially for a site with international traffic, do not stop there. Compare that result against your own analytics, RUM, and Search Console data where you have it.

Lab vs field in one sentence

If you want the shortest possible version:

  • Lab data tells you what a controlled test saw
  • Field data tells you what real users have been living with

You need both, but for different reasons.

The mistake to avoid

Do not use a nice Lighthouse run to talk yourself out of a field problem.

And do not use weak field data alone to guess at a fix without a reproducible lab path.

That is how teams end up changing the wrong thing, then wondering why the real result never moved.