
INP First: A Core Web Vitals Checklist for 2026

Interaction to Next Paint replaced FID as the Core Web Vitals responsiveness metric. This checklist covers measurement, common root causes, and targeted fixes for production sites.

Tags: core-web-vitals, inp, performance, lighthouse

Google replaced First Input Delay with Interaction to Next Paint in March 2024, and by 2026 every production site needs to treat INP as the primary responsiveness signal. This guide walks through a field-tested checklist for diagnosing INP problems, prioritising fixes, and validating improvements. It is written for front-end engineers and site operators who already have a working site and want to make it faster for real users.

If you are building on the web fundamentals path, INP is one of the three Core Web Vitals you need to understand alongside LCP and CLS. The web development hub links to additional reading on HTML, CSS, and JavaScript patterns that affect all three metrics.

You will leave this article with a ranked checklist, a diagnostic workflow, common failure modes, and a validation strategy you can run against your own site this week.

Why INP replaced FID

FID measured only the delay before the browser started processing the first interaction. It ignored how long the event handler ran and how long it took for the browser to paint the result. A page could score well on FID while feeling sluggish to every user who clicked a button.

INP measures the full lifecycle: input delay, processing time, and presentation delay. It reports the longest interaction observed across the entire page visit, excluding extreme outliers (for pages with many interactions, one of the highest durations is discarded for every 50 interactions, which works out to roughly the 98th percentile). That makes it a much better proxy for how users actually experience responsiveness.

The INP-first checklist

Work through these items in order. Each one addresses a progressively deeper layer of the interaction pipeline.

1. Identify your worst interactions

Before optimising anything, measure what is actually slow. Use Chrome DevTools Performance panel, the Web Vitals JavaScript library, or your real-user monitoring tool to identify which interactions exceed the 200 ms "good" threshold.

  • Filter by interaction type: clicks, key presses, taps
  • Sort by duration descending
  • Note the page and element involved
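
If you are not using a RUM tool, a sketch of field collection with the native Event Timing API looks like this; the `rateINP` helper and the 16 ms `durationThreshold` are illustrative choices, with the rating boundaries taken from the published good/needs-improvement thresholds:

```javascript
// Classify an interaction duration against the published INP thresholds (ms).
function rateINP(ms) {
  if (ms <= 200) return 'good';
  if (ms <= 500) return 'needs-improvement';
  return 'poor';
}

// Browser-only: log slow interactions with the element that received them.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('event')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > 200) {
        console.log(entry.name, entry.duration, rateINP(entry.duration),
                    entry.target);
      }
    }
  }).observe({ type: 'event', durationThreshold: 16, buffered: true });
}
```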

2. Break up long tasks on the main thread

Any JavaScript task that runs longer than 50 ms blocks the main thread and delays interaction processing. The most common sources are:

  • Large framework re-renders triggered by state changes
  • Synchronous third-party scripts (analytics, chat widgets, ad tags)
  • Complex DOM manipulation after user input

Use scheduler.yield() or setTimeout chunking to break expensive work into smaller pieces. The goal is to keep every individual task under 50 ms so the browser can interleave input processing.
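
A minimal sketch of that chunking pattern; `processInChunks`, `items`, and `process` are hypothetical names, and the setTimeout fallback covers browsers without scheduler.yield():

```javascript
// Yield to the event loop so pending input can be handled between chunks.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield();            // keeps continuation priority
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array while keeping each main-thread task under ~50 ms.
async function processInChunks(items, process, budgetMs = 40) {
  let deadline = performance.now() + budgetMs;
  for (const item of items) {
    process(item);
    if (performance.now() > deadline) {
      await yieldToMain();               // let the browser handle input
      deadline = performance.now() + budgetMs;
    }
  }
}
```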

3. Defer non-critical work

Not every computation needs to happen in the event handler. Separate the visible response (updating the UI) from background work (analytics pings, prefetching, logging). Use requestIdleCallback or a zero-delay setTimeout for work that does not affect what the user sees; avoid queueMicrotask for this, since microtasks run before the browser gets a chance to paint and therefore still delay the visible response.
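
One way to make that split, sketched with a hypothetical `deferToIdle` helper (the setTimeout fallback covers Safari, which lacks requestIdleCallback):

```javascript
// Run non-urgent work after the browser has painted the visible response.
function deferToIdle(callback) {
  if (typeof requestIdleCallback !== 'undefined') {
    return requestIdleCallback(callback, { timeout: 2000 });
  }
  return setTimeout(callback, 0);        // fallback: still after this task
}

// Usage inside a click handler (element names are placeholders):
// status.textContent = 'Added to cart';        // visible response first
// deferToIdle(() => sendAnalytics('add'));     // background work later
```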

4. Reduce layout thrashing

Reading a layout property (like offsetHeight) and then writing to the DOM forces the browser to recalculate layout synchronously. Batch your reads and writes. Use requestAnimationFrame to schedule DOM writes after all reads are complete.
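
The batching pattern can be sketched with a hypothetical `batchLayout` helper: every read runs first, then every write is flushed together in one animation frame (pairing reads and writes by index is an assumption of this sketch):

```javascript
// Phase 1: run every read; phase 2: flush every write in one frame.
function batchLayout(reads, writes) {
  const measured = reads.map((read) => read());        // all reads first
  const flush = () => writes.forEach((write, i) => write(measured[i]));
  if (typeof requestAnimationFrame !== 'undefined') {
    requestAnimationFrame(flush);                      // all writes next frame
  } else {
    flush();                                           // non-browser fallback
  }
  return measured;
}

// Usage with real layout reads/writes (the selector is a placeholder):
// const cards = [...document.querySelectorAll('.card')];
// batchLayout(
//   cards.map((el) => () => el.offsetHeight),                     // reads
//   cards.map((el) => (h) => { el.style.minHeight = `${h}px`; })  // writes
// );
```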

5. Audit event handler complexity

Review your click, keydown, and pointer event handlers. Look for:

  • Handlers that trigger full-page re-renders
  • Handlers that call synchronous APIs (e.g., localStorage.getItem on every keystroke)
  • Handlers that attach to the document instead of the target element
  • Scroll and touch handlers registered without the passive option
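
Two of these fixes sketched together; `makeCachedGetter` is a hypothetical helper that avoids hitting localStorage on every keystroke:

```javascript
// Cache a synchronous storage read so repeated handler calls stay cheap.
function makeCachedGetter(storage, key) {
  let cached;
  return () => {
    if (cached === undefined) {
      cached = JSON.parse(storage.getItem(key) || '{}');  // read once
    }
    return cached;
  };
}

// Browser usage (element, key, and function names are placeholders):
// const getPrefs = makeCachedGetter(localStorage, 'prefs');
// input.addEventListener('keydown', () => applyPrefs(getPrefs()));
//
// Mark scroll/touch listeners passive so scrolling never waits on them:
// window.addEventListener('touchmove', onMove, { passive: true });
```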

6. Minimise rendering cost

After the event handler finishes, the browser must style, layout, paint, and composite the result. Reduce this cost by:

  • Limiting the number of DOM nodes affected by a state change
  • Using CSS contain to isolate repaint boundaries
  • Avoiding forced reflows inside animation loops
  • Keeping off-screen content out of the render tree with content-visibility: auto

Common failure modes

Failure mode 1: good lab scores, bad field scores. Lab tests typically measure a single interaction on a fresh page load. Real users interact dozens of times per session, and garbage collection, memory pressure, and accumulated DOM state make later interactions slower.

Failure mode 2: fixing the wrong interaction. Optimising a button that is already fast while ignoring a slow dropdown or modal opener is wasted effort. Always start with field data.

Failure mode 3: moving work to a Web Worker without measuring. Web Workers help with computation-heavy tasks, but the serialisation cost of postMessage can negate gains for small payloads. Benchmark before committing to the architecture change.

Failure mode 4: over-relying on code splitting. Lazy-loading a heavy module improves initial load but does not help if the module is needed during an interaction. The user still waits for the download, parse, and execute cycle.

Validation strategy

After making changes, validate in both lab and field:

  1. Lab: use Chrome DevTools → Performance → Interactions track to measure the specific interaction you changed. Confirm total duration is under 200 ms.
  2. Field: deploy and monitor for 7 days. Use CrUX, your RUM tool, or the Chrome UX Report API. Look at the 75th percentile (p75) INP value for the affected pages.
  3. Regression guard: set up a performance budget in your CI pipeline. Flag any PR that introduces a task longer than 50 ms in the critical interaction path.
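
One way to wire the regression guard, assuming Lighthouse CI: lab Lighthouse cannot observe real interactions, so Total Blocking Time serves as the lab proxy for long tasks in the critical path (the 200 ms budget here is an illustrative choice):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "total-blocking-time": ["error", { "maxNumericValue": 200 }]
      }
    }
  }
}
```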

The 200 ms "good" threshold is a ceiling, not a target. Aim for under 100 ms on critical interactions to leave headroom for device variability.

Trade-offs worth understanding

  • scheduler.yield() vs setTimeout(0): scheduler.yield() preserves task priority and resumes execution sooner. setTimeout always puts you at the back of the task queue. Prefer scheduler.yield() where supported.
  • Debouncing vs throttling input handlers: debouncing is better for search-as-you-type; throttling is better for scroll-linked updates. Neither is a substitute for making the handler itself fast.
  • Server-side rendering and INP: SSR improves LCP but can hurt INP if the hydration step attaches heavy event handlers after a long delay. Partial hydration or islands architecture can help.
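
The debounce half of that trade-off, as a minimal sketch:

```javascript
// Run fn only after `wait` ms of quiet, e.g. for search-as-you-type.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Usage (the input element and search function are placeholders):
// input.addEventListener('input',
//   debounce((e) => search(e.target.value), 250));
```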
