Web Vitals: Metrics & Tooling

Web Vitals

Web Vitals is a set of roughly ten metrics curated by an internal Google team that lets you evaluate and numerically quantify the performance of a web page. The value of this particular set of metrics, and the reason it’s popular with frontend developers, is that every one of these metrics directly affects the experience of people using the page. This means that if we improve these metrics, we are objectively improving the experience of users who visit our site.

From the full set of Web Vitals, a subset of core metrics has been singled out. These Core Web Vitals are of primary importance in terms of their impact on the site’s usability and user experience:

  • Largest Contentful Paint (LCP) – evaluates loading speed: how much time passes from the start of loading to the rendering of the largest (by size) element on the page
  • First Input Delay (FID) – evaluates the interactivity of the page: how much time passes between the user’s first interaction with the page (button tap, link click, key press, etc.) and the moment the browser can run the corresponding event handler
  • Cumulative Layout Shift (CLS) – assesses the visual stability of the page during loading: how much and how far visible elements shift while the page is loading

Google also offers certain guidelines as to what values of these metrics can be considered good and bad, and updates them periodically.
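
To get a feel for how these metrics are collected in practice, here is a minimal sketch that uses Google’s web-vitals library (v3 API) to report the three Core Web Vitals; the /analytics endpoint and the payload shape are illustrative assumptions, not part of any tool described here:

import {onCLS, onFID, onLCP} from 'web-vitals'

// Each callback fires when the final metric value is ready to be reported
function reportMetric(metric) {
  // metric.name is 'CLS', 'FID' or 'LCP'; metric.value is the measured value
  navigator.sendBeacon(
    '/analytics',
    JSON.stringify({name: metric.name, value: metric.value}),
  )
}

onCLS(reportMetric)
onFID(reportMetric)
onLCP(reportMetric)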

Why The Metrics Are Important

Since these indicators affect the usability of the site, by improving them, we reduce the number of users who leave the site for various reasons. Here is an example of a bad user story:

An abstract Vova, riding a train past Vyshny Volochek station, decides to read the comments on the spending diary he posted yesterday. He connects his Nokia 3310 to the train’s Wi-Fi and opens the main page of Tinkoff Journal. Article covers load so slowly that they might as well not load at all, so Vova has to search for the post by its title; fortunately, it was published not that long ago.

He finds the post he needs and happily taps the link, but at that very moment the banner’s styles finish loading and the banner jumps right under his thumb. Instead of the sought-after article, Vova lands on a course purchase page. Not wanting to repeat the whole process, he leaves the site.

On the one hand, the site worked, and it can be used in a pinch. But since its performance metrics are poor, it can be uncomfortable to use, and visitors may simply leave.

Another important reason to keep these metrics in check is that Google explicitly states it uses them when ranking sites. It does not disclose how strongly they affect a page’s position in search results, but here it is better to be safe than sorry.

Lighthouse

Lighthouse is an auditing tool that originated at Google and was eventually open-sourced. It lets you audit web pages, collecting all the Web Vitals metrics along the way. Lighthouse’s advantage is that it aggregates these metrics and generates several digestible scores from 0 to 100:

  • Performance
  • Accessibility
  • Best Practices
  • SEO
  • Progressive Web App

In addition to the scores themselves, Lighthouse shows the factors that led to these results and, in many cases, ways to remedy the situation.

Lighthouse is integrated into Google Chrome as a separate DevTools tab, so you can try running it yourself on any site. A short version of the report is also available through web-based testers such as PageSpeed Insights.
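
Lighthouse can also be run outside the browser, from Node.js. Here is a rough sketch using the lighthouse and chrome-launcher npm packages (recent ESM versions assumed; the audited URL and report path are illustrative):

import fs from 'node:fs'
import * as chromeLauncher from 'chrome-launcher'
import lighthouse from 'lighthouse'

// Start a headless Chrome instance for the audit
const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']})

// Run the audit against the page and request an HTML report
const result = await lighthouse('https://journal.tinkoff.ru', {
  port: chrome.port,
  output: 'html',
})

// result.lhr holds the raw results, result.report is the rendered HTML
fs.writeFileSync('./lighthouse-report.html', result.report)

await chrome.kill()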

Perfectum

Perfectum is a set of tools from Tinkoff’s performance team that consists of several components:

  • The @perfectum/client library for collecting metrics from real users
  • The @perfectum/synthetic library for running Lighthouse audits automatically
  • The @perfectum/cli command-line interface for running synthetic audits from the terminal

@perfectum/client

@perfectum/client plugs into the client code and sends measurement results from users’ browsers to a given endpoint. This is RUM, real user monitoring: the results come from real users of the application. An example of connecting the library:

import Perfectum from '@perfectum/client'
import {ENV_NAME} from '../../../constants-env'

export function initClientMetricsMonitoring() {
  Perfectum.init({
    // Endpoint that collects the measurements
    sendMetricsUrl:
      'https://endpoint.local/metrics',
    // Additional data sent along with the metrics, so that
    // different apps and environments can be told apart
    sendMetricsData: {
      group: 'tjournal',
      app: 'mercury-front',
      env: ENV_NAME === 'production' ? 'prod' : 'test',
    },
  })
}

@perfectum/synthetic

@perfectum/synthetic lets you automate Lighthouse audits of any number of application pages and generate reports. It works like this:

  1. Perfectum receives the audit configuration as input: the list of addresses for measurements, device slowdown settings, report format and location, etc.
  2. If the configuration contains commands to build and start the project, Perfectum executes them before starting the test and waits for the web server to start
  3. Perfectum launches Chrome in headless mode (without a user interface), goes through the list of URLs passed to it and runs a Lighthouse analysis on each of them. The analysis can be performed several times so that anomalous outliers don’t skew the results
  4. HTML reports are saved in the location specified in the configuration

The resulting reports can be used both to track performance over time and to find optimization opportunities and evaluate implemented optimizations.
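
Under the hood this is essentially the programmatic Lighthouse run shown earlier, wrapped in a loop. A rough sketch of steps 3 and 4 (not Perfectum’s actual implementation; the URL list, run count and median selection are illustrative):

import fs from 'node:fs'
import * as chromeLauncher from 'chrome-launcher'
import lighthouse from 'lighthouse'

const urls = {main: 'https://journal.tinkoff.ru'}
const numberOfAuditRuns = 3

const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']})

for (const [name, url] of Object.entries(urls)) {
  const runs = []
  for (let i = 0; i < numberOfAuditRuns; i++) {
    runs.push(await lighthouse(url, {port: chrome.port, output: 'html'}))
  }

  // Sort the runs by performance score and keep the median one,
  // so a single anomalous run does not end up in the report
  runs.sort(
    (a, b) =>
      a.lhr.categories.performance.score - b.lhr.categories.performance.score,
  )
  const median = runs[Math.floor(runs.length / 2)]

  fs.writeFileSync(`./performance-report-${name}.html`, median.report)
}

await chrome.kill()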

For the convenience of using this library from the terminal and in CI pipelines, a console tool @perfectum/cli was made.

@perfectum/cli (JSON config, CI)

To run Perfectum’s synthetic audit, you must install @perfectum/cli and use its audit command:

yarn global add @perfectum/cli
perfectum audit <options>

# or install locally
yarn add --dev @perfectum/cli
yarn exec perfectum audit -- <options>

In this case the configuration is taken from the perfectum.json file in the working directory, and additional parameters (e.g. the address list) can be passed as command-line arguments. An example JSON config:

{
  "synthetic": {
    "urls": {
      "main": "https://journal.tinkoff.ru"
    },
    "numberOfAuditRuns": 3,
    "browserConfig": {
      "logLevel": "silent",
      "chromeFlags": ["--disable-dev-shm-usage"]
    },
    "auditConfig": {
      "mobile": {
        "settings": {
          "throttling": {
            "rttMs": 150,
            "throughputKbps": 1638,
            "cpuSlowdownMultiplier": 4
          }
        }
      },
      "desktop": {
        "settings": {
          "throttling": {
            "rttMs": 40,
            "throughputKbps": 10240,
            "cpuSlowdownMultiplier": 1
          }
        }
      }
    },
    "reporterConfig": {
      "reportPrefixName": "performance-report",
      "reportOutputPath": "./.performance-reports",
      "reportFormats": ["html"]
    },
    "clearReportFilesDirectoryBeforeAudit": true
  }
}

Let me explain some non-obvious settings:

  • "chromeFlags": ["--disable-dev-shm-usage"] – in the pipeline, the audit runs in a docker container, which has no access to shared memory via /dev/shm. You could pass it through from the host, but it’s easier to disallow using it in Chrome
  • auditConfig – these settings are passed to Lighthouse when the analysis starts. In our case, I pass the system slowdown settings used by Google in the official Lighthouse release
    • rttMs – round-trip time in milliseconds, the time it takes for a packet from the client to reach the server and return back. This setting allows emulation of different network states, being more or less loaded
    • throughputKbps – bandwidth of the emulated network, controls the maximum throughputYou can see the emulated device and network status at the end of the Lighthouse report
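
To make the audit easy to trigger both locally and in CI, one option is to wrap it in a package.json script; the script name below is an assumption, not part of Perfectum:

{
  "scripts": {
    "audit:performance": "perfectum audit"
  }
}

A CI job can then run yarn run audit:performance after the build step and archive the ./.performance-reports directory as a build artifact.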

Synthetic Audit vs RUM

Synthetic tests with @perfectum/synthetic or Lighthouse and real user monitoring collected with, for example, @perfectum/client measure the same metrics. Still, each of these approaches has advantages the other lacks.

For example, real user monitoring gives us “field” data, and a lot of it. Thanks to it we can find out how performant our pages actually are for real users and what exactly we need to focus on to improve the experience for as many visitors to our site as possible.

At the same time, synthetic audits let us catch serious performance changes as early as the development phase, so we can evaluate how new functionality affects the performance of the site as a whole and of specific pages individually.

See Also