Frontend Masters Boost RSS Feed https://frontendmasters.com/blog Helping Your Journey to Senior Developer Wed, 10 Apr 2024 14:31:27 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.2 225069128 Understanding Interaction to Next Paint (INP) https://frontendmasters.com/blog/understanding-inp/ https://frontendmasters.com/blog/understanding-inp/#respond Tue, 09 Apr 2024 14:38:19 +0000 https://frontendmasters.com/blog/?p=1604 As of March 12th 2024, Interaction to Next Paint (INP) replaces First Input Delay (FID) as a Core Web Vital metric.

FID and INP are measuring the same situation in the browser: how clunky does it feel when a user interacts with an element on the page? The good news for the web—and its users—is that INP provides a much better representation of real-world performance by taking every part of the interaction and rendered response into account.

It’s also good news for you: the steps you’ve already taken to ensure a good score for FID will get you part of the way to a solid INP. Of course, no number—no matter how soothingly green or alarmingly red it may be—can be of any particular use without knowing exactly where it’s coming from. In fact, the best way to understand the replacement is to better understand what was replaced. As is the case with so many aspects of front-end performance, the key is knowing how JavaScript makes use of the main thread. As you might imagine, every browser manages and optimizes tasks a little differently, so this article is going to oversimplify a few concepts—but make no mistake, the more deeply you’re able to understand JavaScript’s Event Loop, the better equipped you’ll be for handling all manner of front-end performance work.

The Main Thread

You might have heard JavaScript described as “single-threaded” in the past, and while that’s not strictly true since the advent of Web Workers, it’s still a useful way to describe JavaScript’s synchronous execution model. Within a given “realm”—like an iframe, browser tab, or web worker—only one task can be executed at a time. In the context of a browser tab, this sequential execution is called the main thread, and it’s shared with other browser tasks—like parsing HTML, some CSS animations, and some aspects of rendering and re-rendering parts of the page.

JavaScript manages “execution contexts”—the code currently being executed by the main thread—using a data structure called the “call stack” (or just “the stack”). When a script starts up, the JavaScript interpreter creates a “global context” to execute the main body of the code—any code that exists outside of a JavaScript function. That global context is pushed to the call stack, where it gets executed.

When the interpreter encounters a function call during the execution of the global context, it pauses the global execution context, creates a “function context” (sometimes “local context”) for that function call, pushes it onto the top of the stack, and executes the function. If that function call contains a function call, a new function context is created for that, pushed to the top of the stack, and executed right away. The highest context in the stack is always the current one being executed, and when it concludes, it gets popped off the stack so the next highest execution context can resume—“last in, first out.” Eventually execution ends up back down at the global context, and either another function call is encountered and execution works its way up and back down through that and any functions that call contains, one at a time, or the global context concludes and the call stack sits empty.
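That “last in, first out” flow is easy to see with a quick trace. A minimal sketch (the function names here are arbitrary):

```javascript
// Each call pushes a new execution context onto the stack; the topmost
// context always runs to completion before the one beneath it resumes.
const trace = [];

function inner() {
  trace.push("inner start");
  trace.push("inner end"); // top of the stack: runs straight through
}

function outer() {
  trace.push("outer start");
  inner(); // outer() pauses while inner() occupies the top of the stack
  trace.push("outer end"); // resumes once inner() is popped off
}

trace.push("global start");
outer(); // the global context pauses while outer() (and inner()) run
trace.push("global end");

console.log(trace.join(" → "));
// → global start → outer start → inner start → inner end → outer end → global end
```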

Now, if “execute each function in the order they’re encountered, one at a time” were the entire story, a function that performs any kind of asynchronous task—say, fetching data from a server or firing an event handler’s callback function—would be a performance disaster. That function’s execution context would either block execution until the asynchronous task completed and its callback function kicked off, or suddenly interrupt whatever function context the call stack happened to be working through when that task completed. So alongside the stack, JavaScript makes use of an event-driven “concurrency model” made up of the “event loop” and “callback queue” (or “message queue”).

When an asynchronous task is completed and its callback function is called, the function context for that callback function is placed in a callback queue instead of at the top of the call stack—it doesn’t take over execution immediately. Sitting between the callback queue and the call stack is the event loop, which is constantly polling for both the presence of function execution contexts in the callback queue and room for it in the call stack. If there’s a function execution context waiting in a callback queue and the event loop determines that the call stack is sitting empty, that function execution context is pushed to the call stack and executed as though it were just called synchronously.

So, for example, say we have a script that uses an old-fashioned setTimeout to log something to the console after 500 milliseconds:

setTimeout( function myCallback() {
    console.log( "Done." );
}, 500 );

// Output: Done.

First, a global context is created for the body of the script and executed. The global execution context calls the setTimeout method, so a function context for setTimeout is created at the top of the call stack, and is executed—so the timer starts ticking. The myCallback function isn’t added to the stack, however, since it hasn’t been called yet. Since there’s nothing else for the setTimeout to do, it gets popped off the stack, and the global execution context resumes. There’s nothing else to do in the global context, so it pops off the stack, which is now empty.

Now, at any point during this sequence of events our timer will elapse, calling myCallback. At that point, the callback function is added to a callback queue instead of being added to the stack and interrupting whatever else was being executed. Once the call stack is empty, the event loop pushes the execution context for myCallback to the stack to be executed. In this case, the main thread is done working long before the timer elapses, and our callback function is added to the empty call stack right away:

const rightNow = performance.now();

setTimeout( () => {
    console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.` );
}, 500);

// Output: The callback function was executed after 501.7000000476837 milliseconds.

Without anything else to do on the main thread, our callback fires on time, give or take a millisecond or two. But a complex JavaScript application could have tens of thousands of function contexts to power through before reaching the end of the global execution context—and as fast as browsers are, these things take time. So, let’s fake an overcrowded main thread by keeping the global execution context busy with a while loop that counts to a brisk five hundred million—a long task.

const rightNow = performance.now();
let i = 0;

setTimeout( function myCallback() {
  console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.`);
}, 500);

while( i < 500000000 ) {
  i++;
}
// Output: The callback function was executed after 1119.5999999996275 milliseconds.

Once again, a global execution context is created and executed. A few lines in, it calls the setTimeout method, so a function execution context for the setTimeout is created at the top of the call stack, and the timer starts ticking. The execution context for the setTimeout is completed and popped off the stack, the global execution context resumes, and our while loop starts counting.

Meanwhile, our 500ms timer elapses, and myCallback is added to the callback queue—but this time the call stack isn’t empty when it happens, and the event loop has to wait out the rest of the global execution context before it can move myCallback over to the stack. Compared to the complex processing required to handle an entire client-rendered web page, “counting to a pretty high number” isn’t exactly the heaviest lift for a modern browser running on a modern laptop, but we still see a huge difference in the result: in my case, it took more than twice as long as expected for the output to show up.

Now, we’ve been using setTimeout for the sake of predictability, but event handlers work the same way: when the JavaScript interpreter encounters an event handler in either the global or a function context, the event becomes bound, but the callback function associated with that event listener isn’t added to the call stack because that callback function hasn’t been called yet—not until the event fires. Once the event does fire, that callback function is added to the callback queue, just like our timer running out. So what happens if an event callback kicks in, say, while the main thread is bogged down with long tasks buried in the megabytes’ worth of function calls required to get a JavaScript-heavy page up and running? The same thing we saw when our setTimeout elapsed: a big delay.
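To see the consequences, here’s a sketch of an “instant” event competing with a long task. Since this sample needs to run outside a browser, a zero-delay timer stands in for something like a button’s click handler, and the 200 millisecond figure is arbitrary:

```javascript
const pageStart = performance.now();

// Stand-in for an event handler callback: this zero-delay timer puts
// its callback into the callback queue almost immediately, just like a
// click that happens the moment the page starts running its scripts.
setTimeout(function onClick() {
  const delay = performance.now() - pageStart;
  console.log(`"Click" handled after ${delay.toFixed(1)} milliseconds`);
}, 0);

// A long task: keep the call stack busy for roughly 200 milliseconds.
while (performance.now() - pageStart < 200) {
  // busy-wait; nothing in the callback queue can run until this ends
}

// Even though the "click" fired almost instantly, its callback can't
// reach the call stack until the long task finishes, so the logged
// delay will be a little over 200 milliseconds rather than near zero.
```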

If a user clicks on a button with a bound event handler right away, the callback function’s execution context is created and added to the callback queue, but it can’t be moved to the call stack until there’s room for it. A few hundred milliseconds may not seem like much on paper, but any delay between a user interaction and the result of that interaction can make a huge difference in perceived performance—ask anyone who played too much Nintendo as a kid. That’s First Input Delay: a measurement of the delay between the first point where a user could trigger an event handler, and the first opportunity where that event handler’s callback function could be called, once the main thread has become idle. A page bogged down by parsing and executing tons of JavaScript just to get rendered and functional won’t have room in the call stack for event handler callbacks right away, meaning a longer delay between a user interaction and the callback function being invoked, and what feels like a slow, laggy page.

That was First Input Delay—an important metric for sure, but it wasn’t telling the whole story in terms of how a user experiences a page.

What is Interaction to Next Paint?

There’s no question that a long delay between an event and the execution of that event handler’s callback function is bad, sure—but in real-world terms, “an opportunity for a callback function’s execution context to be moved to the call stack” isn’t exactly the result a user is looking for when they click on a button. What really matters is the delay between the interaction and the visible result of that interaction.

That’s what Interaction to Next Paint sets out to measure: the delay between a user interaction and the browser’s next paint—the earliest opportunity to present the user with visual feedback on the results of the interaction. Of all the interactions measured during a user’s time on a page, the one with the worst interaction latency is presented as the INP score—after all, when it comes to tracking down and remediating performance issues, we’re better off working with the bad news first.

All told, there are three parts to an interaction, and all of those parts affect a page’s INP: input delay, processing time, and presentation delay.

Chart explaining the three parts of an interaction: a long task delays the start of the event handlers (Input Delay), then the handlers run (Processing Time, the longest bar), then rendering work happens (Presentation Delay) before the Next Paint.

Input Delay

How long does it take for our event handlers’ callback functions to find their way from the callback queue to the main thread?

You know all about this one, now—it’s the same metric FID once captured. INP goes a lot further than FID did, though: while FID was only based on a user’s first interaction, INP considers all of a user’s interactions for the duration of their time on the page, in an effort to present a more accurate picture of a page’s total responsiveness. INP tracks any clicks, taps, and key presses on hardware or on-screen keyboards—the interactions most likely to prompt a visible change in the page.

Processing Time

How long does it take for the callback function associated with the event to run its course?

Even if an event handler’s callback function kicks off right away, that callback will be calling functions that call more functions, filling up the call stack and competing with any other work taking place on the main thread.

const myButton = document.querySelector( "button" );
const rightNow = performance.now();

myButton.addEventListener( "click", () => {
    let i = 0;
    console.log( `The button was clicked ${ performance.now() - rightNow } milliseconds after the page loaded.` );
    while( i < 500000000 ) {
        i++;
    }
    console.log( `The callback function was completed ${ performance.now() - rightNow } milliseconds after the page loaded.` );
});

// Output: The button was clicked 615.2000000001863 milliseconds after the page loaded.
// Output: The callback function was completed 927.1000000000931 milliseconds after the page loaded.

Assuming there’s nothing else bogging down the main thread and delaying this event handler’s callback function, this click handler would have a great score for FID—but the callback function itself contains a huge, slow task, and could take a long time to run its course and present the user with a result. A slow user experience, inaccurately summed up by a cheerful green result.

Unlike FID, INP factors in these delays as well. User interactions trigger multiple events—for example, a keyboard interaction will trigger keydown, keypress, and keyup events. For any given interaction, INP will capture a result for the event with the longest “interaction latency”—the delay between the user’s interaction and the rendered response.
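As a toy model of that selection logic (the event names are real, but every number here is made up): each interaction’s latency is the longest duration among the events it triggered, and the reported score is the worst of those latencies:

```javascript
// Made-up per-event durations (in milliseconds) for two interactions.
const interactions = [
  { type: "tap", events: { pointerdown: 40, pointerup: 55, click: 90 } },
  { type: "key", events: { keydown: 30, keypress: 25, keyup: 20 } },
];

// Each interaction is scored by its slowest event...
const latencies = interactions.map(
  (interaction) => Math.max(...Object.values(interaction.events))
);

// ...and the reported INP is the worst interaction of the session.
const inp = Math.max(...latencies);

console.log(`Interaction latencies: ${latencies.join(", ")}ms; INP: ${inp}ms`);
// → Interaction latencies: 90, 30ms; INP: 90ms
```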

Presentation Delay

How quickly can rendering and compositing work take place on the main thread?

Remember that the main thread doesn’t just process our JavaScript; it also handles rendering. The tasks created by the event handler are now competing for the main thread with any number of other processes, including the layout and style calculations needed to paint the results.

Testing Interaction to Next Paint

Now that you have a better sense of what INP is measuring, it’s time to start gathering data out in the field and tinkering in the lab.

For any websites included in the Chrome User Experience Report dataset, PageSpeed Insights is a great place to start getting a sense of your pages’ INP. Your best bet for gathering real-world data from across an unknowable range of connection speeds, device capabilities, and user behaviors is likely to be the Chrome team’s web-vitals JavaScript library (or a performance-focused third-party user monitoring service).

Screenshot of PageSpeed Insights showing a test for frontendmasters.com, showing off all the metrics like LCP, INP, CLS, etc. All Core Web Vitals are "green" / "passed"

Then, once you’ve gained a sense of your pages’ biggest INP offenders from your field testing, the Web Vitals Chrome Extension will allow you to test, tinker, and retest interactions in your browser—not as representative as field data, but vital for getting a handle on any thorny timing issues that turned up in your field testing.

Screenshot of output of Web Vitals Chrome Extension tester for Boost showing Largest Contentful Paint, Cumulative Layout Shift, etc.

Optimizing Interaction to Next Paint

Now that you have a better sense of how INP works behind the scenes and you’re able to track down your pages’ biggest INP offenders, it’s time to start getting things in order. In theory, INP is a simple enough thing to optimize: get rid of those long tasks and avoid overwhelming the browser with complex layout re-calculations.

Unfortunately, a simple concept doesn’t translate to any quick, easy tricks in practice. Like most front-end performance work, optimizing Interaction to Next Paint is a game of inches—testing, tinkering, re-testing, and gradually nudging your pages toward something smaller, faster, and more respectful of your users’ time and patience.
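The most common cure for long tasks is to break them up: do a chunk of work, yield back to the event loop so queued callbacks (like event handlers) can run, then pick up the next chunk. A sketch of the pattern using a zero-delay timer to yield (newer Chrome versions also offer scheduler.yield() for exactly this purpose; the chunk size here is arbitrary):

```javascript
// Resolve on a zero-delay timeout: awaiting this empties the call
// stack, giving anything waiting in the callback queue a turn.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// The "count to five hundred million" long task from earlier, split
// into slices so the main thread is never blocked for long.
async function countInChunks(total, chunkSize) {
  let i = 0;
  while (i < total) {
    const chunkEnd = Math.min(i + chunkSize, total);
    while (i < chunkEnd) i++; // one manageable slice of the work
    await yieldToMain();      // queued callbacks can run here
  }
  return i;
}

countInChunks(500_000_000, 10_000_000).then((count) => {
  console.log(`Counted to ${count} without one giant long task.`);
});
```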

Capo.js: A five minute web performance boost https://frontendmasters.com/blog/capo-js-a-five-minute-web-performance-boost/ https://frontendmasters.com/blog/capo-js-a-five-minute-web-performance-boost/#comments Fri, 01 Mar 2024 16:25:08 +0000 https://frontendmasters.com/blog/?p=1086 You want a quick web performance win at work that’s sure to get you a promotion? Want it to only take five minutes? Then I got you.

Screenshot of the Capo.js console output showing rows of colored rectangles for the Actual order and Sorted order of elements in the head.

Capo.js is a tool to get your <head> in order. It’s based on some research by Harry Roberts that shows how something as seemingly insignificant as the order of elements in your <head> tag can make your page load up to 7 seconds slower! From pragma directives, to async scripts, to stylesheets, to open graph tags, it’s easy to mess up and can have consequences. Capo.js will show you the specific order of elements to make your <head> and your page a little (or a lotta) bit faster.

Usage

  1. Head over to Capo.js homepage
  2. Install the Capo.js Chrome Extension (you can also use it as a DevTools Snippet or bookmarklet)
  3. Run Capo.js

Capo.js will log two colored bar charts in your JS console: your “Actual” <head> order and a “Sorted” <head> order. You can expand each chart to see more details. If you see a big gray bar in the middle of your “Actual” bar chart, then you’re leaving some quick wins on the table. The “Sorted” dropdown will show you the corrected order and even give you the code. But in the real world you probably need to futz with a layout template or your _header.php to get it reorganized.

Installing Capo.js takes about a minute, rearranging your <head> takes another minute. Honestly the longest part is making the Pull Request.

EDITOR INTERVENTION

[Chris busts through the door.]

OK fine Dave, I’ll give it a shot right here on Boost itself.

I installed the Chrome Extension and ran it and got this little popup:

"Before" sort order, scattered rectangles of various colors

At first I was a little confused, like this was some fancy code that Web Perf people immediately understand but I was out of the loop on. But actually it’s just a visualization of the order of things (top: actual, bottom: ideal). As a little UX feedback, it should say “Open your console for more information” because that’s where all the useful stuff is.

I found it most useful to look at the “Sorted” output (which is what you should be doing) and then try to get my source code to match that. I think I generally did OK:

"After" sort order, scattered rectangles of various colors, slightly less scattered than the previous image

I wasn’t able to get it perfect because of WordPress. A decent chunk of what goes into your <head> in WordPress comes from the output of the <?php wp_head(); ?> function. I’m sure it’s technically possible to re-order the output in there, but that was more effort than I felt it was worth right at this minute.

Take your wins, that’s what I always say.

Real-World Usage of content-visibility https://frontendmasters.com/blog/real-world-usage-of-content-visibility/ https://frontendmasters.com/blog/real-world-usage-of-content-visibility/#respond Wed, 21 Feb 2024 19:57:41 +0000 https://frontendmasters.com/blog/?p=991 Jeremey Keith uses the little-used CSS property content-visibility to improve the performance on a fairly heavy page.

It works a treat. I did a before-and-after check with pagespeed insights on the page for Out On The Ocean. The “style and layout” part of the main thread work went down considerably. Total blocking time went from more than 600 milliseconds to less than 400 milliseconds.

Not a bad result for a little bit of CSS!
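The core of the technique fits in a couple of lines of CSS. A minimal sketch, where the section selector and the estimated height are placeholders to adapt to your own markup:

```css
section {
  /* Skip layout and paint work for sections that are off-screen */
  content-visibility: auto;
  /* Reserve an estimated size so the scrollbar doesn't jump as
     sections enter the viewport and get laid out for real */
  contain-intrinsic-size: auto 500px;
}
```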

That’s what I’d like to see in more guest writing around here: explanations of real-world implementations to make websites better.

Why do reflows negatively affect performance? https://frontendmasters.com/blog/why-do-reflows-negatively-affect-performance/ https://frontendmasters.com/blog/why-do-reflows-negatively-affect-performance/#comments Fri, 19 Jan 2024 20:08:08 +0000 https://frontendmasters.com/blog/?p=525 Layout recalculations, or “reflows”, happen when we change a layout-related property, such as an element’s width, height, or margin. Reflows can happen accidentally or on purpose.  

For instance, you might want to have a feature that switches from a grid view to a list view.  In that case, triggering a reflow is essential for functionality. 

A less ideal situation might be ads that are added to the page by potentially slow-loading JavaScript. That JavaScript is likely to resize elements as the ads are injected, causing a reflow on every page load during a user’s visit. This type of reflow can negatively impact the user experience in several ways, like interrupting a reading flow and delaying other interactions.


When we change a layout-related property, we trigger the pixel pipeline. This is a series of steps that the browser’s rendering engine performs to show the actual changes on the screen.

The pixel pipeline involves several steps:

  1. Layout (Reflow): The browser recalculates the geometry of elements affected by layout-related property changes. This calculation determines the positioning of elements in the browser window.
  2. Paint: Following layout recalculations, the browser renders the visual features of each element, like colors, borders, and shadows.
  3. Composite: The browser then combines the painted layers to produce the final image displayed on the screen.

Reflows can be heavy on the CPU as they involve complex algorithms for processing CSS rules and determining element alignment and constraints within the page’s layout. 


The CPU can be seen as the “brain” of the computer, and it’s able to process instructions in a sequence, or a single thread. The issue with reflows arises from a few factors: 

  • The CPU has to interpret and execute CSS rules to determine the geometry of elements on the page. This can be a heavy operation requiring complex computations and logic assessments.
  • The CPU can handle multiple tasks but can only focus on one task at a time. During a reflow, other important operations like user input handling and script execution may be deferred, which can result in delays.
  • Since pages are often deeply nested, a single layout change can cause a cascade of recalculations and updates, putting additional burden on the CPU.
  • Though more low-level, the CPU must often access and alter memory for computations and updating the render tree and paint. Inefficient memory access might result in cache misses and slower memory retrievals, further slowing down the rendering process.
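The deep-nesting point can be made concrete with a toy model. This is not how a real layout engine works, but it shows how quickly the work fans out when one geometry change near the root of the tree forces descendants to be revisited:

```javascript
// Build a uniform tree: `depth` levels below the root, each node with
// `childrenPerNode` children.
function makeTree(depth, childrenPerNode) {
  return {
    children: depth === 0
      ? []
      : Array.from({ length: childrenPerNode }, () =>
          makeTree(depth - 1, childrenPerNode)),
  };
}

// "Reflow" a subtree: recompute this node's geometry, then recurse.
// Returns the number of nodes visited.
function reflow(node) {
  return 1 + node.children.reduce((sum, child) => sum + reflow(child), 0);
}

const page = makeTree(5, 4); // a modest page: 5 levels deep, 4 children per node
console.log(`One layout change near the root touches ${reflow(page)} nodes.`);
// → One layout change near the root touches 1365 nodes.
```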

So, why do reflows negatively affect performance? 

Reflows demand substantial CPU resources to run computations and update the render tree. This process reduces overall speed and user experience. 

As the CPU can become overwhelmed with these tasks, its ability to handle other tasks like user interaction and script execution decreases, which results in slower rendering and reduced responsiveness. 

The inability to handle other tasks can result in noticeable delays, layout shifts, and unresponsive pages, which directly affect how users interact with the site. Such issues are detrimental to user experience and impact key metrics like Cumulative Layout Shift (CLS), one of Google’s Core Web Vitals which are part of your site’s Lighthouse scores as well. CLS can even influence your site’s SEO performance.


Did you know all that? What else do you know about Advanced Web Development? Wanna test yourself? Take Lydia’s Advanced Web Development Quiz. The 10th question is all about the cost of rendering and might be trickier than you think.

Million.js 3.0 https://frontendmasters.com/blog/million-js-3-0/ https://frontendmasters.com/blog/million-js-3-0/#comments Mon, 15 Jan 2024 21:42:29 +0000 https://frontendmasters.com/blog/?p=509 Million.js caught my eye a few months back because of the big claim it makes: Make React 70% faster. I ended up listening to a podcast with the creator, and the meat of it is: it removes the need for “diffing” the virtual DOM that React uses when re-rendering to find what needs to change, which can be slow. I see the project still has momentum, now reaching 3.0.

Skeptical? Good — it’s your job to be skeptical. If this is so amazing, why doesn’t React itself do it? Potential answer: it requires a compiler. That’s a pretty big directional shift for React, and I could see them never wanting to go down that road. Although I say that, I’m even more surprised that React will have server requirements (presumably with server components, right?). And do I actually need this? How complex does my project need to be before I can actually feel React being slow in diffing? What is my technical debt here? How much of my code base has to change to accommodate this? What if this project dies out, where does that leave me? Are there any entirely unbiased endorsements or critical reviews out there to find?

I can’t answer all this for you. I just bring it up because it’s my goal with Boost to get you thinking like you need to think to become a senior developer, and this is part of how.
