Top 10 Techniques for JavaScript Performance Optimization
When it comes to optimizing your JavaScript code, there are countless tools and techniques available, and plenty of articles out there offer valuable tips and tricks. However, many of them don’t provide much context for implementing these approaches. In this article, we will not only cover the most common techniques we found but also provide examples and break down how they can be implemented in your applications. Here are the top 10 techniques that we found to optimize your JavaScript code.
- Minimize DOM manipulation
- Use efficient looping patterns
- Optimize object and array access
- Leverage browser caching
- Use throttling and debouncing for event handling
- Leverage asynchronous code
- Optimize network requests (lazy loading)
- Defer or delay non-critical JavaScript
- Minify and bundle JavaScript files
- Remove unused dependencies
1. Minimize DOM manipulation
If you’re unfamiliar with the phrase “DOM manipulation” or you’ve heard it but are still unsure of what it actually is, take a look at this simple example. The DOM (Document Object Model) is a programming interface that represents a web page as a tree of objects.
Each element (as seen in our HTML file below) is an object (node). When we modify the content or behavior of these objects in our tree (the DOM), we are practicing DOM manipulation.
Below is an example HTML file. This document displays the text “Hello, World” and a button with the value “Change Text”.
In our JavaScript file, the “getElementById” method on the document object accesses the element in our HTML with the id of “greeting” (the h1 element) and assigns it to “greetingByElement”. We then do the same on the next line for “changeTextBtn” (our button element). Finally, we add an “event listener” to our button so that when it’s clicked, our h1 element will display “Hello, JavaScript!”.
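A minimal sketch of that markup and script might look like this. The ids match the description above, and the browser-only wiring is guarded so the logic itself can run anywhere:

```javascript
// HTML assumed:
//   <h1 id="greeting">Hello, World</h1>
//   <button id="changeTextBtn">Change Text</button>

// Wire up the click handler; taking `doc` as a parameter keeps this testable
function wireGreeting(doc) {
  const greetingByElement = doc.getElementById('greeting');
  const changeTextBtn = doc.getElementById('changeTextBtn');

  changeTextBtn.addEventListener('click', () => {
    greetingByElement.textContent = 'Hello, JavaScript!';
  });
}

// In a browser, run it against the real document
if (typeof document !== 'undefined') {
  wireGreeting(document);
}
```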
Manipulating the DOM too frequently can be costly because every time the DOM is changed, the browser may need to recalculate styles and layout (reflow) and redraw parts of the page (repaint). By minimizing DOM manipulations or batching them together, you can reduce the number of reflows and repaints, resulting in smoother performance. Document “fragments” can be used to hold DOM elements temporarily in memory before appending them to the document in a single operation. Think of it as saving up all of the changes you’d like to make and then applying them in one fell swoop. Below, we use a “for loop” to iterate through some data and create 100 new div elements with some content in them. We then append each of those divs to a “fragment”. Once the loop is complete, the “fragment” containing all of our new divs is appended to our DOM. Now the DOM re-renders once with all of our new divs instead of 100 times.
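Here is a sketch of that pattern. The div contents are placeholder text, and the final append is guarded so it only touches the real DOM in a browser:

```javascript
// Create `count` divs inside a detached fragment (no reflows while building)
function buildDivs(doc, count) {
  const fragment = doc.createDocumentFragment();
  for (let i = 0; i < count; i++) {
    const div = doc.createElement('div');
    div.textContent = `Item ${i + 1}`;
    fragment.appendChild(div); // appends happen in memory, not in the page
  }
  return fragment;
}

if (typeof document !== 'undefined') {
  // One append to the live DOM: a single reflow/repaint instead of 100
  document.body.appendChild(buildDivs(document, 100));
}
```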
Using methods like “requestAnimationFrame”, “setTimeout”, and “setInterval”, and utilizing the “innerHTML” property, may also help minimize the number of DOM reflows and repaints by batching updates. Libraries like React are worth considering when building out your front end as they abstract away the DOM update process and optimize when and how updates are made by using a “virtual DOM”.
2. Use efficient looping patterns
JavaScript offers several ways to iterate through arrays and other iterable objects. For example, “forEach" is a higher order function that will iterate over an array and execute a function on each element within the array. Below, this method will iterate through “array” and log each element to the console.
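For example (the array contents here are arbitrary):

```javascript
const array = [1, 2, 3, 4, 5];

// forEach calls the provided function once for each element
array.forEach((element) => {
  console.log(element);
});
```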
While methods like “forEach”, “map” or “reduce” can be useful and even fun to implement in your code, they are not always the most efficient option, especially when iterating through a lot of data. Often, a classic “for loop” or “for…of loop” is a better choice. This is because these higher-order methods iterate through the entire data set and do not allow for early exits (“break”) as a “for loop” would. Below, this “for loop” will iterate through “array” and log each element to the console. If the element is even, we exit the loop with “break”. Exiting a loop like this would not be possible using “forEach”.
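That early exit might look like this (the sample data is arbitrary):

```javascript
const array = [1, 3, 5, 6, 7];

for (let i = 0; i < array.length; i++) {
  console.log(array[i]);
  if (array[i] % 2 === 0) {
    break; // stop as soon as we hit an even number; forEach can't do this
  }
}
```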
Below, we see a “for…of” loop. This would be used when you want to iterate through an array but do not need access to each element by its index.
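For instance:

```javascript
const fruits = ['apple', 'banana', 'cherry'];

// for...of hands you each value directly; no index bookkeeping required
for (const fruit of fruits) {
  console.log(fruit);
}
```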
A “for…in” loop is designed for iterating over the keys (property names) of an object. These loops can technically iterate over arrays, but it’s generally not recommended because they will also iterate over any custom properties or methods added to the array, as well as properties added to its prototype chain.
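Used on a plain object, where it belongs, it looks like this (the object is made up):

```javascript
const user = { name: 'Ada', role: 'engineer', active: true };

// for...in iterates over the object's enumerable property names (keys)
for (const key in user) {
  console.log(`${key}: ${user[key]}`);
}
```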
A classic “for loop” can often be the best option for JavaScript performance optimization, as it’s fast and can exit early when needed. While using these loops, it’s important to use “break” or “return” when you can and to be careful with nested loops, as they become expensive for larger datasets. Higher-order functions like “map”, “filter”, and “reduce” might be the right choice for readability but may introduce unnecessary computational costs. Making informed decisions regarding array and object iteration ensures that your code performs well, even with large data sets.
3. Optimize object and array access
Optimizing how you retrieve the information stored in your data structures prevents your JavaScript from doing extra work. Sure, looping is often necessary, but grabbing your data directly instead of iterating through it can significantly speed up your code. Objects and arrays are containers for related information, and the better you get at pulling out the bits that you need, the smoother and snappier your applications will be.
Accessing data in objects or arrays repeatedly can often be an area where your code is working too hard. If you need to access the same property or element multiple times in a loop, it can be more efficient to store it in a variable. Similarly, accessing the array’s length property inside of a loop is a common practice, but by doing this, the length is recalculated on every iteration. By caching both the data and the array length, you can make your loops cleaner and faster.
When items[i] and items.length are accessed repeatedly inside a loop, the same lookups happen over and over. Caching these values in variables (say, “item” and “len”) reduces repetitive lookups and speeds up the loop. This is especially helpful when working with large datasets.
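A sketch of both versions (the sample data is arbitrary):

```javascript
const items = ['alpha', 'beta', 'gamma'];

// Inefficient: items.length and items[i] are looked up on every iteration
for (let i = 0; i < items.length; i++) {
  console.log(items[i].toUpperCase(), items[i].length);
}

// Better: cache the length once, and the current element once per pass
for (let i = 0, len = items.length; i < len; i++) {
  const item = items[i];
  console.log(item.toUpperCase(), item.length);
}
```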
Storing data in an object instead of an array can also be a great way to optimize your JavaScript performance. Iterating through an array (especially a large one) can take up a lot of extra time.
Code that iterates through an array of users checking whether the current user’s “id” matches 2 runs in linear O(n) time: the time it takes to complete depends on the length of the array. It’s far more efficient to store the users in an object keyed by “id” and access the user with the “id” of 2 directly, which runs in constant O(1) time, as no iteration is involved.
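Side by side (the user data is made up):

```javascript
// Linear O(n): scan the array until the id matches
const userList = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
  { id: 3, name: 'Carol' },
];
let found;
for (let i = 0; i < userList.length; i++) {
  if (userList[i].id === 2) {
    found = userList[i];
    break;
  }
}

// Constant O(1): key the same data by id and jump straight to it
const usersById = {
  1: { id: 1, name: 'Alice' },
  2: { id: 2, name: 'Bob' },
  3: { id: 3, name: 'Carol' },
};
const direct = usersById[2];

console.log(found.name, direct.name); // both find Bob
```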
Deeply nested objects can be a lot like those Russian dolls where you have to open up each one, layer by layer, to reach the doll at the core. Similarly, accessing properties within nested objects can require a lot of effort from your JavaScript. To take some of that load off, we can flatten our structure.
One approach is to declare a recursive function that flattens a deeply nested object so that all of the properties are accessible as top-level keys. If this kind of code looks a bit intimidating, don’t worry! All you need to know for now is that, once our object is flattened, we can access its properties directly.
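One possible implementation, joining nested paths with dots (the shape of the sample object is made up):

```javascript
// Recursively copy every leaf value onto `result` under a dotted path,
// e.g. { user: { name: 'Ada' } } becomes { 'user.name': 'Ada' }
function flatten(obj, prefix = '', result = {}) {
  for (const key in obj) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (obj[key] !== null && typeof obj[key] === 'object') {
      flatten(obj[key], path, result); // dig into the next layer
    } else {
      result[path] = obj[key]; // leaf value: store it at the top level
    }
  }
  return result;
}

const nested = { user: { address: { city: 'Oslo', zip: '0150' } } };
const flat = flatten(nested);

// Direct, single-level access instead of nested lookups
console.log(flat['user.address.city']); // 'Oslo'
```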
Whenever you can, go for direct property access instead of using dynamic keys. Dynamic keys (like object[key]) can be useful when you need to figure out the key at runtime, but the browser has to calculate the key before grabbing the value. Directly accessing a property (i.e., object.property) can be a better option for optimizing speed. Dynamic keys are still super useful when iterating over an object’s keys or dealing with property names that aren’t set in stone. In those cases, having the flexibility of dynamic access is more important than direct access.
Directly accessing a property can also simplify the debugging process, as the property names are stated explicitly rather than hidden in variables.
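A quick comparison (the object here is made up):

```javascript
const config = { theme: 'dark', fontSize: 14 };

// Direct access: the property name is fixed and visible in the code
console.log(config.theme); // 'dark'

// Dynamic access: the key is computed at runtime
const key = 'fontSize';
console.log(config[key]); // 14

// Dynamic keys are the right tool when iterating over an object's keys:
for (const prop in config) {
  console.log(prop, config[prop]);
}
```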
If you’re looking for another way to optimize your JavaScript in frequent look-up situations or unique collections, consider Maps and Sets:
- Maps allow keys to be of any type (even objects or arrays) and maintain the order in which keys are inserted.
- Sets store unique values of any type and ensure that there are no duplicates. The Set object has methods like “add” and “has”, which perform in constant time O(1).
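A quick taste of both (the values are arbitrary):

```javascript
// Map: keys can be of any type, and insertion order is preserved
const pageCache = new Map();
const routeKey = { route: '/home' };  // even an object works as a key
pageCache.set(routeKey, '<h1>Home</h1>');
console.log(pageCache.get(routeKey)); // '<h1>Home</h1>'

// Set: stores unique values; duplicates are silently ignored
const seenIds = new Set();
seenIds.add(42);
seenIds.add(42); // no effect the second time
console.log(seenIds.has(42), seenIds.size); // true 1
```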
By caching lookups, minimizing your redundant operations, simplifying structures, and choosing the right tools like Maps and Sets, you can handle even large datasets with ease.
4. Leverage browser caching
HTTP caching (storing frequently accessed data locally) is a fantastic way to optimize your JavaScript code as it reduces server requests and load times. Think of it like checking the fridge for a snack before driving all the way to the store to buy it. Let’s say you have an image that needs to be displayed on a user’s web browser (the client) whenever they load your homepage. The first time the user loads the page, a request is made to your server and your server responds with the image, which then loads to the screen. This request and response take time, and caching the image file is a great way to speed this process up as it allows the client to first check a cache for the image. If it finds the image there, it loads it to the screen without making a request to the server. If the image is not there, or has expired, the request for the image is sent to the server.
When developing a server, headers can be set on responses to instruct the client on how to handle caching the response. Common caching headers are “Cache-Control”, “Expires”, “ETag”, “Last-Modified”, and “Vary”. Today, “Cache-Control” is used more frequently than other headers. Here’s what you need to know:
- The “Cache-Control” header specifies “caching directives”.
- “Caching directives” tell a browser, proxy server, or caching server which resources (files) are cached and how.
- Common directives are:
- “max-age” - sets how long to cache a file
- “no-store” - instructs client not to cache the file
- “must-revalidate” - tells caches that once the file becomes stale, it must be revalidated with the server before being used again
In this example, we:
- define a route for handling GET requests to /api/data and set our caching headers on the response within our middleware.
- set the “Cache-Control” header to allow the resource to be cached publicly for one hour (‘public, max-age=3600’).
- use the “Last-Modified” header to mark the response with the current date
- include an ETag header with a unique identifier ('12345').
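Sketched as a plain helper (the route, the ETag value '12345', and the response body are illustrative; in an Express app this would live inside the route’s middleware):

```javascript
// Set the caching headers described above on a response object
function setCachingHeaders(res) {
  res.setHeader('Cache-Control', 'public, max-age=3600');   // publicly cacheable for 1 hour
  res.setHeader('Last-Modified', new Date().toUTCString()); // when this version was produced
  res.setHeader('ETag', '12345');                           // unique identifier for this version
}

// Hypothetical Express wiring:
// app.get('/api/data', (req, res) => {
//   setCachingHeaders(res);
//   res.json({ message: 'fresh data' });
// });
```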
The use of HTTP caching is a bit nuanced and may seem like an intimidating practice but, once you get the hang of it, it can be a fantastic way to speed up your application and optimize your JavaScript code.
5. Use throttling and debouncing for event handling
Working with throttling and debouncing may also seem a bit daunting if you’re new to software development, but you probably benefit from these two performance optimization techniques every day.
Throttling is a technique that ensures an operation triggered by an event (such as a button click) will occur at most once during a specified time period, regardless of how many times it is triggered in quick succession.
Imagine you have a scroll event listener, and you want to execute a function only once every 500 milliseconds while the user is scrolling (in this case, our function simply logs ‘Function executed!’ to the console).
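One way to write that (a minimal sketch; the listener registration is guarded so the logic also runs outside a browser):

```javascript
let lastExecution = 0; // timestamp of the most recent run

function throttledFunction() {
  const now = Date.now();
  if (now - lastExecution >= 500) { // has 500ms passed since the last run?
    console.log('Function executed!');
    lastExecution = now;
  }
}

// In the browser: fire on every scroll event, but execute at most every 500ms
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', throttledFunction);
}
```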
In the example above, we define a variable, “lastExecution" to track the last time a function was invoked. In the “throttledFunction”, we compare the current time (labelled “now”) with the last invocation time. If 500 milliseconds have passed, the function logs "Function executed!" to the console and updates “lastExecution” to the current time. Finally, we add an event listener to the window object (the browser). This tells the browser “if you detect any scrolling, execute ‘throttledFunction’. ”
Without throttling, events triggered by scrolling, resizing, clicking, etc. could be called hundreds or even thousands of times in a very short period which can lead to slow rendering or high volumes of requests.
Debouncing is used to execute a function only after a certain amount of time has passed since the last time the event was triggered. In other words, the function is “delayed” until the event stops firing for a certain amount of time. You may have noticed this functionality each time you use a search bar on a website: you type a few characters and, if you stop typing for a second or two, the search bar tries to predict what you might be searching for.
Debouncing can be used in the following way:
- Each time the event (in this case the user typing) is triggered, a timer is started.
- If the event keeps firing (the user keeps typing), the function call is delayed until the event stops for the specified amount of time.
- Once the event stops (the user stops typing for a specified time), the function is executed. In this case, the function being executed after the specified time is a fetch request to an API to try and complete the user’s input. Instead of making a request after every keystroke, we wait until the user stops typing for a moment, then make a request.
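The steps above can be sketched like this (the 300ms delay and the suggestion endpoint are illustrative):

```javascript
// Generic debounce: run `fn` only after `delay` ms with no new calls
function debounce(fn, delay) {
  let timerId;
  return (...args) => {
    clearTimeout(timerId); // a new event resets the timer
    timerId = setTimeout(() => fn(...args), delay);
  };
}

// Hypothetical search handler: fires once the user pauses for 300ms
const search = debounce((query) => {
  console.log(`fetching suggestions for "${query}"`);
  // fetch(`/api/suggest?q=${encodeURIComponent(query)}`).then(/* ... */);
}, 300);

// Three rapid "keystrokes": only the last one results in a call
search('j');
search('ja');
search('jav');
```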
This is great for optimizing JavaScript performance as it reduces unnecessary function calls.
We take a more in-depth look at these two techniques in our article Your Guide to Debouncing in JavaScript so, as you’re expanding your understanding of them, be sure to check it out. Once you get a bit more comfortable with them, they can be used to optimize your app by controlling how many events are triggered and how often.
6. Leverage asynchronous code
When it comes to JavaScript performance optimization, turning the execution of long operations (like fetching data from an API or loading large datasets) into asynchronous code is a must. Remember, JavaScript is single-threaded (it executes one line of code at a time). If you have a line of synchronous code that takes a while to complete, it will block your main thread. This can cause your entire user interface to become unresponsive and, if a user interface is unresponsive for even three seconds, users may disengage or leave. In the example below, the thread of execution will not move on to the console.log until the “for loop” completes.
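For example, a long synchronous loop holds everything up until it finishes:

```javascript
console.log('Start');

// Synchronous work: nothing else can run until this loop completes
let total = 0;
for (let i = 0; i < 100_000_000; i++) {
  total += i;
}

// Only reached after the loop above has fully finished
console.log('Done:', total);
```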
Asynchronous code allows the rest of your code to run while longer operations are being executed. Think of it like this: you’re cleaning your house and you get hungry. You decide to order a pizza so you can enjoy a delicious meal in your freshly cleaned living room. You wouldn’t wait for the pizza to be delivered before carrying on with your chores. Instead, you would order your food, continue cleaning, and enjoy your pizza once it’s delivered.
In order to optimize your JavaScript code, it’s always a good idea to use asynchronous code for tasks like:
- Fetching data from an API
- Reading or writing files (in Node.js)
- Waiting for user input
- Delaying execution (like in animations or transitions)
Let’s take a look at some common ways to handle asynchronous operations in your code.
- setTimeout and setInterval are simple ways to schedule asynchronous tasks. setTimeout will delay the invocation of a given function until a specified amount of time has passed whereas setInterval will repeat the invocation of a given function at a certain interval. setTimeout takes two parameters: the function to be executed and the wait time before executing said function. Below, setTimeout() ensures that the message "This runs after 2 seconds" does not block the execution of logging “End”.
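Sketched out (the `order` array is only there to make the non-blocking behavior observable):

```javascript
const order = [];

console.log('Start');
order.push('Start');

// Scheduled for later; does not block the lines below
setTimeout(() => {
  console.log('This runs after 2 seconds');
}, 2000);

console.log('End');
order.push('End'); // runs immediately, long before the timeout fires
```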
setInterval will continue running a function at a given interval. Like setTimeout, it takes two parameters: the function to be run and the amount of time to let pass between invocations. Take a peek at Codesmith’s Asynchronous JavaScript unit in CSX if you want to learn more about these methods.
- Leveraging the Promise object is essential when working with asynchronous code. A Promise object is used to store the eventual result of an asynchronous operation (like a fetch request). The Promise constructor takes an executor function with two parameters: one for a successful result (“resolve”) and one for an unsuccessful result (“reject”). These parameters are functions that either resolve the Promise with a value or reject it with an error.
The “fetchData” function is designed to do the following:
- fetch data from a specified URL and return a new Promise.
- Inside the Promise, the “fetch” function is called to make an asynchronous HTTP request to the passed in URL.
- The first .then() method checks whether the response is OK (response.ok). If it isn’t, it rejects the Promise with an error message. If the request is successful, the response is parsed as JSON.
- Invoking response.json() returns a new Promise because parsing is asynchronous.
- The second .then() method processes the parsed data and passes it to the resolve function which fulfills the Promise.
- Finally, if any errors have occurred throughout the process, the catch() method will invoke the reject function and pass in the error.
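Putting those steps together might look like this. The endpoint URL in the usage comment is hypothetical, and the extra Promise wrapper mirrors the description above (in practice you could return the fetch chain directly):

```javascript
function fetchData(url) {
  return new Promise((resolve, reject) => {
    fetch(url)
      .then((response) => {
        if (!response.ok) {
          reject(new Error(`Request failed: ${response.status}`));
          return; // stop the chain; nothing more to parse
        }
        return response.json(); // parsing is asynchronous: another Promise
      })
      .then((data) => {
        if (data !== undefined) resolve(data); // fulfill with the parsed data
      })
      .catch((error) => reject(error)); // any failure along the way rejects
  });
}

// Hypothetical usage:
// fetchData('https://api.example.com/users')
//   .then((users) => console.log(users))
//   .catch((err) => console.error(err));
```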
- If all of that seemed a bit confusing, you’re in luck! Using async/await as “syntactic sugar” (syntax to make things easier to understand) for Promises makes your code much more readable. Above, we used .then() and .catch() to handle our asynchronous fetch request and JSON parsing. Many engineers find .then() and .catch() to be intuitive: “do this asynchronous function THEN do this with the response THEN do that with the data and, if there’s an error, CATCH it and do this with the error”. For others, async/await is the preferred syntax. Here’s a basic example.
- We define an asynchronous function “fetchUserData” using the “async” keyword. This indicates that it contains asynchronous operations.
- Inside the function, we use a try block to handle the functionality. The await keyword is used to pause execution of the function (not the main thread) until the fetch request completes.
- Once the response is received, we use “await” again to pause the execution until the response is parsed as JSON (also asynchronous).
- After the data is successfully parsed, it is logged to the console.
- If any errors occur during the fetch request or JSON parsing, the catch block will handle the error by logging it to the console.
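Those steps in code (the endpoint URL is hypothetical):

```javascript
async function fetchUserData(url) {
  try {
    // Waits here without blocking the main thread
    const response = await fetch(url);
    const data = await response.json(); // parsing is asynchronous too
    console.log(data);
    return data;
  } catch (error) {
    console.error('Something went wrong:', error);
  }
}

// Hypothetical usage:
// fetchUserData('https://api.example.com/users/1');
```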
Asynchronous code can feel a bit tricky. But when it comes to JavaScript performance optimization, this technique is essential for offloading long-running tasks from the main thread, keeping your app responsive and performant. For more information on the inner workings of asynchronous code, check out Codesmith’s “Hard Parts” lecture on Async and Promises.
7. Optimize network requests with lazy loading
Network requests are essential to most web applications. Whenever you visit a website, the browser sends a network request to a server to ask for the files needed to display the webpage (scripts, images, stylesheets, etc.). The requests travel over the internet, and the server responds by sending the requested resources (or an error!) back to the browser.
For applications that only render small pages with limited data, the process can happen quickly. But for data-heavy applications that need to load lots of images or third-party scripts (ads, embedded video content, etc.), these requests can accumulate, slowing down load times. In this case, we can implement “lazy loading” to delay the loading of specified resources until they’re actually needed, which makes this an excellent tool for JavaScript performance optimization.
Images are one of the more common use cases for lazy loading as they can have larger file sizes, which take longer to load. In vanilla JavaScript, you can use an API called “IntersectionObserver” to load your images only as they scroll into the browser’s viewport.
In the HTML for this pattern, each image is set with a lightweight placeholder image, and a “data-src” attribute holds the real image source.
- We tell our “IntersectionObserver” to “observe” the images (“lazyImages”) with our data-src attributes and placeholders.
- When those placeholder images come into the browser’s viewport, we take the real image file from our data-src attribute and place it in the src attribute to render. Then we stop the “observer” from observing the loaded image. Note: in JavaScript, we use “dataset.src” to access the data-src HTML attribute.
- Offscreen images will only load the lightweight placeholders, and as you scroll them into view, the real image is loaded!
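Put together, with the swap logic pulled into a small function (the class name `lazy` and the file names are illustrative):

```javascript
// HTML assumed for each image:
//   <img src="placeholder.jpg" data-src="real-image.jpg" class="lazy" />

// Swap the real source in once an image enters the viewport
function swapInRealImage(entry, observer) {
  if (!entry.isIntersecting) return;
  const img = entry.target;
  img.src = img.dataset.src; // dataset.src reads the data-src attribute
  observer.unobserve(img);   // done: stop observing this image
}

if (typeof document !== 'undefined' && typeof IntersectionObserver !== 'undefined') {
  const lazyImages = document.querySelectorAll('img.lazy');
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => swapInRealImage(entry, obs));
  });
  lazyImages.forEach((img) => observer.observe(img));
}
```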
If JavaScript performance optimization is your priority, lazy loading can be tremendously helpful. When you only load the resources you need, you improve your initial load times because fewer resources are required upfront. This improves overall user experience because your application will be more responsive and speedy, especially on slower networks.
8. Defer or delay non-critical JavaScript
While lazy loading delays resources like images or infinite-scroll data (social media feeds for example) until they need to be displayed, deferring or delaying JavaScript helps us prioritize scripts based on when we need them to run.
By default, the browser parses your HTML file line by line from top to bottom. When it encounters a <script> tag, it pauses parsing to download and execute the script. This means, if all your script tags are at the top of the file, the browser can get stuck processing them, leaving users staring at a blank screen while critical elements take longer to load.
To avoid this, HTML provides two helpful attributes to control script loading: defer (convenient name!) and async. These attributes tell the browser how to handle our JavaScript files so that the rendering of the application won’t be unnecessarily blocked.
- defer loads the script in the background and executes it after the HTML has been completely parsed.
- async also loads the script in the background, but executes it immediately when ready, without waiting to parse the rest of the HTML.
Let's see how we can structure these tags:
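A layout matching the walkthrough that follows might look like this (the file names are illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- No attribute: blocks parsing and runs immediately -->
    <script src="essential.js"></script>

    <!-- defer: downloads in the background, runs after parsing finishes -->
    <script src="nonCritical.js" defer></script>

    <!-- async: downloads in the background, runs as soon as it's ready -->
    <script src="thirdPartyScripts.js" async></script>
  </head>
  <body>
    <h1>Hello!</h1>
  </body>
</html>
```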
Here is what’s happening above:
essential.js (no attribute):
- The browser pauses parsing to download and execute this script immediately.
- This makes sure that core functionality (like navigation, buttons, or critical interactions) is ready as soon as possible.
nonCritical.js (with the defer attribute):
- This script starts downloading in the background while the browser continues parsing the HTML.
- It executes only after the entire document has been fully parsed, making it ideal for non-urgent features like animations, widgets, or UI enhancements.
thirdPartyScripts.js (with the async attribute):
- This script downloads at the same time as the HTML parsing and executes immediately when it’s ready.
- Because it’s independent and doesn’t rely on other scripts, it’s perfect for non-critical functionality like ads, analytics, or tracking tools.
Given the above setup, the scripts will load and execute in the following order:
- essential.js: Blocks parsing and runs first.
- thirdPartyScripts.js: Downloads in parallel and runs as soon as it’s ready.
- nonCritical.js: Executes last, after the HTML parsing is complete.
Allowing non-essential scripts to load quietly in the background and prioritizing only the most important functionality ensures that critical features like navigation, buttons, and key interactions are available as quickly as possible. This reduces the risk of bottlenecks, improves page load times, and creates a smoother, more responsive application. Prioritizing script execution is a simple and powerful way to achieve more effective JavaScript performance optimization.
9. Minify and bundle JavaScript files
As your JavaScript application grows, so does the amount of code that needs to be loaded by the browser. You can minify and bundle your code to reduce the size of your app and speed up load times.
Minifying refers to eliminating any unnecessary spaces, characters, line breaks, etc. without changing the code’s functionality. Below is a basic example of two versions of a function “sum”: the original version, followed by a minified version.
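Something like this (the second function is renamed here only so both versions can coexist; a real minifier would keep the original name):

```javascript
// Original, readable version
function sum(firstNumber, secondNumber) {
  // Add the two numbers and return the result
  const result = firstNumber + secondNumber;
  return result;
}

// Minified version: identical behavior, far fewer bytes
function sum2(a,b){return a+b}

console.log(sum(2, 3), sum2(2, 3)); // 5 5
```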
Often, a file will have hundreds or even thousands of lines of code. In those cases, minification can significantly reduce file sizes, improve load times, and ultimately provide users with the smoothest experience possible.
Bundling your code is the process of combining your JavaScript files into one or a few files. When your code is broken up into several files, the browser has to make multiple HTTP requests to fetch each one of them. Each request adds network round-trip time, and having too many of them can significantly slow down your application.
Remember how we talked about HTTP caching for JavaScript performance optimization? Well, you can improve your code even further by caching a bundled file. Let’s say we have three files (index.js, app.js and api.js). We can bundle these files into one (bundle.js) and then have the browser cache that bundle. That way, after the initial load, users won’t need to download those files again- as long as the bundle remains unchanged.
With several files bundled into one and then minified, you can significantly reduce the size of your application, resulting in faster load times and happy users. Fortunately, there are tools for exactly that.
Webpack is among the most popular tools for bundling and minifying your code. It allows you to bundle your files and dependencies and has built-in plugins to minify that bundle. We won’t dive into the nuances of Webpack as its configuration can be pretty complicated. But to get an idea of how it works, take a look at the example below. We would create a webpack configuration file (webpack.config.js) and, within that file, make the necessary imports and define/export our configuration object.
- Using module.exports, we define and export the configuration object so that it can be used by Webpack. In Node.js, exporting a value makes it available to be used (required in) by other modules (files). This makes our object available to Webpack so that it can bundle and minify our project.
- The “entry” property specifies the entry point of the application. This is the main file that Webpack will start from to build the “dependency graph” which represents all of the modules in your code and how they relate to one another. This ensures that dependencies are loaded in the correct order and that any unused code is left out. In this case, the entry point is './src/index.js', which means Webpack will start bundling from the index.js file located in the src directory.
- The output property is an object that defines how and where the bundled files will be saved:
- filename: The filename property specifies the name of the output file. The bundled code will be saved in a file named “bundle.js”.
- path: The path property specifies the directory where the output file will be placed. path.resolve(__dirname, 'dist') creates an absolute path to the dist directory within the current folder (__dirname). This makes sure that the output file “bundle.js” will be saved in the dist directory.
- Finally, the mode property specifies the mode in which Webpack will run. Setting it to 'production' enables optimizations like minification (shortening code) and tree-shaking (getting rid of unused code), which reduces the size of the output bundle and improves performance. Production mode is typically used for deploying the application, whereas setting mode to 'development' unlocks tools for debugging while developing an app.
Minifying and bundling are of paramount importance for JavaScript performance optimization as they shorten and trim up your code to decrease load times and keep users engaged in your application.
10. Remove unused dependencies
Every time you import a new dependency, you are adding to the size of your bundle. As we know from our last point, this can slow down load times for your application so it’s important to consider the size of the library you want to import and whether or not you need the dependencies you are importing in the first place. Often, the browser has all the functionality you need. For example, instead of importing the “axios” library for HTTP requests, we can simply use the native “fetch” API which is built into most modern browsers.
If you do need to use a method from a library but don’t want to risk slowing down load times, you might want to use modular imports instead of importing an entire library. This way, you’ll have access to the methods you need without making your bundle larger than it needs to be. Below is an example of importing the entire “lodash” library (a JS library for working with objects, arrays, numbers, strings, etc.) vs. only importing the method we want to use. Understanding what’s happening in the function isn’t as important as knowing that the code works exactly the same in both examples but the second one is more efficient.
And just to really drive this point home, neither of these imports are actually necessary as we could simply use the native JavaScript “Array.prototype.findIndex” method instead.
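For comparison (the lodash require lines are shown as comments since the library may not be installed; the user data is made up):

```javascript
// With lodash you might write either of these:
//   const _ = require('lodash');                    // pulls in the whole library
//   const findIndex = require('lodash/findIndex');  // modular: just one method

// The native array method needs no import at all:
const users = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
];

const index = users.findIndex((user) => user.id === 2);
console.log(index); // 1
```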
Conclusion
JavaScript performance optimization is an essential aspect of web development. It keeps your application user friendly by providing a fast and responsive UI while also eliminating unnecessary resource usage, maintaining scalability, and improving cost efficiency. By implementing these top 10 techniques your application will shine and you’ll keep users engaged while improving the development process for future contributors and teammates. Happy optimizing!