When it comes to optimizing your JavaScript code, there are countless tools and techniques to implement, and plenty of articles out there offer valuable tips and tricks. However, many of them don’t provide much context for how to implement these approaches. In this article, we will not only cover the most common techniques we found but also provide examples and break down how they can be implemented in your applications. Here are the top 10 techniques that we found to optimize your JavaScript code.
If you’re unfamiliar with the phrase “DOM manipulation” or you’ve heard it but are still unsure of what it actually is, take a look at this simple example. The DOM (Document Object Model) is a programming interface that represents a web page as a tree of objects.
Each element (as seen in our HTML file below) is an object (node). When we modify the content or behavior of these objects in our tree (the DOM), we are practicing DOM manipulation.
Below is an HTML file as it would be represented in the DOM tree. This document displays the text “Hello, World” and a button labeled “Change Text”.
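A minimal version of such an HTML file might look like this (the file name in the script tag is illustrative):

```html
<!DOCTYPE html>
<html>
  <body>
    <h1 id="greeting">Hello, World</h1>
    <button id="changeTextBtn">Change Text</button>
    <script src="script.js"></script>
  </body>
</html>
```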
In our JavaScript file below, the “getElementById” method in our document object accesses the element in our HTML with the id of “greeting” (h1 element above) and assigns it to “greetingByElement”. We then do the same on the next line for “changeTextBtn” (our button element). We then add an “event listener” to our button so that when it’s clicked, our h1 element will display “Hello, JavaScript!”.
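Assuming the HTML above, a sketch of that JavaScript file (meant to run in a browser) might look like:

```javascript
// Access the h1 with the id "greeting" and store it in a variable.
const greetingByElement = document.getElementById('greeting');
// Do the same for our button element.
const changeTextBtn = document.getElementById('changeTextBtn');

// When the button is clicked, update the h1's text.
changeTextBtn.addEventListener('click', () => {
  greetingByElement.textContent = 'Hello, JavaScript!';
});
```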
Manipulating the DOM too frequently can be costly because every time the DOM is changed, the browser may need to recalculate layout and styles (reflow) and redraw parts of the page (repaint). By minimizing DOM manipulations or batching them together, you can reduce the number of reflows and repaints, resulting in smoother performance. Document “fragments” can be used to hold DOM elements temporarily in memory before appending them to the document in a single operation. Think of it as saving up all of the changes you’d like to make and then applying them in one fell swoop. Below, we use a “for loop” to iterate through some data and create 100 new div elements with some content in them. We then append each of those divs to a “fragment”. Once the loop is complete, the “fragment” containing all of our new divs is appended to our DOM. Now the DOM re-renders once with all of our new divs instead of 100 times.
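The batching described above can be sketched like this (browser code; the div content is illustrative):

```javascript
// Create a fragment to hold the new elements in memory.
const fragment = document.createDocumentFragment();

for (let i = 0; i < 100; i++) {
  const div = document.createElement('div');
  div.textContent = `Item ${i + 1}`;
  fragment.appendChild(div); // no reflow yet -- the fragment lives in memory
}

// One append, one re-render, instead of 100.
document.body.appendChild(fragment);
```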
Using methods like “requestAnimationFrame”, “setTimeout”, and “setInterval”, and updating content via the “innerHTML” property, may also help minimize the number of DOM reflows and repaints by batching updates. Libraries like React are worth considering when building out your front end as they abstract away the DOM update process and optimize when and how updates are made by using a “virtual DOM”.
JavaScript offers several ways to iterate through arrays and other iterable objects. For example, “forEach" is a higher order function that will iterate over an array and execute a function on each element within the array. Below, this method will iterate through “array” and log each element to the console.
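For example:

```javascript
const array = [1, 2, 3, 4, 5];

// forEach calls the provided function once for each element, in order.
array.forEach((element) => {
  console.log(element);
});
```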
While methods like “forEach”, “map” or “reduce” can be useful and even fun to implement in your code, they are not always the most efficient option, especially when iterating through a lot of data. Often, a classic “for loop” or “for…of loop” is a better choice. This is because these higher-order methods iterate through the entire data set and do not allow for early exits (break) as a “for loop” would. Below, a “for loop” iterates through “array” and logs each element to the console. If the element is even, we exit the loop with “break”. Exiting a loop early like this would not be possible using “forEach”.
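The early exit described above can be sketched as (the “logged” array just records what was printed):

```javascript
const array = [1, 2, 3, 4, 5];
const logged = [];

for (let i = 0; i < array.length; i++) {
  console.log(array[i]);
  logged.push(array[i]);
  if (array[i] % 2 === 0) {
    break; // early exit -- not possible with forEach
  }
}
```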
Below, we see a “for…of” loop. This would be used when you want to iterate through an array but do not need access to each element by its index.
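For instance:

```javascript
const fruits = ['apple', 'banana', 'cherry'];

// for...of hands you each value directly -- no index bookkeeping needed.
for (const fruit of fruits) {
  console.log(fruit);
}
```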
A “for…in” loop is designed for iteration over keys (property names) in objects. These loops can technically iterate over arrays but it’s generally not recommended because they will also iterate over any custom properties or methods that are added to an array as well as properties added to its prototypal chain.
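A small sketch of its intended use on an object:

```javascript
const user = { name: 'Ada', role: 'admin' };

// for...in iterates over an object's enumerable property names (keys).
const keys = [];
for (const key in user) {
  keys.push(key);
  console.log(key, user[key]);
}
```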
A classic “for loop” can often be the best option for JavaScript performance optimization, as it’s fast and can exit early when needed. While using these loops, it’s important to use break or return when you can and to be careful with nested loops, as they become expensive for larger datasets. Higher-order functions like “map”, “filter”, and “reduce” might be the right choice for readability but may introduce unnecessary computational costs. Making informed decisions regarding array and object iteration ensures that your code performs well, even with large data sets.
Optimizing how you retrieve the information stored in your data structures prevents your JavaScript from doing extra work. Sure, looping is often necessary, but grabbing your data directly instead of iterating through it can significantly speed up your code. Objects and arrays are containers for related information, and the better you get at pulling out the bits that you need, the smoother and snappier your applications will be.
Accessing data in objects or arrays repeatedly can often be an area where your code is working too hard. If you need to access the same property or element multiple times in a loop, it can be more efficient to store it in a variable. Similarly, accessing the array’s length property inside of a loop is a common practice, but by doing this, the length is recalculated on every iteration. By caching both the data and the array length, you can make your loops cleaner and faster.
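The caching described above can be sketched like this (the “results” array just captures the second loop’s output):

```javascript
const items = ['a', 'b', 'c', 'd'];
const results = [];

// Inefficient: items.length and items[i] are re-read on every iteration.
for (let i = 0; i < items.length; i++) {
  console.log(items[i].toUpperCase());
}

// Better: cache the length once, and the current element each pass.
for (let i = 0, len = items.length; i < len; i++) {
  const item = items[i];
  results.push(item.toUpperCase());
}
```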
When items[i] and items.length are accessed repeatedly inside a loop, the same lookups are repeated on every iteration. Caching these values into variables (here, “item” and “len”) reduces repetitive lookups and speeds up the loop. This is especially helpful when working with large datasets.
Storing data in an object instead of an array can also be a great way to optimize your JavaScript performance. Iterating through an array (especially a large one) can take up a lot of extra time.
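The two approaches compared below can be sketched as (user data is illustrative):

```javascript
// As an array: finding user 2 means scanning -- O(n).
const usersArray = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' },
  { id: 3, name: 'Carol' },
];
let foundUser;
for (const user of usersArray) {
  if (user.id === 2) {
    foundUser = user;
    break;
  }
}

// Keyed by id in an object: direct access -- O(1).
const usersById = {
  1: { id: 1, name: 'Alice' },
  2: { id: 2, name: 'Bob' },
  3: { id: 3, name: 'Carol' },
};
const directUser = usersById[2];
```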
In the first approach, our code iterates through an array of users and checks if the current user’s “id” matches 2. It’s far more efficient to store the users in an object keyed by “id” and access the user with the “id” of 2 directly. The iterative approach runs in linear O(n) time, as the time it takes depends on the length of the array. The direct lookup runs in constant O(1) time because we are not iterating; we access the property directly.
Deeply nested objects can be a lot like those Russian dolls where you have to open up each one, layer by layer, to reach the doll at the core. Similarly, accessing properties within nested objects can require a lot of effort from your JavaScript. To take some of that load off, we can flatten our structure.
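A sketch of such a recursive flattening function (the key-joining convention and sample data are illustrative):

```javascript
// Recursively flattens nested objects so every value is reachable
// by a single top-level key (joined here with underscores).
function flattenObject(obj, prefix = '', result = {}) {
  for (const key in obj) {
    const newKey = prefix ? `${prefix}_${key}` : key;
    if (typeof obj[key] === 'object' && obj[key] !== null) {
      flattenObject(obj[key], newKey, result);
    } else {
      result[newKey] = obj[key];
    }
  }
  return result;
}

const nested = { user: { profile: { name: 'Ada', city: 'London' } } };
const flat = flattenObject(nested);
```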
The example above declares a recursive function to flatten our deeply nested object so that all of the properties are accessible as top-level keys. If this code looks a bit intimidating, don’t worry! All you need to know for now is that, once our object is flattened, we can access the properties directly as seen below.
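Once flattened, lookups become a single step (the key names here are illustrative):

```javascript
// With a flattened structure, every value sits at the top level.
const flatUser = {
  user_profile_name: 'Ada',
  user_profile_city: 'London',
};

// One direct access instead of walking nested.user.profile.name.
console.log(flatUser.user_profile_name);
```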
Whenever you can, go for direct property access instead of using dynamic keys. Dynamic keys (like object[key]) can be useful when you need to figure out the key at runtime, but the browser has to resolve the key before grabbing the value. Directly accessing a property (i.e., object.property) can be a better option for optimizing speed. Dynamic keys are still super useful when iterating over an object’s keys or dealing with property names that aren’t set in stone. In those cases, having the flexibility of dynamic access is more important than direct access.
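A quick sketch of the difference (object and key names are illustrative):

```javascript
const config = { theme: 'dark', fontSize: 14 };

// Dynamic key: resolved at runtime.
const key = 'theme';
console.log(config[key]);

// Direct access: the property name is known up front.
console.log(config.theme);
```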
Directly accessing a property, as shown above, can also simplify the debugging process, as the property names are stated explicitly rather than hidden in variables.
If you’re looking for another way to optimize your JavaScript in frequent look-up situations or unique collections, consider the built-in Map and Set objects: Maps provide fast key-based lookups (with keys of any type), and Sets store unique values with cheap membership checks.
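A small sketch of both (the stored data is illustrative):

```javascript
// Map: fast key-based lookups, with keys of any type.
const userMap = new Map();
userMap.set(2, { id: 2, name: 'Bob' });
const bob = userMap.get(2); // O(1) lookup

// Set: stores unique values; duplicates are dropped automatically.
const ids = new Set([1, 2, 3, 2]);
const hasTwo = ids.has(2); // cheap membership check
```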
By caching lookups, minimizing your redundant operations, simplifying structures, and choosing the right tools like Maps and Sets, you can handle even large datasets with ease.
HTTP caching (storing frequently accessed data locally) is a fantastic way to optimize your JavaScript code as it reduces server requests and load times. Think of it like checking the fridge for a snack before driving all the way to the store to buy it. Let’s say you have an image that needs to be displayed on a user’s web browser (the client) whenever they load your homepage. The first time the user loads the page, a request is made to your server and your server responds with the image, which then loads to the screen. This request and response take time, and caching the image file is a great way to speed this process up as it allows the client to first check a cache for the image. If it finds the image there, it loads it to the screen without making a request to the server. If the image is not there, or has expired, the request for the image is sent to the server.
During the development process of a server, headers can be set on responses to instruct the client on how to handle caching the response. Common caching headers are “Cache-Control”, “Expires”, “ETag”, “Last-Modified”, and “Vary”. Today, “Cache-Control” is used more frequently than the other headers. Here’s what you need to know:
In a typical example, we set a “Cache-Control” header on each response so the browser knows whether it may cache the resource and for how long before checking back with the server.
The use of http caching is a bit nuanced and may seem like an intimidating and complicated practice but, once you get the hang of it, it can be a fantastic way to speed up your application and optimize your JavaScript code.
Working with throttling and debouncing may also seem a bit daunting if you’re new to software development, but you probably benefit from the implementation of these two performance optimization techniques every day.
Throttling is a technique that ensures an operation triggered by an event (such as a button click) will occur at most once during a specified time period, regardless of how many times it is triggered in quick succession.
Imagine you have a scroll event listener, and you want to execute a function only once every 500 milliseconds while the user is scrolling (in this case, our function simply logs ‘function executed!’ to the console).
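One way to sketch this throttle (the scroll wiring is guarded so the logic itself also runs outside a browser):

```javascript
let lastExecution = 0;

function throttledFunction() {
  const now = Date.now();
  // Only run if at least 500ms have passed since the last run.
  if (now - lastExecution >= 500) {
    console.log('function executed!');
    lastExecution = now;
  }
}

// In a browser, wire it up to scroll events.
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', throttledFunction);
}
```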
In the example above, we define a variable, “lastExecution" to track the last time a function was invoked. In the “throttledFunction”, we compare the current time (labelled “now”) with the last invocation time. If 500 milliseconds have passed, the function logs "Function executed!" to the console and updates “lastExecution” to the current time. Finally, we add an event listener to the window object (the browser). This tells the browser “if you detect any scrolling, execute ‘throttledFunction’. ”
Without throttling, events triggered by scrolling, resizing, clicking, etc. could be called hundreds or even thousands of times in a very short period which can lead to slow rendering or high volumes of requests.
Debouncing is used to execute a function only after a certain amount of time has passed since the last time the event was triggered. In other words, the function is “delayed” until the event stops firing for a certain amount of time. You may have noticed this functionality each time you use a search bar on a website: you type a few characters and, if you stop typing for a second or two, the search bar tries to predict what you might be searching for.
Debouncing can be used in the following way:
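A common implementation sketch (the 300ms delay and the search handler are illustrative):

```javascript
function debounce(fn, delay) {
  let timeoutId;
  return (...args) => {
    // Each new call cancels the previous pending one.
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), delay);
  };
}

// Wait until the user stops typing for 300ms before "searching".
const handleSearch = debounce((query) => {
  console.log('Searching for:', query);
}, 300);
```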
This is great for optimizing JavaScript performance as it reduces unnecessary function calls.
We take a more in depth look into these two techniques in our article Your Guide to Debouncing in JavaScript so, as you’re expanding your understanding of them, be sure to check it out. Once you get a bit more comfortable with them, they can be used to optimize your app by controlling how many events are triggered and how often.
When it comes to JavaScript performance optimization, turning the execution of long operations (like fetching data from an API or loading large datasets) into asynchronous code is a must. Remember, JavaScript is single threaded (executes one line of code at a time). If you have a line of synchronous code that takes a while to complete, it will block your main thread. This can cause your entire user interface to become unresponsive and, if a user interface is unresponsive for even three seconds, users may disengage or leave. In the example below, the thread of execution will not move on to the console.log until the “for loop” completes.
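The blocking behavior described above can be demonstrated with a long synchronous loop:

```javascript
console.log('Start');

// A long synchronous loop blocks the main thread...
let total = 0;
for (let i = 0; i < 1e8; i++) {
  total += i;
}

// ...so this line cannot run until the loop above finishes.
console.log('Done:', total);
```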
Asynchronous code allows the rest of your code to run while longer operations are being executed. Think of it like this: you’re cleaning your house and you get hungry. You decide to order a pizza so you can enjoy a delicious meal in your freshly cleaned living room. You wouldn’t put your chores on hold until the pizza was delivered. Instead, you would order your food, continue cleaning, and enjoy your pizza once it’s delivered.
In order to optimize your JavaScript code, it’s always a good idea to use asynchronous code for tasks like fetching data from an API, loading large datasets, and running timers.
Let’s take a look at some common ways to handle asynchronous operations in your code.
setInterval will continue running a function at a given interval. Like setTimeout, it takes two parameters: the function to be run and the amount of time to let pass between invocations. Take a peek at Codesmith’s Asynchronous JavaScript unit in CSX if you want to learn more about these methods.
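A quick sketch of both methods (the delays and messages are arbitrary):

```javascript
// setTimeout: run once, after (at least) the given delay.
setTimeout(() => {
  console.log('Runs once after 1 second');
}, 1000);

// setInterval: run repeatedly, pausing the given time between runs.
const intervalId = setInterval(() => {
  console.log('Runs every 2 seconds');
}, 2000);

// Stop the interval when it's no longer needed.
clearInterval(intervalId);
```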
The “fetchData” function is designed to do the following: request data from an API, wait for the response, parse the JSON, and catch any errors that occur along the way.
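A sketch of such a function using async/await (the URL is a placeholder):

```javascript
async function fetchData(url) {
  try {
    // Pause this function (not the main thread) until the response arrives.
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    const data = await response.json();
    console.log(data);
    return data;
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}

// fetchData('https://api.example.com/users'); // placeholder URL
```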
Asynchronous code can feel a bit tricky. But when it comes to JavaScript performance optimization, this technique is essential in offloading long running tasks from the main thread keeping your app responsive and performant. For more information on the inner workings of asynchronous code, check out Codesmith’s “Hard Parts” lecture on Async and Promises.
Network requests are essential to most web applications. Whenever you visit a website, the browser sends a network request to a server to ask for the files needed to display the webpage (scripts, images, stylesheets, etc.). The requests travel over the internet, and the server responds by sending the requested resources (or an error!) back to the browser.
For applications that only render small pages with limited data, the process can happen quickly. But for data-heavy applications that need to load lots of images or third-party scripts (ads, embedded video content, etc.), these requests can accumulate, slowing down load times. In this case, we can implement “lazy loading” to delay the loading of specified resources until they’re actually needed, which makes this an excellent tool for JavaScript performance optimization.
Images are one of the more common use cases for lazy loading as they can have larger file sizes, which take longer to process. In vanilla JavaScript, you can use an API called “IntersectionObserver” to load your images only as they scroll into the browser’s viewport.
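A sketch of the markup this technique relies on (file names and the class name are illustrative):

```html
<!-- placeholder.jpg is a tiny stand-in; data-src holds the real source -->
<img src="placeholder.jpg" data-src="photo1.jpg" alt="First photo" class="lazy" />
<img src="placeholder.jpg" data-src="photo2.jpg" alt="Second photo" class="lazy" />
```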
In this example HTML, each image is set with a lightweight placeholder image, and a “data-src” attribute to hold the real image source.
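The observer itself can then be sketched like this (browser code; the “lazy” class matches the markup above):

```javascript
// Swap in the real image once it scrolls into the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // load the real image
      obs.unobserve(img);        // done -- stop watching this one
    }
  });
});

document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
```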
If Javascript performance optimization is your priority, Lazy loading can be tremendously helpful. When you only load the resources you need, you improve your initial load times because fewer resources are required upfront. This improves overall user experience because your application will be more responsive and speedy, especially on slower networks.
While lazy loading delays resources like images or infinite-scroll data (social media feeds for example) until they need to be displayed, deferring or delaying JavaScript helps us prioritize scripts based on when we need them to run.
By default, the browser parses your HTML file line by line from top to bottom. When it encounters a <script> tag, it pauses parsing to download and execute the script. This means, if all your script tags are at the top of the file, the browser can get stuck processing them, leaving users staring at a blank screen while critical elements take longer to load.
To avoid this, HTML provides two helpful attributes to control script loading: defer (convenient name!) and async. These attributes tell the browser how to handle our JavaScript files so that the rendering of the application won’t be unnecessarily blocked.
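A setup using all three options might look like this (the file names are illustrative):

```html
<head>
  <script src="essential.js"></script>
  <script src="nonCritical.js" defer></script>
  <script src="thirdPartyScripts.js" async></script>
</head>
```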
Here is what’s happening above:
essential.js (no attribute): the browser pauses HTML parsing, downloads the script, and executes it immediately. Reserve this for code the page truly cannot render without.
nonCritical.js (with the defer attribute): downloaded in the background while parsing continues, and executed only after the HTML has been fully parsed. Deferred scripts run in the order they appear in the document.
thirdPartyScripts.js (with the async attribute): also downloaded in the background, but executed as soon as the download finishes, even if parsing isn’t done. Execution order is not guaranteed.
Given the above setup, the scripts will typically load and execute in the following order: essential.js first (it blocks parsing), then thirdPartyScripts.js whenever its download completes, and nonCritical.js once the document has been fully parsed.
Allowing non-essential scripts to load quietly in the background and prioritizing only the most important functionality ensures that critical features like navigation, buttons, and key interactions are available as quickly as possible. This reduces the risk of bottlenecks, improves page load times, and creates a smoother, more responsive application. Prioritizing script execution is a simple and powerful way to achieve more effective JavaScript performance optimization.
As your JavaScript application grows, so does the amount of code that needs to be loaded by the browser. You can minify and bundle your code to reduce the size of your app and speed up load times.
Minifying refers to the concept of eliminating any unnecessary spaces, characters, breaks, etc. without changing the code’s functionality. Below is a basic example of two versions of a function “sum”: the original version, followed by a minified version.
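For example:

```javascript
// Original, readable version
function sum(firstNumber, secondNumber) {
  const result = firstNumber + secondNumber;
  return result;
}

// Minified version -- same behavior, far fewer bytes:
// function sum(a,b){return a+b}
```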
Often, a file will have hundreds or even thousands of lines of code. In those cases, minification can significantly reduce file sizes, improve load times, and ultimately provide users with the smoothest experience possible.
Bundling your code is the process of combining your JavaScript files into one or a few files. When your code is broken up into several files, the browser has to make multiple HTTP requests to fetch each one of them. Each of those requests takes time, and having too many of them can significantly slow down your application.
Remember how we talked about HTTP caching for JavaScript performance optimization? Well, you can improve your code even further by caching a bundled file. Let’s say we have three files (index.js, app.js and api.js). We can bundle these files into one (bundle.js) and then have the browser cache that bundle. That way, after the initial load, users won’t need to download those files again- as long as the bundle remains unchanged.
With several files bundled into one and then minified, you can significantly reduce the size of your application, resulting in faster load times and happy users. Fortunately, there are tools for exactly this.
Webpack is among the most popular tools for bundling and minifying your code. It allows you to bundle your files and dependencies and has built-in plugins to minify that bundle. We won’t dive into the nuances of Webpack as its configuration can be pretty complicated. But to get an idea of how it works, take a look at the example below. We would create a webpack configuration file (webpack.config.js) and, within that file, make the necessary imports and define/export our configuration object.
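A minimal sketch of such a configuration file (entry and output paths are illustrative; “production” mode enables Webpack’s built-in minification):

```javascript
// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production', // enables built-in minification
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
};
```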
Minifying and bundling are of paramount importance for JavaScript performance optimization as they shorten and trim up your code to decrease load times and keep users engaged in your application.
Every time you import a new dependency, you are adding to the size of your bundle. As we know from our last point, this can slow down load times for your application so it’s important to consider the size of the library you want to import and whether or not you need the dependencies you are importing in the first place. Often, the browser has all the functionality you need. For example, instead of importing the “axios” library for HTTP requests, we can simply use the native “fetch” API which is built into most modern browsers.
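A sketch of using the native fetch API instead of a library (the URL is a placeholder):

```javascript
// No import needed -- fetch is built into modern browsers (and Node 18+).
async function getUsers() {
  const response = await fetch('https://api.example.com/users'); // placeholder URL
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}
```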
If you do need to use a method from a library but don’t want to risk slowing down load times, you might want to use modular imports instead of importing an entire library. This way, you’ll have access to the methods you need without making your bundle larger than it needs to be. Below is an example of importing the entire “lodash” library (a JS library for working with objects, arrays, numbers, strings, etc.) vs. only importing the method we want to use. Understanding what’s happening in the function isn’t as important as knowing that the code works exactly the same in both examples but the second one is more efficient.
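The two import styles can be sketched like this (the “users” data is illustrative; this assumes lodash is installed):

```javascript
// Option 1: import the entire lodash library -- everything ships in your bundle.
import _ from 'lodash';

// Option 2: modular import -- only the findIndex module is bundled.
import findIndex from 'lodash/findIndex';

const users = [{ id: 1 }, { id: 2 }, { id: 3 }];

_.findIndex(users, (user) => user.id === 2);  // works, but costly to bundle
findIndex(users, (user) => user.id === 2);    // same result, smaller bundle
```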
And just to really drive this point home, neither of these imports are actually necessary as we could simply use the native JavaScript “Array.prototype.findIndex” method instead.
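The native equivalent:

```javascript
const users = [{ id: 1 }, { id: 2 }, { id: 3 }];

// No library needed -- findIndex is built into Array.prototype.
const index = users.findIndex((user) => user.id === 2);
console.log(index);
```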
JavaScript performance optimization is an essential aspect of web development. It keeps your application user friendly by providing a fast and responsive UI while also eliminating unnecessary resource usage, maintaining scalability, and improving cost efficiency. By implementing these top 10 techniques your application will shine and you’ll keep users engaged while improving the development process for future contributors and teammates. Happy optimizing!