
Performance optimizations for JavaScript applications
Intro:
I have been a front-end software developer for quite a few years now. I have spent my entire career in service companies, which exposed me to a variety of projects and problems. My experience is mainly with the Angular framework, but in the context of this topic, that's not relevant.
The goal:
To make the lives of developers a bit easier when they carry the burden of performance optimization. I will share what I did, and hopefully it will save you a few minutes or hours for more pleasurable activities.
The problem:
When we talk about performance issues, one word pops into my mind: speed. That's the reason behind my choice of main image.
So, the question is how to increase the speed of our app. The short answer is: we can't.
So that's why this article is so short…
Just kidding. We can't increase the maximum speed of our application (calculations per second), because that would mean buying faster hardware, which is not a very practical thing to do, in my opinion.
So we have to do something else. I hope all readers remember the simple equation of motion from 7th grade: S = V * t, i.e. the distance travelled equals velocity times time. We have already realized that our V component is more or less a constant, which leaves us with the option of influencing the S component as much as we can.
Or to put it more graphically:

There is exactly zero difference between the three paths in the illustration above; the only thing that matters is that we get from point A to point B. That means cutting some corners is allowed, and even needed. Or, put differently: doing less in order to get the same result. We have to strive to make things move in a straight line, with the least effort possible. Our goal is to identify which paths are not optimal, so we can straighten them out without affecting the result of our code.
1. Did you minify the bundle?
In 99.99% of cases that's already done, but it's worth a check. I have stumbled upon cases where it wasn't, and it came as a huge surprise to me. The basic idea is that the browser doesn't need all the metadata that comes from how we humans write code: we use long words to name our variables, format code to be visually structured, leave comments, and so on. All of that is mandatory for a human to be able to read the code, but since the browser doesn't need any of it, we can remove it while keeping the logic intact.
Input JS:
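Here's a small, made-up function, written the way a human would write it:

```javascript
// Deliberately verbose: long names, comments and whitespace everywhere.
function calculateTotalPrice(unitPrice, quantity, taxRate) {
  // The subtotal before tax is applied.
  const subtotal = unitPrice * quantity;

  // Add the tax on top of the subtotal.
  const totalWithTax = subtotal * (1 + taxRate);

  return totalWithTax;
}
```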

Output:
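And roughly what a minifier such as Terser would turn it into:

```javascript
function calculateTotalPrice(t,n,a){return t*n*(1+a)}
```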

As you can see, the original input has a much larger memory footprint than the minified version. Smaller size means faster load, so that's good for us.
2. Images:
Revise the usage of images and other heavy content in your application.
If they are needed anyway, optimize them. Use the right format. Load images lazily. Load them at the right size: it doesn't make sense to download a 4 MB high-resolution image for a 300×150 px thumbnail. If possible, serve the images from a CDN.
This is a huge topic in itself, but a few rules save a lot of resources. Be sure not to load a picture larger than the resolution it's going to be shown at. If there are a lot of pictures on a single page, you can implement a lazy-loading mechanism, like the sketch below. And also keep geography in mind, because it's one thing to load a 2 MB picture from a "nearby" server, and quite another to get it from the other side of the world.
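As a rough sketch, here is one way to lazy-load images with the IntersectionObserver API (it assumes each image keeps its real URL in a data-src attribute; modern browsers also offer the native loading="lazy" attribute):

```javascript
// Start downloading an image only when it approaches the viewport.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // swap in the real URL, the download starts now
      observer.unobserve(img);   // each image only needs this once
    }
  }
});

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```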
3. Resources serving:
And by resources, I mean all of the files generated by your front-end build: HTML, CSS, JS, and static assets. You can save up to… well, you can save a lot if you serve them gzipped.
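If your build happens to be served by a Node/Express server, for example, the compression middleware does it in a couple of lines (the dist folder name is just an assumption about your build output):

```javascript
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());          // gzip every compressible response
app.use(express.static('dist')); // serve the front-end build output
app.listen(3000);
```

Most static hosts and reverse proxies (nginx, CDNs) can do the same with a configuration flag.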
4. Initial load:
It is often the case with SPAs that we don't need the whole app loaded when the user first lands. We can apply a little finesse and figure out which critical part of our app needs to be loaded initially, then request everything else on demand. The downside is that loading the other pages will be a bit slower, but in most cases that's insignificant compared to the benefit of a fast initial load.
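The mechanism behind this is usually a dynamic import(), which bundlers split into a separate chunk. A minimal sketch (the file and function names are made up; in Angular the same idea typically takes the form of lazy-loaded routes via loadChildren):

```javascript
// charts.js is not part of the initial bundle; it is downloaded
// the first time the user actually opens the stats page.
async function openStatsPage() {
  const { renderCharts } = await import('./charts.js');
  renderCharts();
}
```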
5. Third party libraries:
I can't imagine being a developer if a lot of people before me hadn't invested so much time in creating the great tools and packages I use every day, but you should not treat them as a panacea. The most common situation I have observed is using a really small part of a large package, but importing the whole library.
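A sketch of the difference, assuming lodash as the large package:

```javascript
// Pulls the entire lodash library into the bundle:
import _ from 'lodash';
_.debounce(onResize, 200); // onResize is whatever handler you are debouncing

// Pulls in only the single function we actually use:
import debounce from 'lodash/debounce';
debounce(onResize, 200);
```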

Before using a package, I ask myself a few simple questions:
Will it save me time, or can I implement it better myself?
What will be the impact on my bundle size?
What is the performance of the functionality I am going to use?
Let me share an example. It came as a bit of a shock to me:
In a large project dealing with a lot of data, we were using moment.js for basic date manipulations, like comparing two moments, calculating the days between two moments, etc. It was a huge surprise to me how slow those functions can get. Combined with a lot of unnecessary calls, they resulted in a lot of time wasted on nothing.
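As a hedged sketch of the kind of replacement that helps (the moment.js call is its real API; the dates are made up):

```javascript
import moment from 'moment';

const start = new Date('2020-01-01');
const end = new Date('2020-03-01');

// With moment.js: convenient, but every call allocates wrapper objects.
const days = moment(end).diff(moment(start), 'days');

// With plain Date objects: a single subtraction on epoch milliseconds.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const daysNative = Math.floor((end.getTime() - start.getTime()) / MS_PER_DAY);
```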
So, the bottom line: check what you are using as dependencies and analyze what you can live without. It will help you out ;)
6. API calls:
I know, API calls are a bit outside our power as front-end developers to optimize, but we can revise what we call and how often we call it. The second part is the more important one, because it's a common case that two features are developed simultaneously by small sub-teams without the design being discussed up front, so both features end up making the same requests several times. We can optimize that by caching the response and reusing it.
PS: This is a double-edged sword, because we also have to think about cache invalidation, so we don't end up with stale data.
Another example from my experience:
We had a dropdown component, part of some inherited codebase, which was working perfectly fine until we created a page with around 50 instances of it. The problem was not the dropdown itself, but the mindset while it was developed: it was coupled with the API it received its options from, and, as you can already guess, it was making 50 of the exact same requests… Yeah, I know.
The solution: since I didn't want to go through the huge project and refactor the logic of the data-supplying mechanism, I decided to touch only the component and introduce a caching strategy there. The basic idea was to have a centralized place serving data to all instances of the component. If the data was already present, or an API call was already in progress, I didn't trigger another request. That saved a lot!
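A minimal sketch of that idea (all of the names here are made up, and a real version would also need cache invalidation):

```javascript
// One shared cache for every dropdown instance, keyed by endpoint.
// Storing the promise itself also covers requests that are still in flight.
const optionsCache = new Map();

function getDropdownOptions(url) {
  if (!optionsCache.has(url)) {
    optionsCache.set(url, fetch(url).then((response) => response.json()));
  }
  return optionsCache.get(url);
}

// 50 dropdowns asking for the same endpoint fire exactly one request:
getDropdownOptions('/api/options');
getDropdownOptions('/api/options');
```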
7. Memoization
Guess what: we can cache not only data, but also the results of function calls, more precisely of pure functions. If you have that case and call such a function multiple times, you can reuse the result of a previous call with the same parameters. You would be pretty shocked how often a function can get called when it depends on some third-party library or component we didn't dig into too much before using it.
Here is an example: a Handsontable custom renderer. It's basically called on every possible event that happens in the browser, so think of a number… and multiply it by 1000. You may come close.
In that case it's much faster to have the result of the function ready without actually running it. That's the idea behind memoization: every time you call the function with a new parameter (the scope of "new" can vary), it stores the result for future use, and if the function happens to be called again with the same input, the result is extracted from the cache and we save one function run.
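A minimal memoizer for pure functions of one serializable argument might look like this (slowFormat is a made-up stand-in for the expensive function):

```javascript
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    const key = JSON.stringify(arg);
    if (!cache.has(key)) {
      cache.set(key, fn(arg)); // first time we see this input: compute and store
    }
    return cache.get(key); // repeated calls are a cache hit, no recomputation
  };
}

function slowFormat(value) {
  // ...imagine heavy formatting work here...
  return `formatted: ${value}`;
}

const fastFormat = memoize(slowFormat);
fastFormat(42); // computed and cached
fastFormat(42); // served straight from the cache
```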
8. Using the right data structure
Maybe the most beneficial refactoring I have done regarding performance. Imagine that you have two collections, each with several thousand entities, and the most frequent operation in your application is finding the right entry in one of the collections by some criteria.
If you are a victim of a poorly designed application state, or the requirement to support 30 entries grows to 3,000 or 30,000, don't worry: there is an approach for that too. You just have to restructure the data based on your specific criteria, and that will save you a lot of time.
So let's look at an example of that too:

Looking at the image above, let the left side be the representation of an array with 30k entries, not ordered or sorted in any way. Finding what we seek means comparing every single item with our desired result. That's going to take a huge amount of time, especially if our item is at the back of the cabinet (the end of the array).
The right side is the data structure designed around our most common usage. Maybe it's a Map, and inside it you have another Map. That way you can directly access what you need.
Let me demonstrate here:
First, I am going to show you how slow it can get when the data is stored as a plain array:
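Something along these lines; the shape and size of the items are my assumptions:

```javascript
// A flat collection of 30,000 items with a few props each.
const collection = [];
for (let i = 0; i < 30000; i++) {
  collection.push({
    name: `item-${i}`,
    age: i % 100,
    description: `some payload for item ${i}`,
  });
}
```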

We have a collection of objects with a few props each, as sketched above. Let's try simulating repetitive finding in that collection and see how long a million searches by two criteria take:
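The simulated lookups, roughly reconstructed (exact timings depend heavily on the collection size, where the item sits, and the JS engine):

```javascript
console.time('array find');
let found;
for (let i = 0; i < 1000000; i++) {
  // A linear scan: every search walks the array until the predicate matches,
  // and our target sits at the very back of the cabinet.
  found = collection.find((item) => item.age === 99 && item.name === 'item-29999');
}
console.timeEnd('array find');
```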

Around 3 seconds. Bear in mind that some optimizations are at play here, because we are searching for the same thing every time, and the CPU is good at predictable calculations.
Now, let's organize our cabinet a bit:
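One possible reconstruction of that restructuring:

```javascript
// Re-index the same data by our two search criteria: age, then name.
const indexed = {};
for (const item of collection) {
  if (!indexed[item.age]) {
    indexed[item.age] = {};
  }
  indexed[item.age][item.name] = item;
}
```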

I have reduced the same collection to a plain object, where the key is the age, and one level deeper we again have an object where the keys are the names of the items.
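And the same million searches against the re-indexed structure:

```javascript
console.time('indexed lookup');
let hit;
for (let i = 0; i < 1000000; i++) {
  // Two constant-time property accesses instead of a linear scan.
  hit = indexed[99] && indexed[99]['item-29999'];
}
console.timeEnd('indexed lookup');
```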

As you can see, there is a huge improvement for the same operation and the same number of searches: about 3.6 ms compared to ~3,000 ms. Around 850 times faster!
9. Web workers:
If you insist on having those fancy animations and complex data processing in the browser, there is still hope for you. You can take advantage of a Web API called Web Workers, which basically works around the main shortcoming of JavaScript: its single thread. It is an opportunity to run some heavy logic in the background, which frees up capacity on the main thread to deal with the user interface.
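A minimal sketch of handing work off to a worker (the file name and the crunch/updateUi functions are placeholders):

```javascript
// main.js: spawn the worker and hand it the heavy job.
const worker = new Worker('heavy-worker.js');
worker.postMessage({ items: bigDataset }); // bigDataset: your large input
worker.onmessage = (event) => {
  updateUi(event.data); // the main thread stayed responsive in the meantime
};

// heavy-worker.js: runs on a separate thread.
self.onmessage = (event) => {
  const result = crunch(event.data.items); // the expensive processing
  self.postMessage(result);
};
```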
How to identify poor performance code:
The only tool I have used for that purpose is the Chrome DevTools profiler. It will be my next topic, but you can go ahead and try it yourself. At first it's a bit overwhelming, but once you get the hang of it, it is really intuitive and easy to use. There you can peek into the details of how much time each of your many complex functions takes, and prescribe a diet to the heaviest ones.
Conclusion:
In modern web applications it's often hard to maintain high performance and a smooth user experience, but it's certainly not impossible. At the price of some head-banging you can always shave off a few milliseconds (or more) and hopefully satisfy the requirements, and most importantly yourself, with the speed boost. When implementing a piece of functionality, there are always opportunities to make it faster; you just have to think in that direction: do the absolute minimum in terms of the number of operations required to reach your goal. And when dealing with large datasets, always think about how they are going to be used, how often they are going to be used, whether you can cache something, and so on.
Take care and stay awesome!