Optimizing Node.js code


You can optimize your JavaScript code at different levels. Sometimes optimization is simply a matter of good practices, such as avoiding logging inside loops.

This is not a bible, just a guide with some tips that you may or may not implement in your projects. There are no recipes, just good practices.

Most of these tips can also be applied to other programming languages.

Logging

It’s normal and necessary to add some log lines so we have clues when things go wrong. But logging is not cheap, even more so when we print dynamic logs such as:

      
console.log('My variable value is: '+myVar);
   
    

A rule of thumb for logging is to avoid printing inside loops. So, avoid deploying code to production like this:

      
for (let i = 0; i < 10; i++) {
  console.info('I am ' + i);
}
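
If you really need that information, a cheaper option is to collect it inside the loop and print a single line afterwards. A rough sketch of the idea (the variable names are just illustrative):

const processedIds = [];
for (let i = 0; i < 10; i++) {
    processedIds.push(i);
}
// One log line instead of one per iteration
console.info('Processed: ' + processedIds.join(', '));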
   
    

SQL queries

SQL queries are our biggest bottleneck most of the time, so cache as much as possible to avoid unnecessary round trips.

Luckily, there’s an easy way to know how much time a particular SQL query takes:

      
console.time('myQuery');
//execute query
console.timeEnd('myQuery');
 
    

The above code will print something like:

myQuery: 2398ms
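
In a real handler it could look like the sketch below, assuming a promise-based database client (db.query and the SQL string are just placeholders):

async function getActiveUsers(db) {
    console.time('activeUsersQuery');
    // db.query is a hypothetical promise-based client call
    const rows = await db.query('SELECT id, name FROM users WHERE active = true');
    console.timeEnd('activeUsersQuery');
    return rows;
}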

Cache

Web service level
I used the apicache module. By default, it works as an in-memory cache, but you can also configure it to make it persistent with Redis.
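
A minimal sketch with Express (the route and the cache duration are just illustrative):

const express = require('express');
const apicache = require('apicache');

const app = express();
const cache = apicache.middleware;

// Responses for this route are cached in memory for 5 minutes
app.get('/api/users', cache('5 minutes'), (req, res) => {
    res.json([{ id: 1, name: 'Andrés' }]);
});

app.listen(3000);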

Database level
I never used a Node module that handles database-level caching. I just stored some results in variables, and that was enough for my requirements.
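
A rough sketch of that idea (getUsersFromDb, the TTL and the data shape are just placeholders):

let cachedUsers = null;
let cachedAt = 0;
const TTL_MS = 60 * 1000; // keep the results for one minute

async function getUsers() {
    // Serve the in-memory copy while it's still fresh
    if (cachedUsers && Date.now() - cachedAt < TTL_MS) {
        return cachedUsers;
    }
    cachedUsers = await getUsersFromDb(); // hypothetical query function
    cachedAt = Date.now();
    return cachedUsers;
}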

async/await
The async and await keywords are great. They make our code more readable, but sometimes we forget that we should parallelize as much as possible. Let's see an example:

      
// bad 
async function getUserInfo(id) {
    const profile = await getUserProfile(id);
    const repo = await getUserRepo(id);
    return { profile, repo }
}
 
    
      
// good 
async function getUserInfo(id) {
    const [profile, repo] = await Promise.all([
        getUserProfile(id),
        getUserRepo(id)
    ])
    return { profile, repo }
}
 
    

Promises

Written this way, the heavy operation runs inside the promise executor, which executes synchronously on the main thread (the event loop), so the caller is blocked as soon as the promise is created:

      
return new Promise((resolve, reject) => {
        //my heavy operation
        resolve('something');
    });
 
    

Written this way, the heavy operation is deferred to the .then callback, which runs as a microtask after the current synchronous code finishes, so the caller is not blocked at creation time (the callback itself still runs on the event loop, not on a separate thread):

      
return Promise.resolve().then(() => {
        //my heavy operation
        return 'something';
    });
 
    

A small benchmark of this:

      
const size = 1000 * 1000 * 100;
const array = new Array(size);
 
function doSomethingHeavy() {
    let i = 0;
    while (i < array.length) {
        i++;
    }
}

function doSomethingHeavyWithPromise() {
    return new Promise(function(resolve, reject) {
        doSomethingHeavy();
        resolve('done with promise');
    });
}

function doSomethingHeavyWithEnhancedPromise() {
    return Promise.resolve().then(function() {
        doSomethingHeavy();
        return 'done with enhanced promise';
    });
}
 
console.time('promise');
doSomethingHeavyWithPromise().then(function(result) {
    console.info(result);
});
console.timeEnd('promise');
//prints: promise: 69.772ms (the executor runs synchronously, blocking the event loop)

console.time('enhancedPromise');
doSomethingHeavyWithEnhancedPromise().then(function(result) {
    console.info(result);
});
console.timeEnd('enhancedPromise');
//prints: enhancedPromise: 0.135ms (the heavy work is deferred to a microtask, so the timer stops before it runs; it still executes on the event loop afterwards)
 
    

Different flavors of for

In JavaScript, we have several ways to iterate using a for statement. Let's use an example to find out which of them is the most efficient.

      
const size = 1000 * 1000 * 10;
const array = new Array(size);
function doSomething() {
  let i = 0;
  i++;
}
 
console.time("classicForWithLength");
for (let i = 0; i < array.length; i++) {
  doSomething();
}
console.timeEnd("classicForWithLength");
 
console.time("classicForWithSize");
for (let i = 0; i < size; i++) {
  doSomething();
}
console.timeEnd("classicForWithSize");
 
console.time("forEach");
array.forEach(element => {
  doSomething();
});
console.timeEnd("forEach");
 
console.time("forIn");
for (let e in array) {
  doSomething();
}
console.timeEnd("forIn");
 
console.time("forOf");
for (let e of array) {
  doSomething();
}
console.timeEnd("forOf");
 
console.time("forEachWithFunction");
array.forEach(function(item, index, object) {
  doSomething();
});
console.timeEnd("forEachWithFunction");
 
console.time("forEachWithArrow");
array.forEach((item, index, object) => {
  doSomething();
});
console.timeEnd("forEachWithArrow");
 
    

The output of this script:

      
classicForWithLength: 21.604ms
classicForWithSize: 11.532ms
forEach: 32.330ms
forIn: 59.182ms
forOf: 185.412ms
forEachWithFunction: 32.033ms
forEachWithArrow: 32.564ms
 
    

Some interesting conclusions we might draw:

The fastest is the classic for statement.

i < constantValue is better than i < myCollection.length, so you should use the classic for when iterating over big collections!

Tools

Luckily, we have a lot of awesome tools for benchmarking, such as JMeter or Artillery. They are mostly used to load test web services.

Artillery

I used Artillery and it's a nice tool. I started with a simple hello world using the CLI, but we can also write tests in YAML files, as in the sketch below.
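
For reference, a minimal YAML test could look something like this (the target URL and the numbers are just placeholders); it would run with artillery run test.yml:

config:
  target: "http://localhost:3000"
  phases:
    - duration: 60     # run the test for 60 seconds
      arrivalRate: 10  # 10 new virtual users per second
scenarios:
  - flow:
      - get:
          url: "/api/users"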

