goldbergyoni/nodebestpractices

Performance & diagnostics best practices - call for ideas

goldbergyoni opened this issue · 58 comments

We're kicking off the performance best practices section. Any idea regarding performance best practices or great bibliography to extract ideas from (youtube, blog post, etc)?

This issue will summarize all the ideas and bibliography and serve as a TOC for the writers. This is also a call for writers, or for anyone who would like to learn by practicing, discussing and writing.

@sagirk @BrunoScheufler @js-kyle

Title: Monitor first customer-facing metrics
Gist: Following Google's SRE book, monitoring should focus on metrics that immediately impact the customer, in practice the golden signals: API Latency, Traffic, Errors, and Saturation. Also relevant are the RED framework and the USE method

Anything else you monitor (e.g. event loop) is a cause and not a symptom. Quote from My Philosophy on Alerting:

"But," you might say, "I know database servers that are unreachable results in user data unavailability." That's fine. Alert on the data unavailability. Alert on the symptom: the 500, the Oops!, the whitebox metric that indicates that not all servers were reached from the database's client. Why?

Examples: we will show how to achieve this in Node.js
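A minimal sketch of what this could look like with prom-client and Express (an assumed stack, not a prescribed one), tracking the latency and error signals per route:

const express = require('express');
const client = require('prom-client');

const app = express();
const httpDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status'],
});

// record duration and status code for every request
app.use((req, res, next) => {
  const end = httpDuration.startTimer();
  res.on('finish', () => {
    end({ method: req.method, route: req.path, status: res.statusCode });
  });
  next();
});

// expose the metrics for the monitoring system to scrape
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});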

Perhaps debugging can be part of it? This is a good place - https://www.joyent.com/node-js/production/debug

The express documentation has good tips: https://expressjs.com/en/advanced/best-practice-performance.html

We may want to review the current production practices section, as they may overlap or be more appropriate as part of the new section, for example the bullet around serving assets from a CDN rather than Node.js

Use a version of Node that ships with the new TurboFan JIT compiler rather than just the older Crankshaft compiler.

Reasons why:
https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de

Deal with databases and external APIs in batches, meaning that a developer should favor fetching 100 entities using a single HTTP request instead of issuing 100 HTTP requests with a single document each.

The same goes for database operations: writing and fetching data is faster when done in a batch rather than as multiple separate operations.
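A minimal sketch of the idea (axios, the /users endpoint and the User model are assumptions for illustration only):

const axios = require('axios');

// don't: one HTTP request per id, N round trips
async function loadUsersOneByOne(ids) {
  return Promise.all(ids.map((id) => axios.get(`/users/${id}`)));
}

// do: a single batch request for all ids
async function loadUsersInBatch(ids) {
  const { data } = await axios.get('/users', { params: { ids: ids.join(',') } });
  return data;
}

// same idea for the database: one query instead of N (Sequelize-style model)
async function loadUsersFromDb(ids) {
  return User.findAll({ where: { id: ids } });
}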

Copy-pasting from the Reddit discussion:

"I usually write to log/monitor on DB query start and query end so I can later identify avg query time and identify the slowest queries. Using Sequelize you can pass the flag {benchmark:true} and get the query times logged. Later you can bake/export to your monitoring system and create metrics based on it"

Use factories or constructors to ensure objects are created with the same schema, so V8 won't have to generate new hidden classes at runtime (a.k.a. polymorphic vs monomorphic functions)

https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de
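A minimal sketch of the factory idea (createUser is a hypothetical example):

// don't: ad-hoc object literals with different shapes
const a = { id: 1, name: 'a' };
const b = { name: 'b', id: 2, premium: true }; // different shape than `a`

// do: a single factory that always produces the same shape,
// so all instances share one hidden class
function createUser(id, name, premium = false) {
  return { id, name, premium };
}
const c = createUser(1, 'a');
const d = createUser(2, 'b', true); // same hidden class as `c`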

When evaluating alternatives and needing to measure performance, use benchmarking tools like autocannon and benchmark.js, which provide more precise results (microsecond timing, many iterations) and better benchmarking throughput (e.g. issuing more calls per second)
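A minimal sketch with benchmark.js comparing two loop styles (autocannon would be used similarly, but against a running HTTP endpoint):

const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();

suite
  .add('Array#forEach', () => {
    let sum = 0;
    [1, 2, 3, 4, 5].forEach((n) => { sum += n; });
  })
  .add('classic for loop', () => {
    const arr = [1, 2, 3, 4, 5];
    let sum = 0;
    for (let i = 0; i < arr.length; i++) sum += arr[i];
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();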

Analyzing and monitoring cpu bottlenecks or memory leaks with Node Clinic/Prometheus

A few others:

  • Use gzip compression (see the sketch after this list)
  • Serve from a CDN
  • Use a priority queue for high DB usage/long-running CPU processes
  • Optimize queries (such as indexing)
  • Parallelize operations where possible
  • Use HTTP/2
  • Use the cluster module
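A minimal sketch of two of the bullets above (gzip via the compression middleware and the cluster module), assuming an Express app:

const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // one worker per CPU core
  os.cpus().forEach(() => cluster.fork());
} else {
  const express = require('express');
  const compression = require('compression');

  const app = express();
  app.use(compression()); // gzip responses
  app.listen(3000);
}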

References:

I think a good performance practice is to use a load balancer for requests, and a load balancer to distribute the Node.js app across the multiple cores of your CPU (because, as we know, Node is single-threaded). The latter can be achieved extremely easily using pm2, setting one flag that does the multi-core load balancing automatically.
Besides, pm2 makes it really easy to monitor the memory and CPU usage of your app, and it has a lot of other amazing features.

http://pm2.keymetrics.io/

https://medium.com/iquii/good-practices-for-high-performance-and-scalable-node-js-applications-part-1-3-bb06b6204197

@TheHollidayInn Great list!

Few remarks/questions:

  1. Use gzip compression & serve from a CDN - these are more frontend-related tips; are we covering frontend as well? I've never dealt with these questions, so I don't know, but I tend to think not. What do @sagirk, @BrunoScheufler and @js-kyle think?
  2. HTTP/2 - do we have evidence/a reference that HTTP/2 reduces the load on the server?

@barretojs First, welcome on board, good to have you here. I'm not sure about this advice, as PM2 uses the cluster module under the hood (I think so at least, does it?), which seems to be slower than a *real* router like nginx or iptables

https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272

What do you think?

@i0natan Np!

  1. For gzip and CDN, I included this because they are decisions you'd make while building out web apps (like using Express). If you serve static content with express, using gzip will be helpful. Also, it might be good to just note the benefits of using a CDN rather than serving your own static content.
  2. I'm not sure load is improved, although multiplexing may help. Here is a list of benefits: https://webapplog.com/http2-node. To me, it seems more helpful for the front end, but I'll admit I'm still new to HTTP/2.

@i0natan You're right. I wasn't aware of iptables' capability, and the numbers are impressive. I think there's no doubt that for pure performance it distributes the load way better. Thanks for the great in-depth article!

/remind me in 2 days

@i0natan set a reminder for Oct 9th 2018

I'd definitely +1 using the CDN for static assets - we have this as a production best practice at the moment. I think the performance section and production sections are going to have a lot of potential crossover so it's a good time to clear up the differences

Again, with gzip & other CPU-intensive tasks, we actually recommend that they are better handled outside of Node for performance reasons. So I guess it's whether we want to adjust our view on this, or, as part of this section, recommend not using Node for them. Thoughts? See current bullets 5.3 & 5.11

👋 @i0natan,

Adding one more which exists in our production best practices: set NODE_ENV=production if rendering on the server as it utilizes the templating engine cache

I also highly recommend this video to be included in our bibliography: https://www.youtube.com/watch?v=K8spO4hHMhg&vl=en

We should add a note on not blocking the Event loop (https://nodejs.org/en/docs/guides/dont-block-the-event-loop/)
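A minimal sketch of the idea, assuming an Express route that hashes passwords (the route and parameters are illustrative only): prefer the asynchronous API so the work runs on the libuv thread pool instead of blocking the event loop.

const crypto = require('crypto');

// don't: blocks the event loop for every request
app.post('/login-sync', (req, res) => {
  const hash = crypto.pbkdf2Sync(req.body.password, 'salt', 100000, 64, 'sha512');
  res.send(hash.toString('hex'));
});

// do: the hashing is offloaded, the event loop keeps serving other requests
app.post('/login', (req, res) => {
  crypto.pbkdf2(req.body.password, 'salt', 100000, 64, 'sha512', (err, hash) => {
    if (err) return res.sendStatus(500);
    res.send(hash.toString('hex'));
  });
});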

@VinayaSathyanarayana Great resource!

This is roughly what our current suggestions are looking like in a list

Monitor customer-facing metrics
Use text compression
Set NODE_ENV to production
Avoid synchronous functions
Run Node.js in a cluster
Choose the classic for loop over forEach/ES6 for...of when dealing with huge amounts of data
Use a version of Node that ships with the new TurboFan JIT compiler
Deal with database and external APIs in batches
Benchmark slow DB calls
Use factories or constructors to ensure objects are created with the same schema so V8 won't have to generate hidden classes at runtime
Analyze repeated API Calls/Database Queries to cache them
Debounce and Throttle
Benchmarking with load testing tools
Analyzing and monitoring cpu bottlenecks or memory leaks with Node Clinic/Prometheus
Serve static assets from a CDN
Use HTTP/2
Don't block the event loop

We may cooperate or grab ideas from:
nodejs/diagnostics#211

Is it okay to submit PRs for drafts of these?

@TheHollidayInn Yes!

We've just had some discussions about this, and have decided to change the approach for developing this section. In summary:

  • we'd like to add complete practices/bullets to master, and note that the section is in progress (rather than waiting until the entire section is complete) - we want to add value as soon as we can! 👍
  • once we have agreed upon a performance practice here, we will create a new issue in this repository to track the work - this is good for visibility as well as ensuring we don't get double-ups with two people working on the same bullet

Do you have a bullet from the list above which you have a preference to work on first? If so, I can create the first issue based on that one so we have a good model to work on for the rest of the list.

Look forward to seeing your content, as always feel free to check in with a PR early on to check your approach if needed

Great! I had some copy written up as a quick draft for using Compression. I went ahead and opened a PR here: #288. I'm happy to change anything to fit your new flow.

Not sure if it was mentioned before, but here are other (perhaps not NodeJS-centric) things that I think should be added:

  • Not using libraries like lodash/underscore when native approaches outweigh the use of such libraries (a whole list of examples can be found here)
  • Memoising data (e.g: calculations, I/O resources, ...) when possible
  • Tree-shaking dependencies to only use what's needed

For MySQL Users:

  • Always run chains of queries in a TRANSACTION; let MySQL handle rollback and commit rather than doing it yourself. This helps you move faster with less maintenance.
  • Create a pool of connections so you can use more than one connection concurrently, and make sure you release connections when you're done.
  • Promisify your DB Connections and run queries in parallel via async/await and Promise.all() (see the sketch after this list)
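A minimal sketch of these three points with the mysql2/promise driver (an assumption; table and column names are purely illustrative):

const mysql = require('mysql2/promise');

// one shared pool for the whole app
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'shop', connectionLimit: 10 });

// independent reads can run in parallel on the pool
async function loadDashboard(userId) {
  const [[userRows], [orderRows]] = await Promise.all([
    pool.query('SELECT * FROM users WHERE id = ?', [userId]),
    pool.query('SELECT * FROM orders WHERE user_id = ?', [userId]),
  ]);
  return { user: userRows[0], orders: orderRows };
}

// dependent writes go through one connection inside a transaction
async function placeOrder(userId, items) {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();
    const [result] = await conn.query('INSERT INTO orders (user_id) VALUES (?)', [userId]);
    await conn.query('INSERT INTO order_items (order_id, item) VALUES ?', [items.map((item) => [result.insertId, item])]);
    await conn.commit();
    return result.insertId;
  } catch (err) {
    await conn.rollback(); // MySQL undoes the whole chain for us
    throw err;
  } finally {
    conn.release(); // always give the connection back to the pool
  }
}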

@Berkmann18 Great set of advice. And welcome aboard.

Why does removing orphaned dependencies (tree shaking) improve performance? And what is "memoising"?

@mohemos Welcome & Thank you. Adding some of the advice to the pool #302

@Berkmann18 Thank you.
Tree shaking improves performance because it includes only the code that is actually used, instead of importing/requiring every exported function/data structure from modules/libraries/frameworks, which makes the resulting code lighter and easier to transfer and parse.
Here's another explanation of what tree shaking is.

Example:
Importing everything but using only add:

//NodeJS
const maths = require('maths');

//ES
import * as maths from 'maths';

Only importing add.

//NodeJS
const { add } = require('maths');
//Or
const add = require('maths').add;

//ES
import { add } from 'maths';
import add from 'maths/add';

Memoising is where code does something once and then, instead of doing it again (which would waste resources doing the exact same thing as before), it simply reuses what was already done and stored in a variable.

Example:
Without memoising:

const fx = (x, y) => {
  console.log('The result is', doALongCalculation(x, y));
  return doALongCalculation(x, y);
};

With memoising:

const fx = (x, y) => {
  const result = doALongCalculation(x, y);
  console.log('The result is', result);
  return result;
};

So it's basically where the code memorises some data in a variable instead of re-doing the computation/fetching process again and again.
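A minimal sketch of memoisation across calls (doALongCalculation is the same hypothetical function as above): cache results keyed by the arguments, so repeated calls with the same input skip the expensive work entirely.

const cache = new Map();

const memoisedFx = (x, y) => {
  const key = `${x}:${y}`;
  if (!cache.has(key)) {
    cache.set(key, doALongCalculation(x, y)); // computed only once per (x, y)
  }
  return cache.get(key);
};

memoisedFx(2, 3); // computed
memoisedFx(2, 3); // served from the cache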

@Berkmann18 Thanks for the comprehensive and great explanation.

Memoising sounds like a good fit for the General bullet - a bullet which holds all generic practices that are not node.js related

About tree shaking - I'm familiar with it; for frontend the benefit is obvious, but why would it improve backend performance? At runtime all the files are parsed anyway (this is how 'require' works). Do you mean to also include re-bundling of the code and packing it into fewer files? I'm looking for the exact part that will boost performance

@i0natan You're welcome 😄 .
Tree shaking will prevent servers from loading things they don't need, which lets them get to the main code faster and avoids going through more resources than necessary to run.
It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit).
When it comes to bundling, it allows the bundling tool to include less stuff, which encourages a smaller, more optimised bundle.

We should add throttling to the list

We should add
"Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop

@Berkmann18

It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit).

Can you provide an example, maybe even using code, and some data/benchmarks that show the impact? Just want to ensure we address popular issues that leave a serious performance footprint

@VinayaSathyanarayana Thanks for joining the brainstorm :]

"Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop

Can you provide a (pseudo) code example? Are we concerned because of performance or because of code readability/errors?

@i0natan Sure, I've made a repo with benchmarks using only math as an example.

@AbdelrahmanHafez For your proposed idea Deal with database and external APIs in batches, do you have any links or references for this?

* Promisify your DB Connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a lot of sequential async/await should be a performance best practice. I.e. if we need to make 4 API calls with Axios to build our complete response, instead of using await for each one it is better to use a single await with Promise.all().
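A minimal sketch of the idea, assuming axios and four independent, purely illustrative endpoints:

const axios = require('axios');

// don't: roughly 4x the latency, each call waits for the previous one
async function buildResponseSequential() {
  const user = await axios.get('/api/user');
  const orders = await axios.get('/api/orders');
  const cart = await axios.get('/api/cart');
  const offers = await axios.get('/api/offers');
  return { user: user.data, orders: orders.data, cart: cart.data, offers: offers.data };
}

// do: the four calls run in parallel, one await for all of them
async function buildResponse() {
  const [user, orders, cart, offers] = await Promise.all([
    axios.get('/api/user'),
    axios.get('/api/orders'),
    axios.get('/api/cart'),
    axios.get('/api/offers'),
  ]);
  return { user: user.data, orders: orders.data, cart: cart.data, offers: offers.data };
}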

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

@js-kyle Maybe add this idea to our TOC issue?

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

I'd be glad to, in which branch should I make the PR?

I'd be glad to, in which branch should I make the PR?

@migramcastillo If you meant to ask what branch to raise your PR against, that would be master. If you meant to ask what branch to make your changes in, feel free to create a new branch in your fork.

stale commented

Hello there! 👋
This issue has gone silent. Eerily silent. ⏳
We currently close issues after 100 days of inactivity. It has been 90 days since the last update here.
If needed, you can keep it open by replying here.
Thanks for being a part of the Node.js Best Practices community! 💚

* Promisify your DB Connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a lot of sequential async/await should be a performance best practice. I.e. if we need to make 4 API calls with Axios to build our complete response, instead of using await for each one it is better to use a single await with Promise.all().

Although Promise.all is better than sequential non-dependent promises in most cases, it works best only when the number of calls/promises is known beforehand, or is at least small. With a dynamic number of promises (e.g. batching at runtime), if the number of calls increases significantly it might end up exhausting the whole connection pool, creating deadlocks, or, if they are HTTP calls, spamming the service. Sample Stack Overflow link: https://stackoverflow.com/questions/53964228/how-do-i-perform-a-large-batch-of-promises
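A minimal sketch of one way to bound the concurrency (fetchItem is a hypothetical call; libraries like p-limit wrap the same idea in a helper):

// process a dynamic list of calls in fixed-size batches instead of firing
// them all at once with a single Promise.all
async function processInBatches(ids, batchSize = 5) {
  const results = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    // only `batchSize` calls are in flight at any time
    results.push(...await Promise.all(batch.map((id) => fetchItem(id))));
  }
  return results;
}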

  1. Maybe have a section for in-memory caching and best practices around implementing and maintaining a cache. There are a few packages that support this functionality. Also, it seems no shared in-memory cache implementation is possible in cluster mode.

  2. Would like to highlight a recent video released by the Chrome team, https://www.youtube.com/watch?v=ff4fgQxPaO0, that talks about how to have faster apps when dealing with large object literals.

When writing an NPM module, it's a good idea to measure the time needed to require() the module you are developing, e.g.

console.time('require');
const myModule = require('my-module');
console.timeEnd('require');

Reason: this time is added to the overall startup time of every code that uses your module. It doesn't play a big role for long-running servers, but is very important for CLI utilities (pre-commit hooks, linters etc.).

Few examples:

How to fix/mitigate:

  • provide granular exports in your module, like ramda and lodash do - consumers may require() only what they need
  • use exact imports, e.g. const complement = require('ramda/src/complement'); instead of const R = require('ramda');
  • use https://www.npmjs.com/package/import-lazy or something similar to require modules only when you are 100% sure you need them (see the sketch after this list), e.g. ESLint should not load every plugin listed in the config, but only those needed to lint the specified files (don't load TSX rules if you are checking just a plain JS file, don't load mocha-specific rules if there is no mocha test file to lint)
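A minimal sketch of deferring a require() until it is actually needed (heavy-dep is a hypothetical, expensive-to-load dependency; import-lazy wraps the same pattern in a helper):

let heavyDep; // loaded on first use, not at startup

function runRareCommand(args) {
  if (!heavyDep) {
    heavyDep = require('heavy-dep'); // paid for only when this command runs
  }
  return heavyDep.run(args);
}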

Reduce the number of hidden classes

Benefit from the inline cache and let TurboFan generate optimized assembly code

Use the ESLint sort-keys rule to ensure the same internal hidden class for your objects.

This allows you to fully benefit from inline caching. Also, TurboFan will be able to compile the JS code into optimized machine code.
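A minimal sketch of the idea (the point objects are purely illustrative):

// don't: same keys in a different order produce two different hidden classes,
// so call sites that handle both become polymorphic
const p1 = { x: 1, y: 2 };
const p2 = { y: 4, x: 3 };

// do: identical key order everywhere (this is what the sort-keys rule enforces),
// so every object shares one hidden class and inline caches stay monomorphic
const p3 = { x: 1, y: 2 };
const p4 = { x: 3, y: 4 };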

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Hidden classes and Inline cache
Inline Caches with Monomorphism, Polymorphism and Megamorphism
Interpreter and Compiler: hidden classes, inline caching, polymorphism and megamorphism
Turbofan Speculative Optimization

Prefer monomorphic functions

Prefer the usage of monomorphic functions instead of polymorphic functions.

This allows:

  • TurboFan to compile them into optimized assembly
  • better inline caching for fast property access
example
// don't
function add (a, b) {
  return a + b
}
add(21, 42);
add(21.2, 42.3);

// do
function addInt(a, b) {
  return a + b;
}
function addFloat(a, b) {
  return a + b;
}
addInt(21, 42);
addFloat(21.2, 42.3);

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Inline Caches with Monomorphism, Polymorphism and Megamorphism
Interpreter and Compiler: hidden classes, inline caching, polymorphism and megamorphism

Avoid dynamic allocation of functions

Avoid declaring functions inside other functions.

Functions are objects, and object allocation is costly, as is the resulting garbage collection.
Also, since the function is re-declared at each call, V8 will not be able to optimize it (compilation into optimized assembly, inline caching for fast property access)

example
// don't
function add(a, b) {
  const addInt = (a, b) => {
    return a + b;
  }

  if (typeof a === 'number') {
    return addInt(a, b);
  }
  
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}

// do
function addInt(a, b) {
  return a + b;
}

function add(a, b) {
  if (typeof a === 'number') {
    return addInt(a, b);
  }
  
  if (typeof a.x === 'number') {
    return addInt(a.x, b.x);
  }
}

If you are interested I can write a section on this topic, providing code example, performance diff, etc

Closures Compilation and Allocation
How Closures Works

Avoid process.env. Copy and cache it (see the sketch after this list): https://www.reddit.com/r/node/comments/7thtlv/performance_penalty_of_accessing_env_variables/
The whole concept of object shapes: https://mathiasbynens.be/notes/shapes-ics (previously mentioned in the context of monomorphic params, but it's also important in the context of initializing variables)
Avoid using express.js (it is really, really slow: https://github.com/fastify/benchmarks)
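A minimal sketch of the first point (NODE_ENV and PORT are just example variables): read process.env once at startup and reuse a plain object in hot paths.

const config = {
  nodeEnv: process.env.NODE_ENV || 'development',
  port: Number(process.env.PORT) || 3000,
};

function isProduction() {
  return config.nodeEnv === 'production'; // no repeated process.env lookup
}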

@goldbergyoni Are you still looking for contributor on this topic?

There are really good ideas here! We should add them

I also have some:

  1. Prefer branchless programming (but make sure it's not affecting readability, and always measure first to check whether it's worth it, as with all performance tips)
  2. Wrap large data with closure so it can get garbage collected
// Bad - really large file not getting garbage collected 
function run() {
   const reallyLargeFile = readFile()
   const transformed = transformData(reallyLargeFile)

   // do some extra work

}

// Good - really large file getting garbage collected
function run() {
   let transformed;

    // prefer extract to function instead of empty closure
   {
     const reallyLargeFile = readFile()
     transformed = transformData(reallyLargeFile)
   }

   // do some extra work

}
  3. Read large files in streams instead of all at once
  2. Wrap large data with closure so it can get garbage collected
   // prefer extract to function instead of empty closure
   {
     const reallyLargeFile = readFile()
     transformed = transformData(reallyLargeFile)
   }
   // do some extra work

Looks like in order to benefit from this change, the "extra work" part

  • either should have a significant execution time, e.g. 5 seconds - then in the original code reallyLargeFile will be unavailable for GC during these 5 seconds, and in the modified code it could be GCed if the engine decides to start a GC during that time
  • or should have high memory consumption, to trigger a full GC (usually the engine gives it a last try and tries to free all possible memory before throwing an "out of memory" error)

I think these details should be noted - the readers should not follow the advice blindly but should have some understanding of why and when it helps.

Also, it would be great to have some proofs or benchmarks to illustrate this idea.

  3. Read large files in streams instead of all at once

But we've already seen const reallyLargeFile = readFile() in advice number 2. Could you please make the code example from advice 2 consistent with advice 3 (i.e. use something else instead of reading a large file at once)?