Fastify under-pressure: how to find out which routes are the cause
meotimdihia opened this issue · 12 comments
Do we have a way to find out which routes are the problem?
```js
const fastify = require('fastify')()
const underPressure = require('@fastify/under-pressure') // or 'under-pressure' on older versions

fastify.register(underPressure, {
  maxEventLoopDelay: 1000,
  exposeStatusRoute: true,
  pressureHandler: (req, rep, type, value) => {
    if (type === underPressure.TYPE_HEAP_USED_BYTES) {
      fastify.log.warn(`too many heap bytes used: ${value}`)
    } else if (type === underPressure.TYPE_RSS_BYTES) {
      fastify.log.warn(`too many rss bytes used: ${value}`)
    } else if (type === underPressure.TYPE_EVENT_LOOP_DELAY) {
      fastify.log.warn(`reached max event loop delay: ${value}`)
      // assumes a Sentry decorator has been registered elsewhere
      fastify.Sentry.captureMessage('reached max event loop delay')
    }
    rep.send('out of memory') // if you omit this line, the request will be handled normally
  }
})
```
Please give me a tip/guide. I don't know how to do it.
`req` should have all the information regarding the route. I'm a bit concerned about that `Infinity`, can you add a full reproduction?
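A minimal sketch of logging that information from the handler, assuming the same options as in the original snippet; `req.method` and `req.url` are standard Fastify request properties:

```js
// Minimal sketch: log which request was in flight when pressure was detected.
const fastify = require('fastify')()
const underPressure = require('@fastify/under-pressure')

fastify.register(underPressure, {
  maxEventLoopDelay: 1000,
  pressureHandler: (req, rep, type, value) => {
    // `req` is the Fastify Request that was in flight when the threshold tripped,
    // so its method and URL tell you which route was being handled.
    fastify.log.warn(
      { method: req.method, url: req.url, type, value },
      'under pressure while handling this request'
    )
    rep.send('out of memory')
  }
})
```

Keep in mind this only identifies the request that happened to be in flight when the threshold tripped, which is not necessarily the route that caused the slowdown; the per-route response-time logging discussed later in the thread is better suited for that.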
@mcollina thanks. This project's source code is big, so I need to narrow down the cause first. I'll add a demo if I can reproduce it.
@mcollina I still can't fix this problem. My server is only using 33% CPU and 50% memory.
Is there any way to utilize 100% of the CPU?
I'm sorry, but I don't understand the question. The whole point of under-pressure is to prevent the server from overloading, keeping it responsive at all times.
The fact that your server runs at 33% CPU can mean all sorts of things. Usually you are exhausting some other limited resource, such as a connection pool, an OS buffer, or the file descriptor limit.
If you remove under-pressure, can you get it to reach 100%?
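One hedged way to check this from the Node side, using only the built-in `perf_hooks` module (nothing under-pressure specific): if event loop utilization stays low while throughput is stuck, the process is probably waiting on an external resource rather than burning CPU. The sampling interval and resolution below are arbitrary choices.

```js
// Diagnostic sketch: is the process CPU-bound or mostly waiting on I/O?
const { performance, monitorEventLoopDelay } = require('node:perf_hooks')

const histogram = monitorEventLoopDelay({ resolution: 20 })
histogram.enable()

let lastElu = performance.eventLoopUtilization()
setInterval(() => {
  lastElu = performance.eventLoopUtilization(lastElu)
  console.log({
    // close to 1 => the event loop is busy (CPU-bound JavaScript)
    // close to 0 => the process is mostly idle / waiting on external resources
    eventLoopUtilization: lastElu.utilization.toFixed(2),
    maxDelayMs: (histogram.max / 1e6).toFixed(1)
  })
  histogram.reset()
}, 5000)
```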
@meotimdihia Did you figure out the exact issue or external resource that was causing this? I am looking into the same issue and hoping for a hint to reproduce and fix it.
@deepaknf it was as mcollina said: "under-pressure is to prevent the server from overloading, keeping it responsive at all times." If the problem happens, you should optimize your code or upgrade your server.
I guess the original question was more about how to identify the routes that are causing issues, right?
@meotimdihia I was wondering how to find the route; `req` has all the information about it, as mcollina mentioned. Do you expect anything else from this issue?
I think `fastify-under-pressure` doesn't give enough information for you to debug this.
You have to log the response time for each route (see the sketch after this list). From my experience, the options are:
- Using APM tools like Datadog, New Relic, etc. These can be quite expensive, so I don't personally use them.
- Building a custom solution yourself. Platformatic supports this: https://platformatic.dev/docs/guides/telemetry (though I don't use Platformatic currently, so that is a future option for me).
- Optimizing database queries, which is what usually fixes this kind of issue. You have to log slow queries in your database. I am using PostgreSQL with https://github.com/darold/pgbadger, which works fine for me for finding the common problems.
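A minimal sketch of the per-route response-time logging mentioned above, using plain Fastify hooks rather than an APM; the `SLOW_MS` threshold and the log message are illustrative, not part of any library:

```js
// Minimal sketch: per-route response time logging with plain Fastify hooks.
const fastify = require('fastify')()
const SLOW_MS = 500 // hypothetical threshold; tune for your app

fastify.decorateRequest('startTime', 0)

fastify.addHook('onRequest', async (req) => {
  req.startTime = process.hrtime.bigint()
})

fastify.addHook('onResponse', async (req, reply) => {
  const elapsedMs = Number(process.hrtime.bigint() - req.startTime) / 1e6
  if (elapsedMs > SLOW_MS) {
    // method + url identify the route; statusCode separates errors from slow successes
    fastify.log.warn(
      { method: req.method, url: req.url, statusCode: reply.statusCode, elapsedMs },
      'slow route'
    )
  }
})
```

Logging every route slower than a threshold like this is usually enough to spot which endpoints are pushing the event loop delay up before reaching for a full APM.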