DoctorMcKay/node-steam-tradeoffer-manager

Polling Overloading

JustAzul opened this issue · 5 comments

Logs

(screenshot)

Active Requests Screenshot

(screenshot)

The event loop gets overloaded during poll updates; when it executes full updates, the overload is even bigger.

First polling: 35k requests
Second polling onward: 10-15k requests

Update 2

After some debugging, I've got some results.

PM2 Active Requests

(screenshot)

Debug Screenshots
Code

(screenshot)

Logs

(screenshot)

Update 3

After several tests, I've found the source of this issue.

The problem is in I/O-bound operations: when assetCacheMaxItems is not set to a large value, the Node process executes a HUGE number of disk I/O operations all at once, freezing the entire event loop.

Possible solutions:

There's no synchronous I/O in tradeoffermanager, and Helpers.createOfferFromData doesn't do any I/O at all. It sounds like you have an issue somewhere else that's blocking your event loop.

If you're operating at a massive scale, you probably want to disable asset description retrieval by omitting the language option in the constructor and handle mapping item ids to descriptions yourself elsewhere. That eliminates the need for either memory capacity or disk I/O to cache the descriptions.
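As a sketch of that suggestion (option names as I understand them from the tradeoffer-manager README; the surrounding setup is assumed, not taken from this thread):

```javascript
// Hypothetical configuration sketch: constructing the manager WITHOUT the
// `language` option disables asset description retrieval, so no description
// cache (memory or disk) is needed. `client` is an assumed SteamUser instance.
const TradeOfferManager = require('steam-tradeoffer-manager');

let manager = new TradeOfferManager({
    steam: client,       // your logged-in SteamUser instance (assumed to exist)
    // language: 'en',   // omitted on purpose: no descriptions are fetched
    pollInterval: 30000
});
```

With descriptions disabled, you would map asset/classinfo ids to descriptions yourself in whatever datastore fits your scale.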


So, why is it that when I use this:

assetCacheMaxItems: Number.MAX_SAFE_INTEGER

the event loop stops freezing?

Setting a big number on assetCacheMaxItems makes the read, save, and JSON.parse/stringify operations be skipped entirely, right?

Please correct me if I'm wrong; I'm here to learn (and to help fix issues, if any) :)
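The cache behavior being asked about could be modeled roughly like this (a deliberately simplified, hypothetical sketch, not the library's actual implementation): an in-memory LRU cache capped at assetCacheMaxItems, with evicted entries falling back to disk. When the cap is huge, nothing is ever evicted, so disk reads never happen.

```javascript
// Simplified, hypothetical model of a size-limited asset cache.
// NOT the actual tradeoffermanager code; illustration only.
class AssetCache {
    constructor(maxItems) {
        this.maxItems = maxItems;
        this.memory = new Map(); // Map insertion order doubles as LRU order
        this.disk = new Map();   // stand-in for JSON files on disk
        this.diskReads = 0;
    }

    set(key, value) {
        if (this.memory.has(key)) {
            this.memory.delete(key);
        }
        this.memory.set(key, value);
        // Evict the oldest entry to "disk" once we're over the limit
        while (this.memory.size > this.maxItems) {
            let [oldKey, oldValue] = this.memory.entries().next().value;
            this.memory.delete(oldKey);
            this.disk.set(oldKey, oldValue);
        }
    }

    get(key) {
        if (this.memory.has(key)) {
            let value = this.memory.get(key);
            this.memory.delete(key); // refresh LRU position
            this.memory.set(key, value);
            return value;
        }
        // Memory miss: this is where real file I/O (read + JSON.parse)
        // would happen, which is what piles up when the limit is small.
        this.diskReads++;
        return this.disk.get(key);
    }
}
```

In this model, a tiny maxItems forces constant evictions and disk reads, while Number.MAX_SAFE_INTEGER keeps everything in memory at the cost of unbounded RAM usage.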

It does seem like if you try to read 30,000 files there's some event loop blockage, but I haven't been able to reproduce more than a few hundred ms of delay.

Test code

It's not the cleanest, I know

```js
const FS = require('fs');

let filenames = FS.readdirSync(__dirname + '/.local/share/node-steam-tradeoffer-manager');
console.log(filenames.length);

let filesToOpen = [];
for (let i = 0; i < 30000; i++) {
    filesToOpen.push(filenames[Math.floor(Math.random() * filenames.length)]);
}

let opened = 0;
let last = Date.now();
setInterval(() => {
    let now = Date.now();
    let diff = now - last;
    last = now;
    console.log(`${diff} - opened ${opened}`);
}, 100);

for (let i = 0; i < filesToOpen.length; i++) {
    FS.readFile(__dirname + '/.local/share/node-steam-tradeoffer-manager/' + filesToOpen[i], (err, file) => {
        if (err) throw err;
        JSON.parse(file.toString('utf8'));
        opened++;
    });
}
```

I ran this test code to see how much opening 30k random files from my disk asset cache would slow down the event loop.

Output

```
320322 // directory contains 320k files
117 - opened 0
235 - opened 0
223 - opened 0
99 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
100 - opened 0
128 - opened 716
487 - opened 30000
101 - opened 30000
100 - opened 30000
```
So the worst blockage I experienced was 387 milliseconds. It's probably worth mentioning that this is on an SSD.

Some of this could possibly be mitigated by enqueueing disk I/O to ensure we don't try to read too many files from disk at once.
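That queueing idea can be sketched as a generic concurrency limiter (a common pattern, not the library's actual code): cap how many reads are in flight at once and queue the rest.

```javascript
// Generic concurrency limiter: runs at most `limit` async tasks at a time,
// queueing the rest. A sketch of the "enqueue disk I/O" idea only.
function makeLimiter(limit) {
    let active = 0;
    let queue = [];

    function next() {
        if (active >= limit || queue.length === 0) {
            return;
        }
        active++;
        let {task, resolve, reject} = queue.shift();
        Promise.resolve()
            .then(task)
            .then(
                (value) => { active--; resolve(value); next(); },
                (err) => { active--; reject(err); next(); }
            );
    }

    // Returns a promise for the task's result; the task is a function
    // returning a promise (e.g. () => FS.promises.readFile(path)).
    return function run(task) {
        return new Promise((resolve, reject) => {
            queue.push({task, resolve, reject});
            next();
        });
    };
}
```

With something like this, the 30k-file loop above would become `filesToOpen.map((name) => limit(() => FS.promises.readFile(dir + name)))`, keeping only a bounded number of reads outstanding instead of dumping all 30k on the threadpool at once.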

I will try to isolate everything and run tests to see what's wrong; in the meantime, #324 would be a nice first step.

Update file-manager to v2.0.1 and you should see some marked improvement in I/O performance due to limiting concurrent I/O operations.