Dealing with: "invalid request, please check your server timestamp or recv_window param"
oom- opened this issue · 14 comments
After starting my bot, I'm always getting this error after a while:
{
  ret_code: 10002,
  ret_msg: 'invalid request, please check your server timestamp or recv_window param',
  result: null,
  ext_code: '',
  ext_info: '',
  rate_limit: 0,
  rate_limit_reset_ms: 0,
  rate_limit_status: 0,
  time_now: '1669045126.582107'
}
It resolves by itself after retrying a few times.
I suppose the time is sometimes out of sync?
Your system time is used as part of the auth mechanism to generate the signature.
The server uses it to check that your request was sent less than x seconds ago (your recv_window). If your network is struggling or your system time is slightly out of sync, this can happen very often. More detailed explanation here:
https://github.com/tiagosiebler/awesome-crypto-examples/wiki/Timestamp-for-this-request-is-outside-of-the-recvWindow
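To illustrate why a skewed clock breaks things: the timestamp is part of the signed payload. Roughly, per Bybit's v2 signing scheme (the connector handles this internally; the keys and values below are placeholders):
const { createHmac } = require('crypto');

// Sketch of Bybit's v2 signing: HMAC-SHA256 over the alphabetically sorted query string.
function signParams(params, secret) {
  const query = Object.keys(params)
    .sort()
    .map((key) => `${key}=${params[key]}`)
    .join('&');
  return createHmac('sha256', secret).update(query).digest('hex');
}

const params = {
  api_key: 'yourApiKey',   // placeholder
  timestamp: Date.now(),   // your system clock, in ms: if this drifts, the server rejects the request
  recv_window: 5000,       // how stale the server will tolerate the request being
};
params.sign = signParams(params, 'yourApiSecret'); // placeholder secret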
Most of the time the fix is getting your system time updated through a time service, using either ntp in unix environments or something like atomic clock on windows. You can increase the default recv window to allow for more time, but it's better to do the time sync first.
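If you do widen the window, it's just a REST client option; a minimal sketch (double-check the exact option name for your connector version):
const { LinearClient } = require('bybit-api');

const client = new LinearClient({
  key: 'yourApiKey',
  secret: 'yourApiSecret',
  recv_window: 10000, // default is 5000ms; widening is a stopgap, fixing time sync is the real solution
});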
@tiagosiebler Thank you for your answer.
I'm on Windows, and it's already set up to sync atomically with the time.windows.com server.
The last synchronization appears to be Mon Nov 21 2022 at 14:24, and the error happened at Mon Nov 21 2022 16:38:46 (and multiple calls were made in between). So I don't really get how the value could deviate when it has apparently been synchronized.
Could it be because of parallel code execution? I have 2 LinearClient and 2 WebsocketClient running simultaneously.
True, if you're already actively keeping your system clock in sync with a time service this should be less likely...
I don't think parallel execution should affect this. The timestamp is freshly taken per request at the time of the request.
Which version of the connector are you using? There used to be a time sync feature enabled by default, meant to reduce the chances of something like this happening, but I disabled it by default a few versions ago, as it seems to have sometimes caused timestamp/recv_window errors by overestimating latency. If by any chance you haven't updated in a while, I would try updating to the latest version and seeing if you still notice this.
If by any chance you've got this old mechanism turned on via the enable_time_sync: true property, I would try without this feature.
Otherwise, I would keep an eye on various metrics relating to your system time vs the server time. Here's an example with the spot client, although the LinearClient also exposes a server time API call:
https://github.com/tiagosiebler/awesome-crypto-examples/blob/master/src/exchanges/bybit/time/log-latency-spot.ts
The "diff" parts should be quite low (500ms or less). I wonder if these numbers are higher for you when you're seeing these timestamp errors?
Thank you for your fast answer.
I'm using: "bybit-api": "^3.0.2". I just switched to the latest version, "bybit-api": "^3.3.3".
I added a datetime to every console.log and console.error to see if the time goes off by more than 5 sec, and I will log the spot latency as well.
I'll let you know when I have more data or if this reproduces with the latest version.
Ok, I just got it.
So here is the error, at [21/11/2022 17:46:45] and [21/11/2022 17:46:52]:
The previous request was at [21/11/2022 17:46:24].
And this is from the logging script you linked (running in parallel):
And this error seems to happen only on the getWalletBalance route. I have a button on my Telegram bot to retrieve it and have already faced this problem.
Pressing the button on my Telegram bot to call the API 4 times in a row would often result in something like this:
- working
- error
- error
- working
And the Telegram bot was on a Unix server with only this functionality set up (so no other calls to the API).
Interesting, the end time is quite close, so is this a long delay from the getWalletBalance route?
I made an isolated test (just querying getWalletBalance in a loop) and I can see occasional spikes to 1s:
causing the serverTimeStartDiff to rise as well:
I started logging every request taking more than 1 sec on my original project, plus running the latency script on the side.
I'm still waiting for the scenario to reproduce.
But could it be that if one of the requests takes more than 5 sec, it would break the time sync?
It shouldn't. If time sync was turned on, it makes its own request to the time endpoint at intervals and checks the turnaround on that request. Your hanging getWalletBalance request shouldn't affect that, unless it's also causing Bybit to take longer than normal on the time request, even though that's a separate request started afterwards...
But either way, time sync is turned off by default now, unless you pass enable_time_sync: true in the REST client options.
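For context, that old mechanism boiled down to something like this (a rough sketch, not the connector's actual code; it assumes the same getServerTime() shape as above):
let timeOffsetMs = 0;

// NOT the connector's actual implementation: periodically estimate the offset
// between the local clock and the server clock.
setInterval(async () => {
  const start = Date.now();
  const { time_now } = await client.getServerTime();
  const end = Date.now();
  const estimatedLatency = (end - start) / 2; // assumes symmetric latency;
  // a slow round trip inflates this estimate, skews the offset, and can trigger
  // exactly the 10002 error the mechanism was meant to prevent.
  timeOffsetMs = Math.round(parseFloat(time_now) * 1000 + estimatedLatency) - end;
}, 30000);

// Signed requests would then use Date.now() + timeOffsetMs as their timestamp.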
Nope, I didn't pass any options:
this.client = new LinearClient({
key: this.account.bybitApiKey,
secret: this.account.bybitApiSecret
});
I just got the issue again on my main project, where I refresh the getWalletBalance every 5 sec.
I'm logging all requests to the API that are taking more than 1s.
// Log any getWalletBalance call that takes longer than 1s
let start = Date.now();
let res = await this.client?.getWalletBalance();
let end = Date.now();
if (end - start > 1000) {
console.log(`[TIME]: getWalletBalance: ${end - start}ms`);
}
Here is the output of the latency script running in parallel (refreshing every 500ms):
[25/11/2022 12:42:38] time: {
clientTimeReqStart: 1669376556966,
clientTimeReqEnd: 1669376558186,
clientTimeReqDiff: 1220,
serverTime: 1669376558132,
serverTimeStartDiff: 1166,
serverTimeEndDiff: 54
}
[25/11/2022 12:45:51] time: {
clientTimeReqStart: 1669376749625,
clientTimeReqEnd: 1669376751546,
clientTimeReqDiff: 1921,
serverTime: 1669376751351,
serverTimeStartDiff: 1726,
serverTimeEndDiff: 195
}
[25/11/2022 12:45:53] time: {
clientTimeReqStart: 1669376750135,
clientTimeReqEnd: 1669376753179,
clientTimeReqDiff: 3044,
serverTime: 1669376752949,
serverTimeStartDiff: 2814,
serverTimeEndDiff: 230
}
[25/11/2022 12:45:54] time: {
clientTimeReqStart: 1669376750647,
clientTimeReqEnd: 1669376754647,
clientTimeReqDiff: 4000,
serverTime: 1669376754427,
serverTimeStartDiff: 3780,
serverTimeEndDiff: 220
}
[25/11/2022 12:45:55] time: {
clientTimeReqStart: 1669376751147,
clientTimeReqEnd: 1669376755873,
clientTimeReqDiff: 4726,
serverTime: 1669376755781,
serverTimeStartDiff: 4634,
serverTimeEndDiff: 92
}
[25/11/2022 12:45:56] time: {
clientTimeReqStart: 1669376751648,
clientTimeReqEnd: 1669376756937,
clientTimeReqDiff: 5289,
serverTime: 1669376756837,
serverTimeStartDiff: 5189,
serverTimeEndDiff: 100
}
[25/11/2022 12:45:57] time: {
clientTimeReqStart: 1669376752161,
clientTimeReqEnd: 1669376757640,
clientTimeReqDiff: 5479,
serverTime: 1669376757525,
serverTimeStartDiff: 5364,
serverTimeEndDiff: 115
}
[25/11/2022 12:45:58] time: {
clientTimeReqStart: 1669376752673,
clientTimeReqEnd: 1669376758386,
clientTimeReqDiff: 5713,
serverTime: 1669376758312,
serverTimeStartDiff: 5639,
serverTimeEndDiff: 74
}
[25/11/2022 12:45:58] time: {
clientTimeReqStart: 1669376753178,
clientTimeReqEnd: 1669376758874,
clientTimeReqDiff: 5696,
serverTime: 1669376758727,
serverTimeStartDiff: 5549,
serverTimeEndDiff: 147
}
[25/11/2022 12:45:59] time: {
clientTimeReqStart: 1669376753684,
clientTimeReqEnd: 1669376759532,
clientTimeReqDiff: 5848,
serverTime: 1669376759245,
serverTimeStartDiff: 5561,
serverTimeEndDiff: 287
}
[25/11/2022 12:46:00] time: {
clientTimeReqStart: 1669376754198,
clientTimeReqEnd: 1669376760381,
clientTimeReqDiff: 6183,
serverTime: 1669376760181,
serverTimeStartDiff: 5983,
serverTimeEndDiff: 200
}
[25/11/2022 12:46:00] time: {
clientTimeReqStart: 1669376754713,
clientTimeReqEnd: 1669376760664,
clientTimeReqDiff: 5951,
serverTime: 1669376760599,
serverTimeStartDiff: 5886,
serverTimeEndDiff: 65
}
[25/11/2022 12:46:01] time: {
clientTimeReqStart: 1669376755213,
clientTimeReqEnd: 1669376761040,
clientTimeReqDiff: 5827,
serverTime: 1669376760910,
serverTimeStartDiff: 5697,
serverTimeEndDiff: 130
}
[25/11/2022 12:46:01] time: {
clientTimeReqStart: 1669376755713,
clientTimeReqEnd: 1669376761512,
clientTimeReqDiff: 5799,
serverTime: 1669376761414,
serverTimeStartDiff: 5701,
serverTimeEndDiff: 98
}
[25/11/2022 12:46:01] time: {
clientTimeReqStart: 1669376756227,
clientTimeReqEnd: 1669376761555,
clientTimeReqDiff: 5328,
serverTime: 1669376761489,
serverTimeStartDiff: 5262,
serverTimeEndDiff: 66
}
[25/11/2022 12:46:01] time: {
clientTimeReqStart: 1669376756742,
clientTimeReqEnd: 1669376761749,
clientTimeReqDiff: 5007,
serverTime: 1669376761711,
serverTimeStartDiff: 4969,
serverTimeEndDiff: 38
}
[25/11/2022 12:46:01] time: {
clientTimeReqStart: 1669376757257,
clientTimeReqEnd: 1669376761984,
clientTimeReqDiff: 4727,
serverTime: 1669376761901,
serverTimeStartDiff: 4644,
serverTimeEndDiff: 83
}
[25/11/2022 12:46:02] time: {
clientTimeReqStart: 1669376757768,
clientTimeReqEnd: 1669376762117,
clientTimeReqDiff: 4349,
serverTime: 1669376762042,
serverTimeStartDiff: 4274,
serverTimeEndDiff: 75
}
[25/11/2022 12:46:02] time: {
clientTimeReqStart: 1669376758269,
clientTimeReqEnd: 1669376762314,
clientTimeReqDiff: 4045,
serverTime: 1669376762244,
serverTimeStartDiff: 3975,
serverTimeEndDiff: 70
}
[25/11/2022 12:46:02] time: {
clientTimeReqStart: 1669376759297,
clientTimeReqEnd: 1669376762862,
clientTimeReqDiff: 3565,
serverTime: 1669376762707,
serverTimeStartDiff: 3410,
serverTimeEndDiff: 155
}
[25/11/2022 12:46:18] time: {
clientTimeReqStart: 1669376777194,
clientTimeReqEnd: 1669376778411,
clientTimeReqDiff: 1217,
serverTime: 1669376778378,
serverTimeStartDiff: 1184,
serverTimeEndDiff: 33
}
I experienced this issue.
The fix is easy, and it does have everything to do with the system clock.
It's not an error with the API.
@oom- commented that he had his system clock set to sync atomically with time.windows.com.
Unfortunately, time.windows.com sucks.
Open up your Date/Time settings in Windows, and change your Internet Time sync to:
ptbtime1.ptb.de
This is an atomic clock time server of Physikalisch-Technische Bundesanstalt (PTB), a world-renowned research institution in Braunschweig, Germany.
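For anyone who prefers the command line over the Date/Time settings UI, the equivalent from an elevated prompt is roughly (standard w32tm usage; verify the flags on your Windows version):
w32tm /config /manualpeerlist:ptbtime1.ptb.de /syncfromflags:manual /update
w32tm /resync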
As soon as I made this switch, every request executed seamlessly.
I'm going to pin this, because it's a very common problem (it's not necessarily obvious how important it is for your time to be set precisely) and there are a lot of useful suggestions here for anyone with the same problem.
I will also close this, but if you have other suggestions that helped you identify & fix what was causing this in your system, please keep contributing them. Thank you @JayWiseFree for your comment as well.
Sorry for the delay I have been really busy these past weeks.
@JayWiseFree I got the same issue on a Unix system as well, on 2 different VPSes.
One of my colleagues wrote his own code and got the same error on a VPS (Unix) and on his local computer (Windows 11).
So here is what I have already done:
The APP lagging investigation
- Made a special NodeJS proxy to handle all requests (my-app <--> bybitApi); its goal is to log the requests/responses with their payloads, plus extract the timestamp and recv_window from the URL and check them against the current time. The proxyJs is bound to 127.0.0.1, so it can only surface app-lagging issues (no network issues, as that hop is effectively instant). So basically the setup is my-app <--> proxyJs (127.0.0.1) <--> bybitApi.
- I started my-app, waited for the error to pop up, and watched the proxyJs logs (set up with a 500ms threshold). I can see it works well, because it detected a few small latencies (2/984 requests had >500ms latency):
[08/01/2023 22:42:31.194]: (692ms) (08/01/2023 22:42:30.502) GET https://api.bybit.com/v2/private/wallet/balance?api_key=******&timestamp=1673214150502&recv_window=5000&sign=******
===> HIGH LATENCY DETECTED: Time diff: (request)1673214150502ms - (proxy)1673214151194ms = (diff)692ms
But when the error happened, no latency was displayed by the proxyJs app:
my-app terminal:
[08/01/2023 22:42:36] ===>CALLING https://api.bybit.com/v2/private/wallet/balance 1673214156211<=
[08/01/2023 22:42:41] ===>RESPONSE https://api.bybit.com/v2/private/wallet/balance 1673214161468<=
[08/01/2023 22:42:41] --------------ERROR--------------
[08/01/2023 22:42:41] <- Response content:
{
"date": "Sun, 08 Jan 2023 21:42:41 GMT",
"connection": "close",
"content-length": "315"
} {
"ret_code": 10002,
"ret_msg": "invalid request, please check your server timestamp or recv_window param. req_timestamp[1673214156211],server_timestamp[1673214161380],recv_window[5000]", "result": {},
"ext_code": "",
"ext_info": "",
"time_now": "1673214161.380055",
"rate_limit_status": 0,
"rate_limit": 0,
"rate_limit_reset_ms": 0
}
proxyJs terminal:
[08/01/2023 22:42:36.220]: (9ms) (08/01/2023 22:42:36.211) GET https://api.bybit.com/v2/private/wallet/balance?api_key=******&timestamp=1673214156211&recv_window=5000&sign=******
-----> Conclusion 1: This error is not caused by app latency. <-----
The Network lagging investigation
- Recorded every request with Wireshark, both directly (my-app <--> wireshark <--> bybitApi) and with the proxy in place (my-app <--> proxyJs <--> wireshark <--> bybitApi). Figured out that sometimes the packet is not sent on time.
Example of a good packet: (timestamp = send time)
Example of a wrong packet: (timestamp != send time)
-----> Conclusion 2: There is definitely something wrong, and it seems to be related to the network, or to NodeJS's ability to put the packet on the wire on time. <-----
What's next?
So basically there are multiple questions requiring an answer:
- Is Wireshark displaying the local time based on the actual packet send, or could there be some latency in its records as well?
- Does it come from the network? (If yes, how/why? This doesn't make sense, because it was reproduced on 5 different machines, each with its own network, all on fiber.)
- Could NodeJS somehow have a pool for sending packets on the network that introduces latency?
- Could it be a DNS resolution latency problem?
- Could it be a file descriptor limit problem? See https://stackoverflow.com/a/65084960
I'm now planning to measure the time of each part of the request (see below) and try to exclude DNS resolution/lookup from the list above.
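A minimal sketch of that per-phase measurement, using only Node's built-in socket events (agent: false forces a fresh connection so the DNS and TLS phases actually occur; the endpoint here is just an example):
const https = require('https');

function timedGet(url) {
    const t = { start: Date.now() };
    const req = https.get(url, { agent: false }, (res) => {
        res.once('readable', () => { t.firstByte = Date.now(); });
        res.on('data', () => {}); // drain the body
        res.on('end', () => {
            t.end = Date.now();
            console.log({
                dnsLookup: t.lookup - t.start,        // 'lookup' won't fire if the host is an IP
                tcpConnect: t.connect - t.lookup,
                tlsHandshake: t.secureConnect - t.connect,
                firstByte: t.firstByte - t.secureConnect,
                total: t.end - t.start,
            });
        });
    });
    req.on('socket', (socket) => {
        socket.once('lookup', () => { t.lookup = Date.now(); });
        socket.once('connect', () => { t.connect = Date.now(); });
        socket.once('secureConnect', () => { t.secureConnect = Date.now(); });
    });
}

timedGet('https://api.bybit.com/v2/public/time');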
If it's not the DNS, I will rewrite my-app in C# and try again, to see if it's NodeJS-related.
I will let you know.
@tiagosiebler (I'm going to post the progress of the investigation here, if that's fine with you; maybe I'll find something we could report to the Bybit team.)
Appendix
my-app code:
require('dotenv').config();
const dayjs = require("dayjs");
const { LinearClient } = require('bybit-api');
const fs = require("fs");

const instanceName = "test-wallet";

// Prefix every log line with the instance name and a timestamp
let originalLog = console.log;
console.log = (...args) => {
    originalLog(`[${instanceName}]` + dayjs().format("[[]DD/MM/YYYY HH:mm:ss:SSS]"), ...args);
}
let originalError = console.error;
console.error = (...args) => {
    originalError(`[${instanceName}]` + dayjs().format("[[]DD/MM/YYYY HH:mm:ss:SSS]"), ...args);
}

let history = [];

(async () => {
    let client = new LinearClient({
        key: process.env.API_KEY,
        secret: process.env.API_SECRET
    });
    let lock = false; // simple mutex so a slow request isn't overlapped by the next tick
    setInterval(async () => {
        if (!lock) {
            lock = true;
            try {
                let startDate = Date.now();
                let res = await client.getWalletBalance();
                let endDate = Date.now();
                history.push({
                    type: 'time',
                    res,
                    limitStatus: res.rate_limit_status,
                    limitMax: res.rate_limit,
                    time: {
                        clientTimeReqStart: startDate,
                        clientTimeReqEnd: endDate,
                        clientTimeReqDiff: endDate - startDate
                    },
                    short: dayjs().format("DD/MM/YYYY HH:mm:ss"),
                    timespan: Date.now()
                });
                if (res.ret_code == 10002) {
                    console.log("🚨 ERROR !");
                    history.push({
                        type: 'error',
                        res,
                        short: dayjs().format("DD/MM/YYYY HH:mm:ss"),
                        timespan: Date.now()
                    });
                }
                fs.writeFileSync(`./${instanceName}-getwallet-history.json`, JSON.stringify(history));
                if (res.rate_limit_status <= 50) {
                    console.log(`🚨 Rate limit starts to be off!: (${res.rate_limit_status}/${res.rate_limit})`);
                }
                if (res.rate_limit_status <= 0) {
                    console.log(`🚨 Rate limit off!: (${res.rate_limit_status}/${res.rate_limit})`);
                }
            } finally {
                lock = false; // release the mutex even if the request throws
            }
        }
    }, 2500);
})();
console.log("Started");
proxyJs code:
const http = require('http');
const dayjs = require("dayjs");
const fetch = require("node-fetch");

const PORT = 8889;

// Create the proxy server
http.createServer(async function (req, res) {
    let now = Date.now();
    /* ---------------------- ANALYSE REQUEST TO SERVER HERE ---------------------- */
    if (req.method == "GET") {
        // Parse the query string to extract the request's timestamp parameter
        let params = {};
        req.url.split("?").at(1)?.split("&")?.forEach(parameter => {
            let pd = parameter.split("=");
            params[pd.at(0)] = pd.at(1);
        });
        if (params.timestamp != null) {
            params.timestamp = parseInt(params.timestamp);
            console.log(`[${dayjs(now).format("DD/MM/YYYY HH:mm:ss.SSS")}]: (${now - params.timestamp}ms) (${dayjs(params.timestamp).format("DD/MM/YYYY HH:mm:ss.SSS")})`, req.method, req.url);
            const MAX = 500;
            if (now - params.timestamp >= MAX) {
                console.log(`🔥 HIGH LATENCY DETECTED: Time diff: (request)${params.timestamp}ms - (proxy)${now}ms = (diff)${now - params.timestamp}ms`);
            }
        }
        else {
            console.log(`[${dayjs(now).format("DD/MM/YYYY HH:mm:ss.SSS")}]: `, req.method, req.url);
        }
    }
    /* ---------------------- END ANALYSE ---------------------- */
    // Forward the request to the real API and relay the response
    switch (req.method) {
        case "GET":
            let response = await fetch(req.url, { headers: req.headers });
            let text = await response.json();
            res.end(JSON.stringify(text));
            break;
        default:
            res.end("Unsupported HTTP method");
            break;
    }
}).listen(PORT, "127.0.0.1");
console.log(`Basic proxy ready on port: ${PORT}`);
I found out why the error was showing up faster on my VPS (after a few minutes of uptime).
DNS caching wasn't enabled, and the DNS server that Bybit is using was just rate-limiting my queries after a while... 😩
I'm starting to think that in the end all of this could be related to some Bybit-side limitation: even if my usage respects the documented "API limits", maybe there's throttling below that, like at the DNS server, or limits that tighten over some duration. I don't know.
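One way to take the resolver out of the equation is to cache DNS lookups in-process; a minimal sketch, assuming the cacheable-lookup package from npm (not something the connector does by itself):
const https = require('https');
const CacheableLookup = require('cacheable-lookup');

// Cache DNS lookups in-process so each request doesn't hit the resolver again.
const cacheable = new CacheableLookup();
cacheable.install(https.globalAgent);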
Going to rerun the my-app script (see previous comment) and see if the error shows up again...