Multi-execs from concurrent users not working
smartinio opened this issue · 7 comments
The following code snippet works against actual Upstash but not against serverless-redis-http.
Repro
import { Redis } from '@upstash/redis'
const redis = Redis.fromEnv()
await Promise.all(['a', 'b', 'c'].map(async (key) => {
const multi = redis.multi()
multi.set(key, 'foo')
await multi.exec()
}))
console.log(await redis.mget('a', 'b', 'c'))
Expected
[ "foo", "foo", "foo" ]
Observed
UpstashError: ERR MULTI calls can not be nested, command was: [["set","b","foo"]]
at new u (/Users/sam/redis-http-repro/node_modules/@upstash/redis/chunk-DKKYFKP2.mjs:1:42)
at /Users/sam/redis-http-repro/node_modules/@upstash/redis/chunk-DKKYFKP2.mjs:1:1597
Is this output directly from that repro? The command doesn't seem to line up
Sorry, that was old output from when I tested with options. Updated now.
Investigating now.
Ok @smartinio - I have reproduced this and figured out what is going on.
When you run a multi, SRH borrows a Redis client for the duration of the commands. However, when there are enough commands in flight, you exhaust the pool, and there is a flaw in the logic: instead of queueing, it wraps back around and allocates a Redis client that is already midway through a MULTI/EXEC. Hence the error complaining that you cannot nest a multi-exec.
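To illustrate the flaw, here is a minimal TypeScript sketch of that kind of wrap-around allocation (this is hypothetical example code, not SRH's actual implementation; the NaivePool class and its names are made up for the illustration):

```ts
// Hypothetical sketch of the flawed pooling behaviour described above.
// A round-robin pool that never tracks which clients are checked out can
// hand a connection that is already inside MULTI to a second caller.
class NaivePool<T> {
  private next = 0;
  constructor(private clients: T[]) {}

  // Flaw: when every client is busy, this wraps around instead of
  // queueing the caller until a client is returned.
  borrow(): T {
    const client = this.clients[this.next % this.clients.length];
    this.next++;
    return client;
  }
}

// If more borrows happen than there are clients in the pool, borrow()
// hands back a client that is still mid-transaction, and Redis replies
// "ERR MULTI calls can not be nested".
```

A correct pool would instead queue or block borrowers until a client is actually returned.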
I have a solution for you that will work now, though. In my examples, I specify setting max_connections to 3, which is fine in most cases. However, when you run a bunch of multi execs in parallel... you can see where this goes wrong. Redis supports thousands of connections, so as a temporary solution before I improve the pooling code: set max_connections to a large number, e.g. 50.
If you configure SRH via a config file, that would look like:
{
"example_token": {
"srh_id": "some_unique_identifier",
"connection_string": "redis://localhost:6379",
"max_connections": 50
}
}
Or, if you've configured SRH via env mode, set SRH_MAX_CONNECTIONS instead, for example:
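(A rough sketch of what that could look like in env mode; only SRH_MAX_CONNECTIONS is named above, so treat the other SRH_* variable names here as placeholders and check them against the SRH README.)

```
# Hypothetical env-mode configuration; variable names other than
# SRH_MAX_CONNECTIONS are placeholders to verify against the SRH docs.
SRH_MODE=env
SRH_TOKEN=example_token
SRH_CONNECTION_STRING=redis://localhost:6379
SRH_MAX_CONNECTIONS=50
```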
Let me know if this solves your problem in the meantime, and when I have some free time I will improve the pooling logic to solve for this case.
Thanks for the quick debug. Will give it a go
That workaround allowed me to complete my integration testing suite and finally deploy my project. Thanks again!
All good! I will work on a more concrete solution when I get time, but I'm happy to hear this temporary solution is working.
Leaving this issue open as a reference point for when I get around to improving the pooling, and for others' visibility.