brandur/redis-cell

PR Accept for Resetting the Keys / Removing the Existing Keys


Hi,
I have started using the module to integrate with our deployment. I'm not 100 percent sure, but from the behaviour I've observed so far, it seems that once a CL.THROTTLE command executes, a key entry is created, and subsequent CL.THROTTLE calls use that entry to track the rate limit quota.
I'm working on a PR for the following, but wanted to discuss beforehand whether I have interpreted this correctly:

1. Removing a key altogether
2. Resetting / updating a key with a new rate limit tuple

Waiting for your response.
Thanks,
Romit

Hi Romit,

I'm not 100 percent sure, but from the behaviour I've observed so far, it seems that once a CL.THROTTLE command executes, a key entry is created, and subsequent CL.THROTTLE calls use that entry to track the rate limit quota.

Yes, sort of. The library uses a pretty novel algorithm called GCRA. What it ends up storing is a "TAT", or theoretical arrival time. Check out the implementation for full details, but it's pretty easy to see this at work for yourself:

# build
$ cargo build --release

# start Redis (extension may depend on your OS -- this is for Mac)
$ redis-server --loadmodule target/release/libredis_cell.dylib

Then from another console:

$ redis-cli

127.0.0.1:6379> cl.throttle user123 5 5 3600
1) (integer) 0
2) (integer) 6
3) (integer) 5
4) (integer) -1
5) (integer) 720

127.0.0.1:6379> keys '*'
1) "user123"

127.0.0.1:6379> get user123
"1559260935732768000"

1. Removing a key altogether

Only one value is stored, so it's easy to do with the built-in DEL:

127.0.0.1:6379> del user123
(integer) 1

2. Resetting / updating a key with a new rate limit tuple

If the intent here is to change a user's rate, I'd recommend the simpler option of just deleting their key and then issuing new CL.THROTTLE commands with the new rate.
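
For example, to move user123 from the 5-per-hour rate above to 10 per hour (the numbers here are just an illustration, and the output will vary with timing), that would look something like:

127.0.0.1:6379> del user123
(integer) 1

127.0.0.1:6379> cl.throttle user123 10 10 3600
1) (integer) 0
2) (integer) 11
3) (integer) 10
4) (integer) -1
5) (integer) 360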

Thanks a lot, that makes more sense and is much cleaner and simpler.
I tried to deploy Redis in a master/slave topology (1 master and 2 slaves) with the module loaded.
I was able to execute a whopping 40k requests per second against a SINGLE key (CL.THROTTLE invocations).
I was expecting the commands to be synced to the slave nodes as well, but the only command that ever synced was DEL (delete key, when the TTL expired). I used the MONITOR command on the slaves to watch the stream of incoming events.
Is this by design, that replication is disabled, or has it been disabled in the production code?
My intent is to use a slave as a standby in a separate fault domain, since this layer serves a critical data path.
In that case, as soon as a switchover happens, CL.THROTTLE would start hitting the new master (the slave that got promoted).
I just want to understand why keys are not synced to the slaves.
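
For reference, the check on each slave was roughly this (the host is a placeholder for my setup), watching for replicated commands arriving from the master:

$ redis-cli -h <slave-host> monitor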

Is this by design, that replication is disabled, or has it been disabled in the production code?

Hey @romitmisra1,

The results of CL.THROTTLE should be getting replicated out to the replicas. You can see the specific code that causes this here, and the Redis Module API function that's being invoked here. It tells Redis to invoke the command as-is on the replicas with exactly the same arguments. This code was contributed to the project by a user who was running the library in clustered mode, so I would expect that it works.

Can you double-check your setup and take a look at the logs on the master and each replica to make sure nothing looks amiss? One failure case I can think of, for example, would be if the redis-cell module was loaded on the master but not on the replicas (I'm not sure exactly what would happen in that case, but there's a decent chance it would result in the same symptoms you're seeing).
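
As a quick sanity check (hosts and ports below are placeholders for your setup), you can ask each node which modules it has loaded and what replication role it thinks it has:

# run against the master and against each replica
$ redis-cli -h <host> -p <port> module list
$ redis-cli -h <host> -p <port> info replication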

If everything looks fine, and you can come up with a basic set of commands that brings up a cluster with a configuration similar to yours, I'll dig in and see if I can find what's not working.
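
For reference, something along these lines would be enough for me to reproduce locally (just a sketch; the ports, module path, and single-replica layout are assumptions to adjust for your environment):

# master
$ redis-server --port 6379 --loadmodule target/release/libredis_cell.dylib

# replica pointed at the master
$ redis-server --port 6380 --replicaof 127.0.0.1 6379 --loadmodule target/release/libredis_cell.dylib

# issue a throttle against the master, then check what the replica sees
$ redis-cli -p 6379 cl.throttle user123 5 5 3600
$ redis-cli -p 6380 keys '*'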

Thanks Brandur, I have already started debugging the code and will get back with detailed logs.
I'll see whether this is a config issue or some edge case that I triggered. I'll keep you posted, and thanks for the reply.

Sounds good! Thanks.

@romitmisra1: I hope that you are well. Could you give us an update on how this worked out for you?