Loosen dependency on Redis
davidkovsky opened this issue · 2 comments
At my work we use Verk, but we are losing infrastructure support for Redis. If it were possible to configure which key/value store is used, we could keep using Verk. I believe Cachex or Mnesia would work for us.
It's only a couple of services. Timing is critical and I'm not familiar with this codebase, so I'll probably remove the dependency on Verk from those services. We have polyglot microservices with RabbitMQ already in heavy use, so the plan is to use that instead.
I just wanted to post this as a suggestion for the future. I'm not sure whether swappable k/v stores have been considered, or how difficult they would be to support.
I would love to have swappable storage engines, BUT it comes at a reasonably high price.
The problem with swappable k/v stores is that each one offers completely different guarantees, and we would need to ensure the same guarantees hold for every backend. These are the main reasons we picked Redis as the datastore:
- Extremely fast, as it uses in-memory data structures;
- Atomic changes to the data structures, so that we ALWAYS execute a job at least once;
- Lua scripting, which allows combining manipulations of different data structures into a single operation that is still atomic (see the sketch after this list);
- Supported by a fair number of cloud providers: you just connect to a database that AWS, Google Cloud, or Azure maintains.
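
To make the atomicity and Lua points concrete, here is a minimal sketch, assuming the Redix client; the script and the key names (`queue:default`, `inprogress:default`) are illustrative, not Verk's actual implementation:

```elixir
# Minimal sketch of atomic Lua scripting in Redis, assuming the Redix
# client; the script and key names are illustrative, not Verk's.
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

# Move a job from the pending queue to an in-progress set in ONE atomic
# step. A worker crash between the two commands is therefore impossible:
# the job always lives in exactly one of the two structures, which is
# what makes at-least-once execution achievable.
script = """
local job = redis.call('RPOP', KEYS[1])
if job then
  redis.call('SADD', KEYS[2], job)
end
return job
"""

{:ok, job} =
  Redix.command(conn, ["EVAL", script, "2", "queue:default", "inprogress:default"])
```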
There are also a few features that would be really hard to implement on top of a queueing system (like RabbitMQ):
- How do you schedule jobs to run in the future when the only structure you have is a queue?
- How do you avoid retrying failed jobs instantly, and instead retry them later with exponential backoff? (Redis sorted sets answer both questions; see the sketch below.)
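
Here is a rough sketch, again assuming the Redix client, of how a Redis sorted set answers both questions: the score is the Unix timestamp at which a job becomes runnable. The key names and payload format are illustrative, not Verk's actual schema:

```elixir
# Minimal sketch of scheduling and exponential backoff on a Redis
# sorted set; key names and the payload format are illustrative.
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

job_payload = ~s({"queue":"default","class":"MyWorker","args":[1]})

# Schedule: the sorted-set score is the timestamp at which the job
# becomes runnable, so "a job in the future" is just a high score.
run_at = System.os_time(:second) + 3600
{:ok, _} = Redix.command(conn, ["ZADD", "schedule", Integer.to_string(run_at), job_payload])

# Exponential backoff: on the Nth failure, re-add the job with a score
# 2^N seconds in the future instead of retrying it immediately.
retry_count = 3
retry_at = System.os_time(:second) + round(:math.pow(2, retry_count))
{:ok, _} = Redix.command(conn, ["ZADD", "retry", Integer.to_string(retry_at), job_payload])

# A poller then periodically promotes every job whose score has passed
# back onto a plain queue for the workers to pick up.
now = Integer.to_string(System.os_time(:second))
{:ok, due} = Redix.command(conn, ["ZRANGEBYSCORE", "schedule", "0", now])
Enum.each(due, fn job -> Redix.command(conn, ["LPUSH", "queue:default", job]) end)
```

In a real implementation the promotion step would itself be a Lua script, so that reading the due jobs, pushing them onto the queue, and removing them from the sorted set happen atomically; this is exactly where the Lua scripting point above comes back in.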
Mnesia, for example, is more complicated to operate because the state lives inside your application: your nodes now carry state that has to move with them. So if you were planning to run Docker containers in a stateless environment, you would need to rethink this and ensure that the state follows your containers, etc.
I hope this helps explain why we picked Redis as our datastore.
Thanks. I appreciate the explanation.