Simple and effective Redis cache for Sequelize :)
Most caching modules for Sequelize are complicated. I wanted something simple that implements the following basic strategy:
- Cache all `findAll` and `findOne` queries per model.
- Have model-level TTL (time to live) values so different models can be cached differently.
- Automatically invalidate/purge caches whenever an update/insert/delete command is run on the model.
- Use Redis. Yes, because what else is there to use?
Install:

```
npm install --save seqache
# or
yarn add seqache
```
Require and initialize.
```js
let Cache = require('seqache');

let options = {
  redis: myRedisInstance,
  log: true,
  keyPrefix: 'seqache',
  maxSize: 1000,
};

let cache = new Cache(options);
```
`redis`
Type: Object
Expects an instance of redis or ioredis. Defaults to ioredis with the default Redis port and database.
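If you want to supply your own client, the `myRedisInstance` used in the snippet above could be an ioredis connection like the sketch below (the host, port and db values are assumptions; adjust them to your environment):

```js
// Minimal sketch of creating the ioredis client passed as `myRedisInstance`.
// Connection details are placeholder assumptions.
const Redis = require('ioredis');

const myRedisInstance = new Redis({
  host: '127.0.0.1',
  port: 6379,
  db: 0,
});
```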
`log`
Type: Boolean|Function
A function used to print your logs, or simply `true` to enable default (`console.log`) logging, or `false` to disable logging. Default is `false`.
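For example, to route seqache's log output through your own logger, you could pass a function instead of `true`. A sketch, where `myLogger` is a hypothetical logger object:

```js
// Sketch: forward whatever seqache logs to a custom logger.
// `myLogger` is hypothetical; any function accepting the log arguments works.
let options = {
  redis: myRedisInstance,
  log: (...args) => myLogger.debug('[seqache]', ...args),
};
```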
`keyPrefix`
Type: String
A string used to prefix all keys as they are cached. Default is `seqache`.
`maxSize`
Type: Integer
Caches can balloon in size; this sets the maximum Redis hash size. Default is `1000`. This global option can be overridden for each model.
`ttl`
Type: Integer
Total time to live. This determines how long a cache is kept. The value is in seconds, as it uses the `redis.expire` method. Default is `86400` (24 hours). This global option can be overridden for each model.
Seqache works by adding special cache methods to your Sequelize models. To do so, you need to wrap the models as shown below.
```js
// sequelize initialization & other code
...

// our model
const User = sequelize.define('User', {
  // Model attributes are defined here
  firstName: {
    type: DataTypes.STRING,
    allowNull: false
  },
  lastName: {
    type: DataTypes.STRING
    // allowNull defaults to true
  }
}, {
  // Other model options go here
  hooks: {...}
});

// wrap the model
cache.wrap(User, [ttl, maxSize]);
```
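If the bracketed arguments above are optional positional overrides (my reading of the signature), a concrete call could look like this sketch; the values 3600 and 500 are arbitrary examples:

```js
// Sketch: wrap User with a model-level TTL of one hour and a max hash size of 500.
// Assumes ttl and maxSize are optional positional arguments of cache.wrap().
cache.wrap(User, 3600, 500);
```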
Wrapping a model adds the `.cache` key to the model object. `.cache` contains the convenience methods `findOne`, `findAll` and `purge`, all of which return promises.

NOTE: You can pass model-level `ttl` and `maxSize` values while wrapping a model. These will be respected and used for caching that model only!
To see how seqache improves your queries, simply do the following:
```js
// findAll
User.cache.findAll().then(console.log).catch(console.error);

// findOne
User.cache.findOne().then(console.log).catch(console.error);

// in case you want to purge the cache for any reason
User.cache.purge().then(console.log).catch(console.error);
```
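As noted further below, the cache methods pass your parameters through to the underlying Sequelize calls, so the usual query options can be supplied. A sketch (the `where` and `limit` values are arbitrary):

```js
// Sketch: a cached query with ordinary Sequelize query options.
User.cache
  .findAll({ where: { lastName: 'Doe' }, limit: 10 })
  .then(console.log)
  .catch(console.error);
```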
Wrapping your model adds extra `afterUpdate`, `afterDestroy` and `afterCreate` hooks that are used to purge model caches. As a result, you will almost never need to manually run `.cache.purge()`, as it is done automagically for you whenever the data changes. These hooks do not override your other hooks, so Sequelize functions exactly as expected.

Also, under the hood, `cache.findAll` and `cache.findOne` run the respective Sequelize functions and pass along all your intended parameters. No monkey patching or other shenanigans :)
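For illustration, a write followed by a cached read could look like the sketch below; the purge happens through the hooks described above, so no manual call is needed (the values are arbitrary):

```js
// Sketch: the afterUpdate hook purges the User cache automatically,
// so the next cached read repopulates it from the database.
User.update({ lastName: 'Smith' }, { where: { id: 1234 } })
  .then(() => User.cache.findAll()) // fresh data, cache repopulated
  .then(console.log)
  .catch(console.error);
```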
When using paranoid tables, `afterDestroy` hooks are not triggered as expected. This issue has been discussed extensively here. You will need to add `individualHooks: true` to force the hook to fire. This is important because the hook is used to invalidate old caches. See the example below.
```js
User.destroy({
  where: {
    id: 1234,
  },
  individualHooks: true,
});
```
Overall, this is good practice: even with associations, individual hooks cascade across all affected models, which in turn invalidates all related caches!
I need your support. Hit me up and let's make this a better module!