Readthis is a drop-in replacement for any ActiveSupport-compliant cache. It emphasizes performance and simplicity and takes some cues from Dalli, the popular Memcached client.
For new projects there isn't any reason to stick with Memcached. Redis is as fast, if not faster, in many scenarios and is far more likely to be used elsewhere in the stack. See this blog post for more details.
See the Performance section for benchmarks.
Add this line to your application's Gemfile:
gem 'readthis'
gem 'hiredis' # Highly recommended
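Then install the bundle as usual (standard Bundler workflow, nothing Readthis-specific):

bundle install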
Use it the same way as any other ActiveSupport::Cache::Store. Within a Rails environment config:
config.cache_store = :readthis_store, {
  expires_in: 2.weeks.to_i,
  namespace: 'cache',
  redis: { url: ENV.fetch('REDIS_URL'), driver: :hiredis }
}
Otherwise you can use it anywhere, without any reliance on ActiveSupport:
require 'readthis'

cache = Readthis::Cache.new(
  expires_in: 2.weeks.to_i,
  redis: { url: ENV.fetch('REDIS_URL') }
)
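From there the cache responds to the standard ActiveSupport::Cache::Store methods. A quick sketch with made-up keys and values:

cache.write('some-key', 'some-value')
cache.read('some-key')                    # => "some-value"
cache.fetch('other-key') { 'generated' }  # => "generated", stored on first call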
You can also specify host, port, db, or any other valid Redis options. For more details about connection options, see the redis gem documentation.
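For instance, a connection configured with explicit options rather than a URL might look like this (the host, port, and database values are placeholders):

cache = Readthis::Cache.new(
  redis: { host: 'localhost', port: 6379, db: 2, driver: :hiredis }
)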
An isolated Redis instance that is used only for caching is ideal. Dedicated instances have numerous benefits: more predictable performance, avoiding expires in favor of LRU eviction, and tuning the persistence mechanism. See Optimizing Redis Usage for Caching for more details.
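As a rough sketch, a cache-only instance could be tuned along these lines in redis.conf (the values are illustrative, not recommendations):

# Cap memory so LRU eviction kicks in rather than relying on expires
maxmemory 256mb
maxmemory-policy allkeys-lru
# Persistence can be disabled entirely on a disposable cache
save ""
appendonly no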
At the very least you'll want to use a specific database for caching. In the
event the database needs to be purged you can do so with a single clear
command, rather than finding all keys in a namespace and deleting them.
Appending a number between 0 and 15 will specify the Redis database, which
defaults to 0. For example, using database 2:
REDIS_URL=redis://localhost:6379/2
Be sure to use an integer value when setting expiration time. The default representation of ActiveSupport::Duration values won't work when setting expiration time, which will cause all keys to have -1 as the TTL. Expiration values are always cast as an integer on write.
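For example, when passing a per-entry expiration it's safest to cast the duration explicitly (the key and value here are only illustrative, and 1.hour is ActiveSupport's duration helper):

# 1.hour is an ActiveSupport::Duration; to_i casts it to an integer number of seconds
cache.write('greeting', 'hello', expires_in: 1.hour.to_i)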
Compression can be enabled for all actions by passing the compress flag. By default all values greater than 1024k will be compressed automatically. If any content has not been stored with compression, or was compressed but is beneath the compression threshold, it will be passed through as is. This means it is safe to enable or change compression with an existing cache. There will be a decoding performance penalty in this case, but it should be minor.
config.cache_store = :readthis_store, {
  compress: true,
  compression_threshold: 2.kilobytes
}
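Outside of Rails the same options can presumably be passed straight to the constructor, substituting a plain integer for ActiveSupport's 2.kilobytes helper:

cache = Readthis::Cache.new(
  compress: true,
  compression_threshold: 2048 # bytes, equivalent to 2.kilobytes
)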
Readthis uses Ruby's Marshal module for serializing all values by default. This isn't always the fastest option, and depending on your use case it may be desirable to use a faster but less flexible serializer.
By default Readthis knows about 3 different serializers:
- Marshal
- JSON
- Passthrough
If all cached data can safely be represented as a string then use the pass-through serializer:
Readthis::Cache.new(marshal: Readthis::Passthrough)
You can introduce up to four additional serializers by configuring serializers on the Readthis module. For example, if you wanted to use the extremely fast Oj library for JSON serialization:
Readthis.serializers << Oj
# Freeze the serializers to ensure they aren't changed at runtime.
Readthis.serializers.freeze!
Readthis::Cache.new(marshal: Oj)
Be aware that the order in which you add serializers matters. Serializers are sticky, and a flag is stored with each cached value. If you subsequently go to deserialize values and haven't configured the same serializers in the same order, your application will raise errors.
Readthis supports all of the standard cache methods except for the following:
- cleanup - Redis does this with TTL or LRU already.
- delete_matched - You really don't want to perform key matching operations in Redis. They are linear time and only support basic globbing.
Like other ActiveSupport::Cache implementations it is possible to cache nil as a value. However, the fetch methods treat nil values as a cache miss and re-generate/re-cache the value. Caching nil isn't recommended.
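A small sketch of that behavior with a made-up key, using the standard fetch-with-block pattern:

cache.write('maybe-empty', nil)
# fetch treats the stored nil as a miss, so the block runs and its result is cached
cache.fetch('maybe-empty') { 'regenerated' } # => "regenerated"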
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request