Derived from the Latin word "cella," meaning "storage."
Celli is a versatile library designed for caching and memoization in various runtime environments. It provides two primary functionalities:
- Cache Creation and Management:
  - Offers flexible ways to create and manage caches
  - Provides utilities to create a custom cache in a composable manner
- Memoization Tools:
  - Offers utilities for function memoization
  - Provides decorators for easy caching of class methods
The library is designed to be flexible and extensible, allowing developers to choose the most appropriate caching strategy for their specific needs. It comes with zero dependencies (weighing only 15kb!), is fully typed, and has 100% test coverage, ensuring high quality and reliability.
npm install celli
The main goal of the library is caching functions without effort. We offer utilities that wrap most of the API to provide easy and quick memoization.
import {Cache} from 'celli'
class SomeService {
@Cache({
cacheBy: (userId) => userId,
async: true,
ttl: 1000,
lru: 100
})
async getUserSecret(userId: string) {
return await fetch(`https://some.api/user/${userId}/secret`)
}
}
This code creates a cache behind the scenes and memoizes the `getUserSecret` method. We can specify the following parameters:
- `cacheBy` - function that will be used to determine the cache key
- `async` - if true, the cache will necessarily be asynchronous and will enforce async concurrency and cache promises
- `ttl` - time to live for each item
- `lru` - maximum number of items in the cache; it supports `getItemSize` to allow dynamic allocations
- `dispose` - function that will be called for a deleted item (see the sketch below)
- `effects` - array of effects that will be applied to each item
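For instance, a service that caches live connections might combine `lru` with `getItemSize` and a `dispose` callback. This is only a sketch: `DbClient` and `connectDb` are hypothetical stand-ins, while the decorator options follow the list above.
import {Cache} from 'celli'

// Hypothetical client type and factory, defined only for illustration.
interface DbClient { disconnect(): void }
declare function connectDb(region: string): DbClient

class DbService {
  @Cache({
    cacheBy: (region: string) => region,
    ttl: 60_000, // evict clients after one minute
    lru: {
      maxSize: 50, // hold at most 50 clients
      getItemSize: () => 1 // each client occupies one slot
    },
    dispose: (client: DbClient) => client.disconnect() // free the connection when evicted
  })
  getClient(region: string) {
    return connectDb(region)
  }
}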
But wait! A runtime application needs to manage its resources dynamically. What if we have different caches or we want to use a cache of our own?
Let's examine such a case with an alternative API:
import {createCache, Cache} from 'celli'
const cache = createCache({
ttl: 1000,
lru: {
maxSize: 100,
getItemSize: (item) => 1
}
})
const userContext = {
cacheRef: cache
}
// No need to specify the cache; we specify where to get it from
class SomeService {
@Cache({
cacheBy: (userId) => userId,
from: (context) => context.cacheRef
})
static getUserSecret(context: typeof userContext, userId: string) {
return fetch(`https://some.api/user/${userId}/secret`)
}
}
Using this `from` option specifies where our cache comes from. Each function we cache this way will then be memoized separately for each cache reference, giving us the flexibility to store caches within the application.
Creating a cache instance is quite simple, easily done with the `createCache` utility.
import {createCache} from 'celli'
// This is a simple synchronous cache:
const cache = createCache()
// This will produce an async cache:
const asyncCache = createCache({async: true})
// This will produce an LRU cache with 100 items:
const lruCache = createCache({lru: 100})
// This will produce a TTL cache with 1000ms ttl
const ttlCache = createCache({ttl: 1000})
// This will produce a cache that enforces a lifecycle for items:
const lifecycleCache = createCache({
effects: [
({getSelf, deleteSelf, onRead}) => {
// This code will run once when the item is set
return () => {
// This code will run once when the item is deleted
}
}
]
})
// This will produce a cache that calls the dispose function when an item is deleted
const cacheWithDispose = createCache({
dispose: (client) => {
// This code will run once when the item is deleted
client.disconnect()
}
})
// This will create a cache that leans on another cache for data.
// It's useful if we want to use a service like Redis to hold a bigger cache than our own, while we consume a portion of it in the application.
const cacheWithRemote = createCache({
lru: 100,
source: anotherCacheFromAnotherService
})
And of course, we can combine all of these options together.
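For instance, several of the options above can be combined in one call (the values here are arbitrary):
import {createCache} from 'celli'

// An async LRU+TTL cache with a dispose callback, combining the options above.
const combinedCache = createCache({
  async: true,
  ttl: 1000,
  lru: {maxSize: 100, getItemSize: () => 1},
  dispose: (item) => {
    // runs once when an item is deleted
  }
})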
Each cache implements an API similar to Map's.
We have some simple methods such as `set`, `get`, `delete` and `has`.
The cache is also iterable, in case we need to iterate over its values, keys or entries.
In addition, we get a special `clean()` method.
This method not only clears all of the cache's values, but also waits for every pending cleanup operation, if there are any.
This is important for freeing resources and for graceful shutdowns.
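A quick tour of that surface (a sketch; the Map-style `entries()` helper and the awaitable `clean()` are assumptions based on the description above):
import {createCache} from 'celli'

const cache = createCache()
cache.set('key', 'my-data')
cache.has('key') // true
cache.get('key') // 'my-data'

// Iterate like a Map:
for (const [key, value] of cache.entries()) {
  console.log(key, value)
}

cache.delete('key')

// Clear all values and wait for any pending cleanup operations:
await cache.clean()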
To make things simpler, we also expose a global `clean()` method for all the top-level memoization.
It will clean every memoized function created with the `@Cache` decorator, and is designed for graceful shutdowns:
import {clean} from 'celli'
process.on('SIGTERM', () => {
clean()
})
However, it will not clean custom caches, which are the application's responsibility.
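A minimal shutdown sketch covering both cases, assuming a custom cache created with `createCache`:
import {clean, createCache} from 'celli'

const customCache = createCache({ttl: 1000})

process.on('SIGTERM', async () => {
  await clean() // cleans every top-level @Cache memoization
  await customCache.clean() // custom caches must be cleaned explicitly
})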
Every application has different needs. While it's good practice to configure each cache with LRU and TTL to avoid memory leaks, your application may require its own custom behavior.
For this purpose, we provide a set of utilities to compose caches together.
import {cache, lru, ttl, async, lifeCycle, effects, remote, compose} from 'celli'
const baseCache = cache() // This is a simple synchronous cache
const asyncCache = async()(baseCache) // This will produce an async cache, on top of our base cache
const lruCache = lru({maxSize: 100})(asyncCache) // This will produce an LRU cache with 100 items
const ttlCache = ttl({timeout: 1000})(lruCache) // This will produce a TTL cache with 1000ms ttl
const lifecycleCache = lifeCycle()(ttlCache) // This will produce a cache with lifecycle
const effectsCache = effects([/* your effect functions */])(lifecycleCache) // This will produce a cache with effects
const remoteCache = remote(anotherCacheFromAnotherService)(effectsCache) // This will produce a cache with remote-backup
As you can see, each cache can use another cache to enforce its logic and strategy. Each strategy is exposed as a higher-order function that can wrap around our base cache.
When putting everything together, we get:
import {compose, lru, ttl, lifeCycle, async, effects, remote} from 'celli'
const ultimateCache = compose(
lru(100),
ttl(1000),
lifeCycle(),
async(),
effects([/* your effect functions */]),
remote(anotherCacheFromAnotherService)
)(baseCache)
Cache instances emit events that you can subscribe to:
cache.on('get', (key) => {
console.log('get', key)
})
cache.on('set', (key, value) => {
console.log('set', key, value)
})
cache.on('delete', (key) => {
console.log('delete', key)
})
cache.on('clean', () => {
console.log('clean')
})
Each `on` call returns a callback to unsubscribe from the event:
const unsubscribe = cache.on('get', (key) => {
console.log('get', key)
})
unsubscribe()
As mentioned, we may want to utilize larger caches in other services, such as Redis.
We can achieve this behavior using the `remote` and `source` features.
Since Redis itself is not a cache implementation, we need to design an interface for it.
This is where the `source` cache comes in:
import {source} from 'celli'
const sourceCache = source({
get: async (key) => {
return await fetch(`https://some.api/data/${key}`)
},
set: async (key, value) => {
return await fetch(`https://some.api/data/${key}`, {
method: 'POST',
body: value
})
}
})
This utility helps create an `AsyncCache` that works as a proxy. You can then use this cache as a source for another cache.
A source can be created in two ways: with or without a `set` method.
Providing a `set` method makes this cache a proxy. It will not store any data itself, but will forward the data to the source cache.
If we provide only a `get` method, this cache acts as an `AsyncCache` that stores the data itself, using `get` to "introduce" new items when they are requested.
Creates a basic cache instance.
const baseCache = cache()
baseCache.set('key', 'my-data')
baseCache.get('key') // 'my-data'
Creates a source cache instance, for external loading of data.
const sourceCache = source({
get: async (key) => {
return await fetch(`https://some.api/data/${key}`)
}
})
const externalSourceCache = source({
get: async (key) => {
return await fetch(`https://some.api/data/${key}`)
},
set: async (key, value) => {
return await fetch(`https://some.api/data/${key}`, {
method: 'POST',
body: value
})
},
has: async (key) => {
return !!(await fetch(`https://some.api/data/${key}`))
}
})
A global cleanup method for all the top-level memoization.
process.on('SIGTERM', async () => {
await clean()
})
Applies Least Recently Used (LRU) caching strategy.
const baseCache = cache()
const lruCache = lru({
maxSize: 100,
getItemSize: (item) => 1 // Optional, for dynamic allocation
})(baseCache)
lruCache.set('key', 'my-data')
lruCache.get('key') // 'my-data'
It will enforce its logic seamlessly on an async cache as well:
const asyncCache = async()(baseCache)
const lruAsyncCache = lru({
maxSize: 100,
getItemSize: (item) => 1 // Optional, for dynamic allocation
})(asyncCache)
await lruAsyncCache.set('key', 'my-data')
await lruAsyncCache.get('key') // 'my-data'
Enforces async concurrency for the cache, while also caching its promises. This is recommended as the top layer of the cache to ensure stable usage by the application.
const baseCache = cache()
const asyncCache = async()(baseCache)
await asyncCache.set('key', 'my-data')
await asyncCache.get('key') // 'my-data'
const promise1 = asyncCache.get('key')
const promise2 = asyncCache.get('key')
console.log(promise1 === promise2) // true - concurrent reads of the same key share one cached promise
Applies a lifecycle to the cache items. This higher-order function extends the cache API and allows us to attach effects when setting new items.
const baseCache = cache()
const lifecycleCache = lifeCycle()(baseCache)
lifecycleCache.set('key', 'my-data', [
// Effect that will log on every read and delete
({onRead}) => {
onRead(() => {
console.log('log: onRead')
})
return () => {
console.log('log: deleted')
}
}
])
lifecycleCache.get('key') // 'my-data'
// "log: onRead"
lifecycleCache.delete('key')
// "deleted"
Applies an array of effects to the cache.
This mechanism is identical to `lifeCycle`, but it sets a constant list of effects on all items and doesn't allow the flexibility of effects-per-item.
const baseCache = cache()
const effectsCache = effects([
// Effect that will log on every read and delete
({onRead}) => {
onRead(() => {
console.log('log: onRead')
})
return () => {
console.log('log: deleted')
}
}
])(baseCache)
effectsCache.set('key', 'my-data') // This set() is a normal set(); we don't get the extra parameter for effects.
effectsCache.get('key') // 'my-data'
// "log: onRead"
effectsCache.delete('key')
// "log: deleted"
Creates a cache with a remote backup. This is useful if we want to use a service like Redis to hold a bigger cache than our own, while we consume a portion of it in the application.
const sourceCache = source({
get: async (key) => {
return await fetch(`https://some.api/data/${key}`)
}
})
const baseCache = cache()
const appCache = lru({maxSize: 100})(baseCache)
const appCacheWithRemote = remote(sourceCache)(appCache)
This backup strategy comes with some configuration options as well:
const appCacheWithRemote = remote(sourceCache, {
deleteFromSource: false, // When a value is deleted from the cache - don't delete it from the source
cleanupPolicy: CleanupPolicies.NONE // When the cache is cleaned - don't try to clean the source cache
})(appCache)
In terms of `CleanupPolicies`, we have three options:
- `ALL` - When the cache is cleaned, also clean the source cache
- `NONE` - When the cache is cleaned, don't try to clean the source cache at all
- `KEYS` - When the cache is cleaned, only clean the keys that are present in the local front-cache
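For example, to clean only the locally held keys from the source (assuming `CleanupPolicies` is exported by the library, as in the configuration example above):
import {remote, CleanupPolicies} from 'celli'

// When appCache is cleaned, only the keys present in the
// local front-cache are cleaned from the source.
const appCacheWithKeysPolicy = remote(sourceCache, {
  cleanupPolicy: CleanupPolicies.KEYS
})(appCache)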
Memoizes a function.
const memoizedFunction = memo((a: number, b: number) => a + b)
memoizedFunction(1, 2) // 3
memoizedFunction(1, 2) // 3, but didn't run the function again
memoizedFunction.clean() // This will clear the cache for this function
memoizedFunction(1, 2) // 3, and the function did run again
The memo function supports a third parameter, which can be a cache instance; if we don't provide one, it will create a new one.
The memo function works for async functions as well and will cache their promises.
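A short sketch of the async case (the URL is illustrative; promise sharing follows the async-cache behavior shown earlier):
import {memo} from 'celli'

const getUser = memo(async (id: string) => {
  return await fetch(`https://some.api/user/${id}`)
})

// Concurrent calls with the same argument share the cached promise.
const p1 = getUser('1')
const p2 = getUser('1')
console.log(p1 === p2) // true

getUser.clean() // clear this function's cache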
Decorator for caching class methods.
This decorator expects either cache options, or a function that provides a cache instance from the function's arguments.
If we want to create a new cache for a specific function, we provide cache options (same API as `createCache`), plus an optional `cacheBy` to calculate the key.
import {Cache} from 'celli'
class SomeService {
@Cache({
cacheBy: (userId) => userId,
async: true,
ttl: 1000,
lru: 100
})
async getUserSecret(userId: string) {
return await fetch(`https://some.api/user/${userId}/secret`)
}
}
Otherwise, we provide a function that receives the function's arguments and extracts a cache instance from them.
import {createCache} from 'celli'
const cache = createCache({
ttl: 1000,
lru: {
maxSize: 100,
getItemSize: (item) => 1
}
})
const userContext = {
cacheRef: cache
}
// No need to specify the cache; we specify where to get it from
class SomeService {
@Cache({
cacheBy: (userId) => userId,
from: (context) => context.cacheRef
})
static getUserSecret(context: typeof userContext, userId: string) {
return fetch(`https://some.api/user/${userId}/secret`)
}
}
Important: be careful not to create a new cache inside this `from` callback!
Not only would we get no memoization (every call would use a different cache), we would also consume a lot of memory.
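To make the pitfall concrete, here is a deliberately wrong sketch:
import {Cache, createCache} from 'celli'

// WRONG: a fresh cache is created on every call, so nothing is
// ever memoized and each call leaks another cache instance.
class SomeService {
  @Cache({
    cacheBy: (userId) => userId,
    from: () => createCache() // don't do this
  })
  static getUserSecret(userId: string) {
    return fetch(`https://some.api/user/${userId}/secret`)
  }
}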
Ensures a function is only called once. It works for async functions as well, caching the promise too. This is not recommended if the function accepts arguments, as it caches the result based on the function's identity rather than its arguments.
const getCache = once(() => createCache())
const cache1 = getCache() // creates the cache instance
const cache2 = getCache() // returns the same instance
console.log(cache1 === cache2) // true
Composes multiple functions into a single function. It's not strictly related to caching, but it's a useful utility, especially when combining caches. This is a pretty common implementation, nothing special here.
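A minimal standalone usage, mirroring the cache composition shown earlier (all strategies here appear in the examples above):
import {compose, cache, lru, ttl} from 'celli'

// Combine two strategies over a base cache in a single call,
// as in the larger composition example above.
const myCache = compose(
  lru(100),
  ttl(1000)
)(cache())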
Enum for source cleanup policies in remote caches.
Interface for the basic cache structure.
Interface for the async cache structure.
Union type for `ICache`, `AsyncCache` and all the other wrapped strategies.
Will infer the key type of any cache.
Will infer the value type of any cache.