tkem/cachetools

[Question/Docs] How to pass a lock object when using cache classes?


Hey @tkem

I have a very simple cache implementation:

from cachetools import TTLCache

# CACHE_MAX_SIZE, CACHE_TTL_SECONDS and CacheKey are defined elsewhere in my app.
class EventCache(object):
    def __init__(self):
        self.cache = TTLCache(maxsize=CACHE_MAX_SIZE, ttl=CACHE_TTL_SECONDS)

    def has(self, key: CacheKey):
        # get() returns None for missing or expired keys, so this is only
        # True if the key was put() and has not expired yet
        return self.cache.get(str(key)) is True

    def put(self, key: CacheKey):
        self.cache[str(key)] = True

I use this in a multiprocessing environment, and I sometimes get a KeyError from the cache. I'm now aware that I need a proper locking mechanism to prevent this.
I went through the docs and read many of the closed issues here, and all I could find was:

  • how to pass a lock object to the cached() decorator (see the sketch below), which is NOT my case
  • suggestions to use locking, but without any examples

I've been trying to figure this out for hours, but I don't know how to pass/use a lock with TTLCache directly.
Maybe my approach is wrong.

Any help would be appreciated!
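
For reference, this is roughly the decorator pattern I kept finding, which (as far as I understand) only protects the cache against concurrent threads within one process, not separate processes:

import threading

from cachetools import TTLCache, cached


# rough sketch of the cached() + lock pattern from the docs; the function
# name, maxsize and ttl are made-up placeholders
@cached(cache=TTLCache(maxsize=1024, ttl=600), lock=threading.Lock())
def expensive_lookup(key):
    ...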

I ended up solving it after all. This is what I did:

from multiprocessing import Lock


# Thin wrapper that serializes every cache access through a lock shared by
# all processes.
class AppCache:
    def __init__(self, cache: EventCache, lock: Lock):
        self._lock = lock
        self._cache = cache

    def cache_has(self, key: CacheKey):
        with self._lock:
            return self._cache.has(key)

    def cache_put(self, key: CacheKey):
        with self._lock:
            return self._cache.put(key)

and then in main:

import multiprocessing
from multiprocessing import Lock
from multiprocessing.managers import BaseManager

lock = Lock()


class EventCacheManager(BaseManager):
    pass


# The manager hosts the one real EventCache in its own server process;
# get_event_cache() returns a proxy to it.
EventCacheManager.register('get_event_cache', EventCache)

cache_manager = EventCacheManager()
cache_manager.start()
event_cache = cache_manager.get_event_cache()

first_proc = multiprocessing.Process(
    target=start_proc,
    args=(AppCache(event_cache, lock),)
)
first_proc.start()

second_proc = multiprocessing.Process(
    target=start_proc,
    args=(AppCache(event_cache, lock),)
)
second_proc.start()
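
Not shown above, but the workers should eventually be joined and the manager shut down, e.g.:

# clean shutdown once the workers are done
first_proc.join()
second_proc.join()
cache_manager.shutdown()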

finally, in start_proc:

def start_proc(app_cache: AppCache):
    # do some work
    app_cache.cache_has(CacheKey('some-string'))
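
For completeness, a slightly fuller sketch of how cache_has and cache_put fit together (the key and the "work" here are just placeholders):

def start_proc(app_cache: AppCache):
    key = CacheKey('some-string')      # placeholder key
    if not app_cache.cache_has(key):
        # ... do the actual work once ...
        app_cache.cache_put(key)       # remember that this key was handled

Note that the lock only makes each individual call safe; if the check and the put need to be atomic as a pair, they would have to go behind a single AppCache method that holds the lock across both steps.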