lonelyenvoy/python-memoization

Caching is not working as expected when both max_size and TTL are used

prathyushark opened this issue · 3 comments

Hi,
I am trying to use both max_size and ttl for my cache, but once a cached element is evicted after the TTL, it is never cached again. Using @cached(ttl=5) works as expected: the element is evicted after 5 seconds, the next call caches it again, and it is retrieved from the cache for the following 5 seconds. But with @cached(max_size=5, ttl=5), after the element is evicted it does not cache subsequent calls, and every call after that hits the function instead.
For example, see the code snippet below:

from memoization import cached
import time

# @cached(ttl=5)  # works as expected
@cached(max_size=5, ttl=5)  # does not cache after ttl
def testing_cache(x):
    print("not cached")
    return x


while True:
    print(testing_cache(5))
    print(testing_cache.cache_info())
    time.sleep(1)

Here is a test case to make it easier to reproduce:

import unittest
from memoization import cached, CachingAlgorithmFlag, _memoization
import random
from threading import Lock
import time


make_key = _memoization._make_key   # bind make_key function
exec_times = {}                     # execution count of each tested function
lock = Lock()                       # for multi-threading tests
random.seed(100)                    # set seed to ensure that test results are reproducible

for i in range(1, 3):
    exec_times['f' + str(i)] = 0  # init to zero


@cached(max_size=5, algorithm=CachingAlgorithmFlag.FIFO, thread_safe=False, ttl=0.5)
def f1(x):
    exec_times['f1'] += 1
    return x


@cached(ttl=0.5)
def f2(x):
    exec_times['f2'] += 1
    return x


class TestMemoization(unittest.TestCase):
    # this test fails
    def test_maxsize_TTL(self):
        self._general_ttl_test(f1)

    # this test passes
    def test_ttl_only(self):
        self._general_ttl_test(f2)

    def _general_ttl_test(self, tested_function):
        # clear
        exec_times[tested_function.__name__] = 0
        tested_function.cache_clear()

        arg = 1
        key = make_key((arg,), None)
        tested_function(arg)
        time.sleep(0.25)  # wait for a short time

        info = tested_function.cache_info()
        self.assertEqual(info.hits, 0)
        self.assertEqual(info.misses, 1)
        self.assertEqual(info.current_size, 1)
        self.assertIn(key, tested_function._cache)

        tested_function(arg)  # this WILL NOT call the tested function

        info = tested_function.cache_info()
        self.assertEqual(info.hits, 1)
        self.assertEqual(info.misses, 1)
        self.assertEqual(info.current_size, 1)
        self.assertIn(key, tested_function._cache)
        self.assertEqual(exec_times[tested_function.__name__], 1)

        time.sleep(0.35)  # wait until the cache expires

        info = tested_function.cache_info()
        self.assertEqual(info.current_size, 1)

        tested_function(arg)  # this WILL call the tested function

        info = tested_function.cache_info()
        self.assertEqual(info.hits, 1)
        self.assertEqual(info.misses, 2)
        self.assertEqual(info.current_size, 1)
        self.assertIn(key, tested_function._cache)
        self.assertEqual(exec_times[tested_function.__name__], 2)

        # The previous call should have been cached, so it must not call the function again
        info = tested_function.cache_info()
        self.assertEqual(info.current_size, 1)

        tested_function(arg)  # this SHOULD NOT call the tested function

        info = tested_function.cache_info()
        self.assertEqual(info.hits, 2)  # FAILS
        self.assertEqual(info.misses, 2)  # FAILS
        self.assertEqual(info.current_size, 1)
        self.assertIn(key, tested_function._cache)
        self.assertEqual(exec_times[tested_function.__name__], 2)


if __name__ == '__main__':
    unittest.main()

Hi Prathyusha,

Thanks for your interest!

This was a bug. When both max_size and ttl are used and a cache entry expires, something goes wrong: on the next call with the same argument, my previous code neither updates the cache entry nor uses the data stored in it, so every subsequent call misses. The cause was some incorrect logic in the handling of multithreading scenarios.
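To illustrate the failure mode, here is a simplified, hypothetical sketch (not the library's actual code): the wrapper detects that an entry has expired and recomputes the value, but never writes the fresh result back, so every later call with the same argument misses the cache again while the stale entry keeps the cache size at 1.

import time

_cache = {}  # key -> (value, expiration_time)

def buggy_lookup(key, user_function, arg, ttl):
    entry = _cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value  # fresh entry: normal cache hit
        # BUG (illustrative): the expired entry is detected and the user
        # function runs again, but the new result is never stored back,
        # so the stale entry stays and every later call is a miss.
        return user_function(arg)
    result = user_function(arg)
    _cache[key] = (result, time.time() + ttl)  # only the first miss is stored
    return result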

Resolved in v0.2.2, now available on PyPI and in GitHub Releases.
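If you installed from PyPI, upgrading should pick up the fix:

pip install --upgrade memoization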

Thank you for addressing this and releasing the latest version.