Server error with cache middleware, Memcached and lots of URLs
davidwtbuxton opened this issue · 3 comments
Hi,
We hit intermittent 500 server errors on a site that uses the cache middleware and the memcached back-end.
- Django 4.2
- Wagtail 5.1.3
- wagtail-cache 2.3.0
- pymemcache 4.0.0
The error is `pymemcache.exceptions.MemcacheServerError: b'object too large for cache'`.
I think the error is caused by `UpdateCacheMiddleware` using a single "keyring" dict to track all URLs, and getting and setting that dict in the cache backend. Memcached defaults to a maximum value size of 1 megabyte, so once the keyring grows past that limit, the memcached server returns an error.
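To put rough numbers on it, here's a back-of-envelope sketch (the keyring shape below is an assumption for illustration; the real structure lives in `wagtailcache/cache.py`):

```python
import pickle

# Hypothetical keyring shape: one entry per cached URL. Django's memcached
# backend pickles values, so it's the serialized dict that hits the 1 MB limit.
keyring = {
    f"http://example.com/some/page/{i}/": [f"views.decorators.cache.cache_page.{i}"]
    for i in range(20_000)
}
print(len(pickle.dumps(keyring)))  # comfortably past the 1_048_576-byte default
```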
```
ERROR django.request:log.py:241 Internal Server Error: /views/template-response-view/
Traceback (most recent call last):
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
    response = get_response(request)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/django/utils/deprecation.py", line 136, in __call__
    response = self.process_response(request, response)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/wagtailcache/cache.py", line 313, in process_response
    self._wagcache.set("keyring", keyring)
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/django/core/cache/backends/memcached.py", line 79, in set
    if not self._cache.set(key, value, self.get_backend_timeout(timeout)):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/hash.py", line 344, in set
    return self._run_cmd("set", key, False, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/hash.py", line 322, in _run_cmd
    return self._safely_run_func(client, func, default_val, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/hash.py", line 211, in _safely_run_func
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/base.py", line 475, in set
    return self._store_cmd(b"set", {key: value}, expire, noreply, flags=flags)[key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/base.py", line 1247, in _store_cmd
    self._raise_errors(line, name)
  File "/Users/david/wagtail-cache/.venv/lib/python3.11/site-packages/pymemcache/client/base.py", line 1042, in _raise_errors
    raise MemcacheServerError(error)
pymemcache.exceptions.MemcacheServerError: b'object too large for cache'
```
Here's a test that passes when the error happens:
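(A minimal sketch of the idea, assuming wagtail-cache's test project, where the `/views/template-response-view/` URL from the traceback is routed, and a memcached-backed default cache; the class name and loop size are illustrative.)

```python
from django.test import TestCase
from pymemcache.exceptions import MemcacheServerError


class KeyringTooLargeTest(TestCase):
    def test_many_urls_overflow_keyring(self):
        # Each distinct URL adds an entry to the shared "keyring" dict,
        # so enough unique URLs push its pickled size past 1 MB.
        padding = "x" * 200
        with self.assertRaises(MemcacheServerError):
            for i in range(20_000):
                self.client.get(f"/views/template-response-view/?p={padding}&i={i}")
```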
You need to run a memcached server when running the test, which I did with `docker run --rm -it -p 11211:11211 memcached -v`.
I'm not sure what a good fix would be, but I reckon the memcached server error certainly shouldn't cause the website to return a 500.
Thanks,
David
Oh! And thanks for developing wagtail-cache.
Interesting issue... yes that is definitely a concern. I'm not at all familiar with memcached, but my instinct is that there must be a setting for allowing larger values.
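(For what it's worth, memcached does have such a setting: the `-I` flag raises the maximum item size, e.g. `docker run --rm -it -p 11211:11211 memcached -I 8m`. Since the keyring grows with every new URL, though, that would only postpone the error.)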
At any rate, we should try/except such errors when updating the cache, as a short-term solution.
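In `process_response`, that could look something like this (a sketch of the idea only, not the actual contents of the branch; the helper name and logger are illustrative):

```python
import logging

logger = logging.getLogger("wagtail-cache")


def set_keyring_safely(cache, keyring):
    # Never let a cache-backend failure (e.g. MemcacheServerError when the
    # keyring outgrows memcached's item size limit) escape and turn an
    # otherwise fine response into a 500; the page is simply served uncached.
    try:
        cache.set("keyring", keyring)
    except Exception:
        logger.exception("Failed to update the wagtail-cache keyring")
```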
@davidwtbuxton would you be able to test out the `errors` branch, and confirm it fixes your issue?
```
pip install https://codeload.github.com/coderedcorp/wagtail-cache/zip/refs/heads/errors
```