Connection Pooling doesn't work as intended
Ashish-Bansal opened this issue · 1 comment
Description
I found that connection pooling doesn't work as intended in the django-redis
package when it is used as a cache backend in Django.
Even if you configure it like this:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        # ...
        "OPTIONS": {
            "CONNECTION_POOL_KWARGS": {"max_connections": 100}
        }
    }
}
you still end up using a single connection for django.core.cache.cache
operations.
You only actually get pooling if you interact with the connection pool manually by requesting connections yourself.
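As a minimal sketch of what I mean (assuming django_redis is installed and the cache above is configured; get_redis_connection is a documented django_redis helper, while _created_connections is an internal redis-py counter used here purely for illustration):

from django_redis import get_redis_connection

# get_redis_connection returns the underlying redis.Redis client for an alias
conn = get_redis_connection("default")
pool = conn.connection_pool                  # redis-py ConnectionPool behind it

print(pool.max_connections)                  # 100, from CONNECTION_POOL_KWARGS
print(pool._created_connections)             # raw connections created so far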
Intended behavior
If we configure django_redis as described in the docs, we expect it to use multiple connections to Redis. That matters especially under concurrent load.
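The kind of concurrent usage I have in mind looks roughly like this (a sketch, assuming a configured Django project with the CACHES settings above; the key names are arbitrary):

from concurrent.futures import ThreadPoolExecutor
from django.core.cache import cache

def worker(i):
    # every call goes through the cache backend and, ultimately, get_client
    cache.set(f"key:{i}", i)
    return cache.get(f"key:{i}")

with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(worker, range(200)))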
Stack trace
Here are the code references (a simplified sketch of this flow follows the list):
- Inject django_redis.cache.RedisCache into Django.
- Instantiate a single instance of django_redis.client.DefaultClient and use it to implement the Django cache interface.
- DefaultClient implements the Redis client operations.
- Almost all operations call get_client to get the low-level Redis client.
- get_client maintains a list of Redis connections, one for each configured server.
- If the connection doesn't exist yet, get_client calls connect on the connection_factory (available through ConnectionPool) to get a connection from the pool. Otherwise it reuses the last created connection for that server.
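A simplified paraphrase of that flow (not the actual django_redis source; the names follow the references above, everything else is condensed for illustration):

import redis

class SimplifiedDefaultClient:
    def __init__(self, servers):
        self._servers = servers
        self._clients = [None] * len(servers)   # one cached client per server

    def get_client(self, index=0):
        # Reuse the previously created client for this server if there is one,
        # otherwise create it once via connect().
        if self._clients[index] is None:
            self._clients[index] = self.connect(index)
        return self._clients[index]

    def connect(self, index):
        # In django_redis this goes through a ConnectionFactory; the returned
        # redis.Redis instance is backed by a redis-py ConnectionPool.
        return redis.Redis.from_url(self._servers[index])

This is what made it look to me like a single connection is reused: only one redis.Redis client per server is ever created.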
I think the pooling behaviour would be correct in Django's official implementation, but Django's own Redis backend may not be mature enough yet.
EDIT: Hm, I was wrong. It seems like pooling is handled at the redis.client.Redis level, so the implementation logic is the same as in Django's Redis backend. django_redis's implementation is correct!! I'll dig into why connection pooling isn't working properly in my case.
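To illustrate that point (a quick sketch, assuming a local Redis on the default port): a single redis.Redis instance still pools connections internally, checking one out per command and returning it afterwards.

import threading
import redis

r = redis.Redis()                  # one client, but a ConnectionPool inside

def hammer():
    for _ in range(100):
        r.ping()                   # each command borrows a connection from the
                                   # pool and releases it when it finishes

threads = [threading.Thread(target=hammer) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(r.connection_pool.max_connections)   # the pool's upper bound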
Implementation is correct, it probably works as intended, my bad!