Memory leak in Rediska
Closed this issue · 14 comments
I'm getting errors in log:
PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 32 bytes) in Rediska/Connection.php on line 358.
Earlier I had memory_limit set to 128MB and had the same issues.
Any recommendations?
Redis is mostly used for PHP session storage.
Here's info:
redis> info
redis_version:2.0.4
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:21529
uptime_in_seconds:170728
uptime_in_days:1
connected_clients:54
connected_slaves:0
blocked_clients:0
used_memory:1342753216
used_memory_human:1.25G
changes_since_last_save:12084
bgsave_in_progress:1
last_save_time:1289659666
bgrewriteaof_in_progress:0
total_connections_received:11565156
total_commands_processed:65579009
expired_keys:540808
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master
db0:keys=1255559,expires=898138
db12:keys=530010,expires=0
db13:keys=10,expires=0
Rediska uses the default db0.
Because of this the project runs far too slowly; apparently Rediska is the reason.
P.S. sorry for non-stop spamming.
I've just realized that Rediska_Zend_Session_SaveHandler_Redis->gc() tries to fetch that whole bunch of thousands of sessions. Is there a better way to do gc?
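If I read it right, gc() does something along these lines (just my reconstruction for illustration, not the actual Rediska source; key and method names are approximate):

// Fetch the set that tracks every session id, then load each session
// to check its age - with ~900k keys in db0 this blows past memory_limit.
$ids = $this->_rediska->getSet($this->_sessionsSetKey);
foreach ($ids as $id) {
    $data = $this->_rediska->get($id);
    // ... compare timestamps and delete the expired ones ...
}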
We have a fix for this. I will publish it on Monday.
Happy to hear that. For now we're using a dirty fix that just returns true in gc().
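Roughly like this (a sketch of our workaround; the subclass name is made up):

// Stop-gap: neutralize PHP-side garbage collection so it no longer
// tries to load every stored session.
class My_Session_SaveHandler extends Rediska_Zend_Session_SaveHandler_Redis
{
    public function gc($maxlifetime)
    {
        // Do nothing here; each session key expires on its own in Redis.
        return true;
    }
}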
Any news on this?
Sorry for the late answer.
We made a separate (temporary) class for session handling, which is much simpler than the one in Rediska. It uses Redis's native expire functionality, so garbage collection is handled on the Redis side.
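Roughly the idea (a simplified sketch, not our actual class; it assumes only Rediska's basic key commands set/get/delete/expire):

class Simple_Redis_SessionHandler
{
    private $_rediska;
    private $_lifetime;
    private $_prefix = 'PHPSESSIONS_';

    public function __construct(Rediska $rediska)
    {
        $this->_rediska  = $rediska;
        $this->_lifetime = (int) ini_get('session.gc_maxlifetime');
    }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        return (string) $this->_rediska->get($this->_prefix . $id);
    }

    public function write($id, $data)
    {
        $key = $this->_prefix . $id;
        $this->_rediska->set($key, $data);
        // Let Redis drop the key itself when it goes stale.
        $this->_rediska->expire($key, $this->_lifetime);
        return true;
    }

    public function destroy($id)
    {
        $this->_rediska->delete($this->_prefix . $id);
        return true;
    }

    public function gc($maxlifetime)
    {
        // Nothing to do: expired sessions are removed by Redis.
        return true;
    }
}

It gets wired up with session_set_save_handler() before session_start(), same as any custom handler.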
Ivan, maybe we need to enhance Rediska_Zend_Session_SaveHandler_Redis:
- We don't need to use sets for storing session keys
- Let Redis do the garbage collection job (expire)
0ctave, thanks.
I really cannot understand why we should use a set for all the sessions.
The set is used for getting all online users.
There are other ways to get the online users count. And anyway, why do we need to grab ALL of these users on every garbage collection run? :) What if I have millions online?
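For example (just one alternative, sketched as raw Redis commands rather than Rediska calls): keep last-activity timestamps in a sorted set, trim the stale entries, and count what remains:

redis> ZADD online_users 1289659666 session:abc123
redis> ZREMRANGEBYSCORE online_users 0 1289658466
redis> ZCARD online_users

The score is the last-activity timestamp, the second command drops everything older than the window, and ZCARD returns the online count without ever pulling the members into PHP.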
I think the solution is to put the garbage collection process on Redis itself - it is native functionality there.
+1 for letting Redis handle gc. I think it's much more efficient.
PHP-based GC is a bottleneck in my various session handlers. Due to PHP's general scripting nature it's blocking, too. ;-)
Btw, I checked the session save handler: in __construct() it uses session.gc_maxlifetime, and then in write() it uses Rediska::expire() to set the object's TTL.
It looks like we don't need to fix much here:
- we disable gc()
- we tell people to explicitly set session.gc_maxlifetime instead
We could do something like this:

// ini_get() returns false when the option is missing, never null,
// so cast to int and treat anything non-positive as "not configured".
$maxlifetime = (int) ini_get('session.gc_maxlifetime');
if ($maxlifetime <= 0) {
    trigger_error('Please set session.gc_maxlifetime to enable garbage collection.', E_USER_WARNING);
}
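If we go that route, the application bootstrap could also switch off PHP's probabilistic GC entirely (just a suggestion on top of the above, not part of the proposed patch):

// Expiry is handled by Redis, so never trigger PHP's own gc() runs.
ini_set('session.gc_probability', 0);
// The TTL that write() passes to Rediska::expire().
ini_set('session.gc_maxlifetime', 1440);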