Pickling the queue
hashbangCoder opened this issue · 1 comment
hashbangCoder commented
I've implemented DQL for Flappy Bird in Keras, and I find that pickling more than 50,000 experiences takes over 11 GB of storage due to the inefficiency of pickle (or cPickle, for that matter), while the actual size of the queue is around 5 GB according to sys.getsizeof() (there doesn't seem to be a better way to measure the size of Python objects).
Did you face this issue? I would imagine a database like SQLite would be more efficient.
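For reference, a minimal sketch of what I mean by pickling the queue, assuming the replay memory is a collections.deque of (state, action, reward, next_state, done) tuples with numpy-array states (the shapes and names here are just illustrative). One thing worth noting: on Python 2, pickle/cPickle default to the text-based protocol 0, which inflates numpy arrays badly, whereas passing protocol=pickle.HIGHEST_PROTOCOL keeps them binary (Python 3 already defaults to a binary protocol):

```python
import os
import pickle
from collections import deque

import numpy as np

# Hypothetical replay memory: (state, action, reward, next_state, done) tuples,
# each state an 80x80x4 stack of float64 frames (8 bytes per pixel).
memory = deque(maxlen=50000)
for _ in range(1000):  # small sample just to compare on-disk sizes
    state = np.random.rand(80, 80, 4)
    next_state = np.random.rand(80, 80, 4)
    memory.append((state, 0, 0.1, next_state, False))

# Default protocol vs. the highest binary protocol.
with open('memory_default.pkl', 'wb') as f:
    pickle.dump(memory, f)
with open('memory_binary.pkl', 'wb') as f:
    pickle.dump(memory, f, protocol=pickle.HIGHEST_PROTOCOL)

print(os.path.getsize('memory_default.pkl'), os.path.getsize('memory_binary.pkl'))
```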
hashbangCoder commented
NVM. I tried it out with SQLite and the size grew even bigger. The only way around it, I guess, is to change the size of the numpy float and int types.
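A rough sketch of that dtype change, assuming the same hypothetical deque of (state, action, reward, next_state, done) tuples: store frames as uint8 and rewards as float32, and only cast back to float when sampling a minibatch. With 80x80x4 stacks this takes each state from ~200 KB (float64) down to ~25 KB:

```python
from collections import deque

import numpy as np

memory = deque(maxlen=50000)

def compress(frame_stack):
    """Store frames as uint8 (1 byte/pixel) instead of float64 (8 bytes/pixel)."""
    return (frame_stack * 255).astype(np.uint8)

def add_experience(state, action, reward, next_state, done):
    # Downcast before appending to the queue.
    memory.append((compress(state), np.int8(action),
                   np.float32(reward), compress(next_state), done))

def sample_states(batch_size=32):
    idx = np.random.choice(len(memory), batch_size, replace=False)
    batch = [memory[i] for i in idx]
    # Cast back to float32 only for the minibatch fed to the network.
    states = np.stack([b[0] for b in batch]).astype(np.float32) / 255.0
    next_states = np.stack([b[3] for b in batch]).astype(np.float32) / 255.0
    return states, next_states, batch
```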