IlyaSkriblovsky/txredisapi

Lazy connection pool - pipeline issue

irshad-qb opened this issue · 4 comments

txredisapi (trying the master branch, which has the fix for the pipeline pool) hangs when more pipeline commands are executed concurrently than the maximum connection pool size.

How to reproduce:
1- Create a lazyConnectionPool with pool size 2
2- Execute more than 10 pipeline requests concurrently
A simple sample to reproduce this is added below. The code should print a result for all 100 requests, but it only prints around 60.

import txredisapi as redis

from twisted.internet import defer
from twisted.internet import reactor

def print_result(result, count=0):
    print("got the result for count %d" % count)

def process(pipeline):
    # Queue one command on the pipeline and flush it.
    pipeline.smembers('key1')
    return pipeline.execute_pipeline()

@defer.inlineCallbacks
def main():
    rc = redis.lazyConnectionPool(poolsize=1)

    # Fire 100 pipeline requests without waiting for them to complete.
    for i in range(100):
        rc.pipeline().addCallback(process).addCallback(print_result, count=i)
    yield rc.disconnect()


if __name__ == "__main__":
    main()
    reactor.run()
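
For reference, here is a variant of the script above (an editor's sketch, not part of the original report) that keeps each request's Deferred and waits for all of them before disconnecting; with a working pool this should print all 100 results and then stop the reactor.

import txredisapi as redis

from twisted.internet import defer
from twisted.internet import reactor

def print_result(result, count=0):
    print("got the result for count %d" % count)

def process(pipeline):
    pipeline.smembers('key1')
    return pipeline.execute_pipeline()

@defer.inlineCallbacks
def main():
    rc = redis.lazyConnectionPool(poolsize=1)

    # Keep every request's Deferred so we can wait for all of them
    # before tearing the pool down.
    requests = []
    for i in range(100):
        d = rc.pipeline().addCallback(process).addCallback(print_result, count=i)
        requests.append(d)

    yield defer.gatherResults(requests)
    yield rc.disconnect()
    reactor.stop()

if __name__ == "__main__":
    main()
    reactor.run()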

Hmmm, that's odd. I was not able to reproduce this on my machine (using the HEAD of master on fiorix/txredisapi).

I tried setting the poolsize to 1, 2, and 10, and tried both an empty set at key1 and a set with several elements, but each time all 100 requests completed successfully. Can you supply additional details about your test setup?

The DeferredQueue (https://github.com/fiorix/txredisapi/blob/3834aeee670720c807c0de9a220a76aa984a303e/txredisapi.py#L2051) should block if we try to get a connection and none are available (https://github.com/fiorix/txredisapi/blob/3834aeee670720c807c0de9a220a76aa984a303e/txredisapi.py#L2104-L2116).
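
For illustration only, here is a simplified sketch of that DeferredQueue checkout pattern (the class and method names are made up for this sketch and are not txredisapi's actual code): get() returns a Deferred that only fires once some other caller puts a connection back.

from twisted.internet import defer

class SimplifiedPool(object):
    # Toy illustration of a DeferredQueue-based connection pool.

    def __init__(self, connections):
        self._queue = defer.DeferredQueue()
        for conn in connections:
            self._queue.put(conn)

    def acquire(self):
        # If the queue is empty, the returned Deferred does not fire
        # until release() puts a connection back, so callers wait
        # (asynchronously) instead of failing.
        return self._queue.get()

    def release(self, conn):
        self._queue.put(conn)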

@steiza I was testing my test case against the 1.3 branch of txredisapi, which does not have the pool fix you made. That made me think it was an issue with the pipeline/pool combination.
Actually, the issue I mentioned was in my application code. Some connections were being requested while a Redis pipeline had been created but not yet executed. With more concurrent users, this created the blocking (deadlock-like) scenario you mentioned above while waiting for a connection from the pool, roughly the pattern sketched below. I fixed it by avoiding code that uses more than one connection concurrently. Thanks a lot for helping me find the root cause, and apologies for wasting your time with my test case.
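
For illustration, the problematic interleaving looks roughly like this (an editor's sketch with hypothetical application code, assuming poolsize=1):

import txredisapi as redis
from twisted.internet import defer

@defer.inlineCallbacks
def handler(rc):
    # The pipeline checks out the pool's only connection and holds it
    # until execute_pipeline() runs.
    pipeline = yield rc.pipeline()
    pipeline.smembers('key1')

    # While the pipeline is still open, this command asks the same pool
    # for another connection. With poolsize=1 nothing is ever released,
    # so this Deferred never fires and execute_pipeline() is never
    # reached: the deadlock-like scenario described above.
    yield rc.get('some_other_key')

    yield pipeline.execute_pipeline()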
Please close this issue. It seems I don't have the privilege.

I also see this behavior in my applications: when I have a burst of Redis activity, it causes pauses in the rest of my program. I did not expect such behavior. I am still investigating it.

We're running txredisapi version 1.4.3

I believe that pipeline() is mostly useless. Thanks to the async nature of txredisapi you can issue as many parallel requests as you like, even with a single connection, and they will be pipelined automatically. There won't be any performance gain from batching requests together with pipeline() (or it will be very small).
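
As a minimal sketch of that alternative (an editor's example, with a hypothetical fetch_many helper): issue the commands in parallel and collect their Deferreds, without pipeline()/execute_pipeline().

import txredisapi as redis
from twisted.internet import defer

@defer.inlineCallbacks
def fetch_many(rc, keys):
    # Each call returns immediately with a Deferred; the commands are
    # written to the connection as they are issued, so they end up
    # pipelined on the wire without an explicit pipeline.
    deferreds = [rc.smembers(key) for key in keys]
    results = yield defer.gatherResults(deferreds)
    defer.returnValue(results)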

This doesn't excuse deadlocks, though. And if the problem really is in the txredisapi code, it may affect watch()/multi() too, because the implementation is somewhat similar.