joanvila/aioredlock

aioredlock errors very quickly under load. Is this a bug?

centosredhat opened this issue · 7 comments

import asyncio

from aioredlock import Aioredlock

redis_instances = [('127.0.0.1', 6379)]

async def test():
    await asyncio.sleep(0.1)
    return 'test'

async def get_lock():
    try:
        lock_manager = Aioredlock(redis_instances)
        async with await lock_manager.lock("resource_name") as lock:
            print(lock.valid)
            result = await test()
            print(result)
        assert lock.valid is False

    except Exception as e:
        print('Lock not acquired:', e)


async def all_tasks():
    tasks = [asyncio.create_task(get_lock()) for _ in range(10000)]
    await asyncio.wait(tasks)

asyncio.run(all_tasks())
error log →
task: <Task pending name='Task-84846' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7f8c476b9c70>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-84847' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7f8c487d0dc0>()]>>
Connection <RedisConnection [db:0]> has pending commands, closing it.
Connection <RedisConnection [db:0]> has pending commands, closing it.
Future exception was never retrieved
future: <Future finished exception=ConnectionForcedCloseError()>
aioredis.errors.ConnectionForcedCloseError
Future exception was never retrieved
future: <Future finished exception=ConnectionForcedCloseError()>
aioredis.errors.ConnectionForcedCloseError
Future exception was never retrieved
future: <Future finished exception=ConnectionForcedCloseError()>
aioredis.errors.ConnectionForcedCloseError
Future exception was never retrieved
[get_lock.log](https://github.com/joanvila/aioredlock/files/6438662/get_lock.log)

This is not a bug in aioredlock. It looks like Redis is closing the connections and aioredis is raising an exception, possibly because you are trying to open 10k connections at once.
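Not part of the thread, but a common way to avoid opening that many connections at once is to cap concurrency with an `asyncio.Semaphore`. A minimal sketch (the limit of 100 and the `do_work` placeholder are illustrative, not from the original script):

```python
import asyncio

active = 0  # tasks currently inside the semaphore
peak = 0    # highest concurrency observed

async def do_work(i: int):
    # stand-in for the real get_lock() body
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0)   # yield, so other tasks can run
    active -= 1

async def throttled(limit: int, n: int):
    # at most `limit` tasks run do_work() concurrently
    sem = asyncio.Semaphore(limit)

    async def one(i: int):
        async with sem:
            await do_work(i)

    await asyncio.gather(*(one(i) for i in range(n)))

asyncio.run(throttled(100, 10000))
print(peak)  # never exceeds the limit of 100
```

With a cap like this, Redis sees at most 100 simultaneous lock attempts instead of 10,000.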

2K connections at once have the same problem. You can try it.

> This is not a bug in aioredlock. It looks like Redis is closing the connections and aioredis is raising an exception, possibly because you are trying to open 10k connections at once.

Can you do 2000 commands with plain aioredis like this? The error is not coming from aioredlock; it is coming from your Redis instance closing the connections.

aioredis with 2000 SET commands does not error (SET resource_name uuid EX 10 NX).
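For reference, the raw-aioredis comparison described above might look like the sketch below. It assumes the aioredis 1.x API and a local Redis on 127.0.0.1:6379, and degrades to a no-op when neither is available:

```python
import asyncio
import uuid

try:
    import aioredis              # aioredis 1.x API assumed
except ImportError:              # let the sketch run without the package
    aioredis = None

async def raw_set_stress(n: int = 2000) -> int:
    """Issue n `SET resource_name <uuid> EX 10 NX` commands, count successes."""
    if aioredis is None:
        return 0                 # aioredis not installed
    try:
        redis = await aioredis.create_redis_pool('redis://127.0.0.1:6379')
    except OSError:
        return 0                 # no local Redis available
    try:
        results = await asyncio.gather(*(
            redis.set('resource_name', str(uuid.uuid4()),
                      expire=10, exist=redis.SET_IF_NOT_EXIST)
            for _ in range(n)
        ))
        return sum(bool(r) for r in results)
    finally:
        redis.close()
        await redis.wait_closed()

acquired = asyncio.run(raw_set_stress())
print(acquired)  # NX: at most one SET wins while the key is held
```

Because of NX plus the 10-second TTL, at most one of the 2000 commands succeeds; the point is that none of them should raise connection errors.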

aioredlock with the same 2000 commands does error.
error →
task: <Task pending name='Task-29924' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba88ed910>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29925' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fabcd3f4f10>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29926' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba88e98e0>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29927' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba86f7fd0>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29928' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fabcd101700>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29929' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba8c00e50>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29930' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba967ac70>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29931' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7faba90278e0>()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-29932' coro=<Aioredlock._set_lock.<locals>.cleanup() running at /usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py:99> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fabcd11a070>()]>>

/usr/local/lib/python3.8/site-packages/aioredlock/algorithm.py
.....
    async def destroy(self):
        self.log.debug('Destroying %s', repr(self))
        for resource, lock in self._locks.items():
            if lock.valid:
                try:
                    await self.unlock(lock)
                except Exception:
                    self.log.exception('Can not unlock "%s"', resource)

        self._locks.clear()
        self._watchdogs.clear()

        await self.redis.clear_connections()
Could `self._locks.clear()` / `self._watchdogs.clear()` here be the cause?

Those just remove all keys from the two dictionaries: https://www.geeksforgeeks.org/python-dictionary-clear/
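For illustration, `dict.clear()` only empties the mappings in place; the keys and values below are made up:

```python
# stand-ins for the manager's internal dicts
locks = {'resource_name': 'lock-object'}
watchdogs = {'resource_name': 'watchdog-task'}

# clear() removes every entry in place; the dict objects themselves survive,
# and no coroutine or task is cancelled by this call
locks.clear()
watchdogs.clear()

print(locks, watchdogs)  # → {} {}
```

So clearing the dicts drops the manager's references to the watchdog tasks without awaiting or cancelling them, which is consistent with the "Task was destroyed but it is pending!" warnings above.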