xmendez/wfuzz

Problem with not finishing execution when "Operation timed out" using more than 10 threads

Internon opened this issue · 2 comments

Context

Please check:

  • [x] I've read the docs for Wfuzz

Please describe your local environment:

Wfuzz version: Output of wfuzz --version
3.1.0

Python version: Output of python --version
Tested in 2.7.18 and 3.9.1+

OS:
Linux kali 5.10.0-kali3-amd64

Report

What is the current behavior?

Sometimes the script gets stuck after raising the following error, and sometimes it finishes normally after the error:
"/usr/local/lib/python3.9/dist-packages/wfuzz/wfuzz.py:77: UserWarning: Fatal exception: Pycurl error 28: Operation timed out after 10002 milliseconds with 0 bytes received"
[screenshot: wfuzz output stuck after the timeout error]

The run shown in the screenshot started an hour ago and still has not finished.

What is the expected or desired behavior?

wfuzz should always end its execution when the "Operation timed out" error is raised, unless I add the -Z parameter.

Please provide steps to reproduce, including exact wfuzz command executed and output:

The target is a web application that blocks me after 245 requests and unblocks me 30 seconds later.

Command:
wfuzz --conn-delay 10 --req-delay 10 --efield url -t 40 --filter "not (c=BBB and l=BBB and w=BBB)" -w /home/kali/Desktop/tools/inter-recon/dictionaries/without-slash/dict-small-without-slash.txt --zE urlencode -f $(pwd)/ips-left/fuzzing/https---10.11.1.44-8000-.txt -L https://10.11.1.44:8000/FUZZ{asdfnottherexxxasdf}

Other relevant information:

I was not able to find a pattern for when it gets stuck versus when it finishes.
My plan is to automate this process, but I can't do that if it sometimes gets stuck after raising an error.
The other option for automation is to add the "-Z" parameter, but that causes problems when a server blocks me after X requests (I would then wait the full --conn-delay and --req-delay on every request) or when the VPN is down.
Raising --conn-delay and --req-delay helps when fuzzing such hosts, but it does not help with the automation.

Hi,

The problem seems to be related to the "th.join()" call in "/usr/local/lib/python3.9/dist-packages/wfuzz/myhttp.py", in the "cleanup" method at line 120.
I have debugged the Python files and the last step reached is the join call. I'm not a developer, so I'm not familiar with these kinds of errors, but it seems to be waiting for the thread to stop, and the thread never stops.

If I comment out the "th.join()" line, the execution finishes without any problem. That said, I don't think this workaround is the right way to solve the issue; do you have any possible solution for this?
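
As a less invasive interim workaround than deleting the line, joining with a timeout also lets the process exit. This is only a sketch against a simplified cleanup(); the attribute names (self._threads, self.exit_job) are illustrative, not necessarily the exact wfuzz internals:

```python
# Sketch only: a simplified cleanup() that bounds how long it waits for
# worker threads instead of blocking forever. Names are illustrative.
def cleanup(self):
    self.exit_job = True           # signal workers to stop (assumed flag)
    for th in self._threads:
        th.join(timeout=5)         # give up after 5 seconds instead of hanging
        if th.is_alive():
            # The worker is stuck (e.g. waiting on a lock we hold);
            # report it rather than blocking the whole process.
            print("warning: thread %s did not exit cleanly" % th.name)
```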

I will keep testing/reading.

In the end, the problem seems to reside in self.mutex_stats, which both of the following functions acquire, and which "deregister" is still holding while th.join() executes:
  • _process_curl_handle_error
  • deregister

The "deregister" and the "_process_curl_handle_error" are running in parallel.
At one point the "deregister" is locking the thread and after that, it is calling the "th.join()".
Before calling the th.join(), the function "_process_curl_handle_error" is being waiting for the thread lock release of the "deregister" but the "deregister" never releases the thread as it calls the th.join() blocking the main thread until the thread ends.

With that, the thread never ends because the function "_process_curl_handle_error" is waiting for the thread release of the "deregister" that is waiting the end of the th.join() in order to release the thread that is waiting the function "_process_curl_handle_error" to end and it is making a ¿deadlock?.

Sometimes, the "deregister" is being executed after finishing the "_process_curl_handle_error" and it is finishing the execution of wfuzz as the "th.join()" doesn't need to wait for the function "_process_curl_handle_error" because it has already finished.

I think that making this process sequential, so that "_process_curl_handle_error" runs before "deregister", would solve this issue.
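
Equivalently, it might be enough for "deregister" to release self.mutex_stats before calling th.join(), so the worker can still acquire the lock and finish "_process_curl_handle_error". A rough sketch of that idea, again with illustrative names rather than the actual wfuzz implementation:

```python
# Sketch of the idea: do the bookkeeping that needs the lock inside the
# critical section, but call join() only after releasing it.
def deregister(self):
    with self.mutex_stats:
        self._registered -= 1      # assumed bookkeeping under the lock
    # No lock is held here, so the worker thread is free to acquire
    # self.mutex_stats, finish _process_curl_handle_error(), and exit.
    for th in self._threads:
        th.join()
```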