CuddleBear92/Hydrus-Presets-and-Scripts

Kemono Downloader keeps throwing 403 and 502

Burby500 opened this issue · 5 comments

Full errors:
Ran out of reattempts on this error: 502: … (Copy note to see full error)
Traceback (most recent call last):
File "hydrus\client\networking\ClientNetworkingJobs.py", line 1257, in Start
raise e
hydrus.core.HydrusExceptions.ShouldReattemptNetworkException: 502:

<title>502 Bad Gateway</title>

502 Bad Gateway

nginx/1.18.0 (Ubuntu)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "hydrus\client\importing\ClientImportFileSeeds.py", line 1144, in WorkOnURL
self.DownloadAndImportRawFile( file_url, file_import_options, network_job_factory, network_job_presentation_context_factory, status_hook )
File "hydrus\client\importing\ClientImportFileSeeds.py", line 317, in DownloadAndImportRawFile
network_job.WaitUntilDone()
File "hydrus\client\networking\ClientNetworkingJobs.py", line 1533, in WaitUntilDone
raise self._error_exception
File "hydrus\client\networking\ClientNetworkingJobs.py", line 1287, in Start
raise HydrusExceptions.NetworkInfrastructureException( 'Ran out of reattempts on this error: ' + str( e ) )
hydrus.core.HydrusExceptions.NetworkInfrastructureException: Ran out of reattempts on this error: 502:

<title>502 Bad Gateway</title>

502 Bad Gateway

nginx/1.18.0 (Ubuntu)

Nothing wrong with the parser at all. The site and its servers are just under constant heavy load; nothing we can do about it. It's up to them to make their servers respond and download faster. A 64 KB/s download speed for files is not normal at all, but it is for them.
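For reference, the "Ran out of reattempts" message in the error above comes from a retry loop that gives up after a fixed number of attempts. Below is a generic sketch of that pattern (hypothetical code, not Hydrus's actual implementation; the function names and limits are assumptions) showing retry with exponential backoff on 502-style failures:

```python
# Generic retry-with-backoff sketch for a flaky upstream that
# intermittently returns 502s. Not Hydrus code; names are illustrative.
import time

def fetch_with_retries(fetch, max_attempts=5, base_delay=1.0):
    """Call fetch() until it succeeds, backing off exponentially on errors."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except IOError:
            if attempt == max_attempts - 1:
                raise  # ran out of reattempts, like the traceback above
            time.sleep(base_delay * (2 ** attempt))

# Example: a fake server that fails twice with a 502, then succeeds.
calls = {'n': 0}

def flaky_fetch():
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError('502 Bad Gateway')
    return 'ok'

print(fetch_with_retries(flaky_fetch, base_delay=0))  # prints: ok
```

When every attempt fails (as in the traceback), the final exception propagates up, which is what produces the NetworkInfrastructureException wrapper here.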

It looks like requests for objects, e.g. https://kemono.party/attachments/xxxxxx/xxxxxx/xxxxxx.jpg, receive a 302 and get balanced onto one of several datax.kemono.party subdomains. If you remove the data subdomain from the parser's downloadable/pursuable URLs, you can let the server do its redirect and get better results.
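The fix described above amounts to normalizing any mirror-pinned URL back to the bare domain so the server's 302 can pick a mirror itself. A minimal sketch of that rewrite, assuming the dataN.kemono.party subdomain pattern described in this thread (the helper name and regex are illustrative, not parser code):

```python
# Sketch: strip a dataN.kemono.party mirror subdomain so the server's
# 302 redirect chooses the mirror. Assumes subdomains look like data2.
import re

def strip_data_subdomain(url: str) -> str:
    """Rewrite https://dataN.kemono.party/... to https://kemono.party/..."""
    return re.sub(r'://data\d+\.kemono\.party/', '://kemono.party/', url)

print(strip_data_subdomain('https://data2.kemono.party/attachments/1/2/3.jpg'))
# prints: https://kemono.party/attachments/1/2/3.jpg
```

A URL that already points at the bare domain passes through unchanged, so the rewrite is safe to apply to every parsed file URL.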

You might actually be right. I did not make the full parser as a whole and overlooked that regex replacement part. No idea why it would replace the URL and prepend the site like that; I need to test a bit. It might well help if it splits the load up across the servers.

Testing the parser without those regex changes now, and it seemingly works. I will have to test a bit more, then I'll push a new release for it. It should speed up the file downloads for sure... in theory.

Cool, although I should say it was throwing the same error for attachments/xxx...xxx.gif as well.