Unhandled 'error' event
Got this error after downloading 155 of roughly 3000 images:
```
[3088] Saved: 04007_autumnsunlightthroughthetrees_1280x1024.jpg
events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at _errnoException (util.js:992:11)
    at TCP.onread (net.js:618:25)
```
You had an error in your internet connection. If you try it again, it will work. There is no need to change the code for this; it was a genuine connection error, not a bug in interfacelift downloader.
I use it weekly and have never had an error; I download 1080p, 4K, and 5K sets back to back in one run.
It is pretty pointless to report bugs when getting this kind of attitude.
Just start the program again; if your internet connection is stable, it will throw no error.
The bug was in the connection, not in the program. You could catch and handle the error, but people tend not to do extra work when the program already runs fine on a stable connection.
The ECONNRESET error is a very common, well-known error in Node.js.
It means the internet connection was dropped while a program (this one or any other) was using it.
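For context: Node.js rethrows any 'error' event that has no listener attached, which is exactly the events.js:183 crash in the report above. A minimal sketch of that mechanism, using a plain EventEmitter with no network involved:

```js
const { EventEmitter } = require('events');

const emitter = new EventEmitter();

// No 'error' listener is attached, so Node throws the error object and
// crashes the process -- the "throw er; // Unhandled 'error' event" line
// from the report above.
emitter.emit('error', new Error('read ECONNRESET'));

// Attaching a listener first would prevent the crash:
// emitter.on('error', (err) => console.error('handled:', err.message));
```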
Sorry for the delayed response; somehow I missed this GitHub issue.
I cannot reproduce this problem. Does this happen every time you run the script? If so, does it fail after the same number of files? If you wait a while does it start working again?
I'm thinking that it's either a connectivity issue or you're running into some kind of rate limiting from the web site.
I think your scraper already runs slowly enough to stay under any rate limit; I have never had an ECONNRESET error. Rate limiting would come back as an HTTP error status (such as 429), not a reset connection.
This was a connection issue.
You can reproduce it: start a scrape and turn off your network partway through, and you will get this same error.
You could print a nicer error, like "internet connection lost", instead of crashing.
Reopening as a possible improvement.
Yeah, it would probably make sense to catch any errors emitted by http.get()
and print a more user-friendly message.
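Something along these lines, as a rough sketch; the URL is a placeholder, and a real download would stream the response to a file:

```js
const http = require('http');

// Placeholder URL for illustration only.
const request = http.get('http://example.com/wallpaper.jpg', (res) => {
  res.resume(); // drain the response; a real download would pipe it to a file
});

// Errors such as ECONNRESET are emitted on the request object. Without
// this listener they are rethrown and kill the process; with it we can
// print something readable instead.
request.on('error', (err) => {
  if (err.code === 'ECONNRESET') {
    console.error('Connection lost while downloading. Check your internet connection and try again.');
  } else {
    console.error(`Download failed: ${err.message}`);
  }
});
```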
I think some retry logic would be a good addition to handle a randomly dropped connection. If a download fails with a connection error, wait a few seconds (probably with exponential backoff) and retry; repeat up to some limit, then fail. This would make the program all-around more resilient to random failures.
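Roughly what I have in mind, sketched below; the helper name, placeholder URL, and delay schedule are illustrative, not the actual implementation:

```js
const https = require('https');
const fs = require('fs');

// Hypothetical helper: retry a single download with exponential backoff.
function downloadWithRetry(url, dest, attempt = 1, maxAttempts = 3) {
  const request = https.get(url, (res) => {
    res.pipe(fs.createWriteStream(dest));
  });

  request.on('error', (err) => {
    if (attempt >= maxAttempts) {
      console.error(`Giving up on ${url} after ${maxAttempts} attempts: ${err.message}`);
      return; // the caller would move on to the next file in the queue
    }
    const delayMs = Math.pow(2, attempt) * 1000; // 2s, 4s, 8s, ...
    console.error(`Connection error (${err.code}); retrying in ${delayMs / 1000}s`);
    setTimeout(() => downloadWithRetry(url, dest, attempt + 1, maxAttempts), delayMs);
  });
}

// downloadWithRetry('https://example.com/wallpaper.jpg', 'wallpaper.jpg');
```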
OpenWrt's download logic typically retries 5 times, waiting 5, 10, 15, 20, and 30 seconds between attempts, then gives up.
Fixed - or at least improved - in commit 0357eb2e3187525dd3b7e18273d3c7d173ba6131.
I have added error handling to download operations that will catch connection errors and retry the download after an increasing delay. The retry is limited to 3 attempts, after which it will move on to the next file in the download queue.
This fix is included in the latest 2.4.0 release.