Avoid repetition... use functions
badmojr opened this issue · 3 comments
Hello!
I believe you could reduce the code size by avoiding repetition, e.g. with a helper function:
cURL() { curl -L -R -s --compressed --connect-timeout 10 --retry 5 --retry-connrefused --retry-delay 5 -A 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36' "$@"; }
You can also cut download time by using GNU parallel together with the cURL wrapper, e.g.:
URLs=(
badmojr_1hosts_xtra_domains.txt.raw,https://badmojr.github.io/1Hosts/Xtra/domains.txt
shadowwhisperer_malware.txt.raw,https://raw.githubusercontent.com/ShadowWhisperer/BlockLists/master/Lists/Malware
)
export -f cURL
printf '%s\n' "${URLs[@]}" | parallel -j2 --colsep ',' cURL -o '{1}' '{2}'
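If GNU parallel isn't installed, a similar fan-out can be sketched with xargs -P. Here fetch is a hypothetical stand-in for the cURL wrapper that only prints what it would do, so the sketch runs without network access:

```shell
#!/usr/bin/env bash
set -euo pipefail

# fetch stands in for the cURL wrapper above; it only prints, so this
# sketch is safe to run without touching the network.
fetch() { printf 'would save %s from %s\n' "$1" "$2"; }
export -f fetch

URLs=(
  badmojr_1hosts_xtra_domains.txt.raw,https://badmojr.github.io/1Hosts/Xtra/domains.txt
  shadowwhisperer_malware.txt.raw,https://raw.githubusercontent.com/ShadowWhisperer/BlockLists/master/Lists/Malware
)

# Split each "name,url" pair on the first comma, then hand the pairs to
# xargs two-at-a-time (-n2) with up to two jobs in flight (-P2).
for pair in "${URLs[@]}"; do
  printf '%s\n%s\n' "${pair%%,*}" "${pair#*,}"
done | xargs -n2 -P2 bash -c 'fetch "$1" "$2"' _
```

The exported function is visible to the child bash that xargs spawns, which is why `export -f fetch` (like `export -f cURL` above) is needed.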
Cheers!
Hi! thanks for your advice!
I was a bit lazy about cleaning my code and using functions 😄 (since the code just worked!)
You motivated me, and I decided to separate the sources list from the code and implement a parallel mode. Currently I've capped it at 10 parallel downloads to avoid server-side blocking or hitting GitHub's rate limits.
Feel free to report any issues/requests and I will be happy to fix them! 😉
Thanks again!
As I feared, some sites trigger a download limit in parallel mode. Also, comparing the time saved, there isn't much difference here in GitHub Actions (it's fast enough), so I've decided to use sequential download mode for now.
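For reference, a minimal sequential loop over the same "name,url" list might look like the sketch below. This is an illustration, not the repo's actual script; the echo keeps it a dry run, so drop it to really download:

```shell
#!/usr/bin/env bash
set -euo pipefail

# The cURL wrapper from earlier in the thread (User-Agent omitted for brevity).
cURL() { curl -L -R -s --compressed --connect-timeout 10 --retry 5 \
         --retry-connrefused --retry-delay 5 "$@"; }

URLs=(
  badmojr_1hosts_xtra_domains.txt.raw,https://badmojr.github.io/1Hosts/Xtra/domains.txt
)

for pair in "${URLs[@]}"; do
  name=${pair%%,*}              # text before the first comma
  url=${pair#*,}                # text after the first comma
  echo cURL -o "$name" "$url"   # drop the echo to actually download
  sleep 1                       # small pause between requests, easier on servers
done
```

One request at a time (plus the optional pause) keeps well under any per-host limit, at the cost of wall-clock time that, as noted, barely matters inside GitHub Actions.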
Hey,
Idk what happened, but I missed the notifications for this... Sorry for not replying sooner.
Good work on the implementation 👌🏾 .