Ever seen a YouTube channel with posts good enough that you got the urge to archive them, *ahem* Kamitsurugi *ahem*? No? Welp, if you ever do, you can use my crappy, unoptimized tool. No worries.
You will need selenium for Python and the Chromium webdriver. On Linux, chromedriver is usually stored in `/usr/lib/chromium/chromedriver`. All other needed libraries should come by default.
Make a `.env` that follows the provided `.env.example`. `CHANNEL` is obviously the channel nick (@whoever) and `FOLDER` is the folder the script needs to create and use; it can be expressed as a full path.
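A minimal `.env` might look like this (the values below are just placeholders, swap in your own channel nick and path):

```
CHANNEL=@somechannel
FOLDER=/home/user/community-archive
```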
The `scraper.py` file uses argparse and therefore has an `-h` help option.
There are only 2 flags available:

- `-s`: Scrapes all the available community posts from the specified channel. Stores the post URLs in `urls.txt` and the images in `FOLDER`.
- `-u`: Since YouTube limits how far back you can view the posts, better run this update utility from time to time so the posts stay up to date. The settings are again taken from the `.env`. I use a simple CRON job that runs this periodically.
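As a rough sketch of how the two flags could be wired up with argparse (the long option names and help strings here are assumptions, not the actual `scraper.py` source):

```python
import argparse

def build_parser():
    # Hypothetical CLI wiring; mirrors the -s / -u flags described above
    p = argparse.ArgumentParser(description="Archive YouTube community posts")
    p.add_argument("-s", "--scrape", action="store_true",
                   help="scrape all available community posts from the channel")
    p.add_argument("-u", "--update", action="store_true",
                   help="fetch only posts newer than the last run")
    return p

# e.g. a full scrape run
args = build_parser().parse_args(["-s"])
print(args.scrape, args.update)
```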
NOTE: Posts that are not images are ignored.
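For the CRON job mentioned above, an entry like this works (the repo path is a placeholder for wherever you cloned the project):

```
# run the update utility every day at 03:00
0 3 * * * cd /path/to/scraper && python scraper.py -u
```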