This simple Python script downloads all images from one or more Wikipedia articles into a folder.
Windows:
Download the source code from GitHub, or clone it with: git clone https://github.com/strny0/wiki-image-scraper
Install the dependencies:
pip install -r requirements.txt
Run the program:
py scrape_wiki.py links.txt ./output
Linux/Mac:
Download the source code from GitHub, or clone it with: git clone https://github.com/strny0/wiki-image-scraper
Install the dependencies:
pip3 install -r requirements.txt
Run the program:
python3 scrape_wiki.py links.txt ./output
Example URL file (links.txt), one article URL per line:
https://en.wikipedia.org/wiki/Hạ_Long_Bay
https://en.wikipedia.org/wiki/Victoria_Falls
https://en.wikipedia.org/wiki/Great_Zimbabwe
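At its core, a scraper like this fetches each article's HTML and collects the src attribute of every img tag; Wikipedia serves images with protocol-relative URLs (starting with //upload.wikimedia.org/...), which need an https: prefix before downloading. As a rough stdlib-only sketch of that extraction step (not the repository's actual implementation, and the function name extract_image_urls is illustrative):

```python
from html.parser import HTMLParser

class ImgSrcParser(HTMLParser):
    """Collects the src attribute of every <img> tag in a page."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def extract_image_urls(html):
    """Return absolute URLs for every image referenced in the HTML.

    Protocol-relative URLs (//host/path), as used by Wikipedia's image
    CDN, are prefixed with https: so they can be downloaded directly.
    """
    parser = ImgSrcParser()
    parser.feed(html)
    return ["https:" + s if s.startswith("//") else s for s in parser.srcs]
```

Each returned URL can then be fetched and written into the output folder.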
License: MIT