pulls reddit data from the pushshift api and renders offline-compatible html pages
requires python 3
sudo apt-get install python3-pip
pip3 install psaw
git clone https://github.com/chid/snudown
cd snudown
sudo python3 setup.py install
cd ..
git clone [this repo]
cd reddit-html-archiver
chmod u+x *.py
data is fetched by subreddit and date range and is stored as csv files in `data/`.
./fetch_links.py politics 2017-1-1 2017-2-1
# or add some link/post request filters
./fetch_links.py --self_only --score "> 2000" politics 2015-1-1 2016-1-1
./fetch_links.py -h
you may need to decrease your date range or adjust `pushshift_rate_limit_per_minute` in `fetch_links.py` if you are getting connection errors.
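the rate limit setting simply paces requests so the pushshift api is not hammered. a minimal sketch of that pacing logic (function names here are illustrative, not the script's actual internals):

```python
import time

def request_delay(rate_limit_per_minute):
    """Seconds to sleep between requests to stay under the per-minute cap."""
    return 60.0 / rate_limit_per_minute

def fetch_paced(urls, rate_limit_per_minute=60, fetch=lambda u: u):
    """Apply `fetch` to each url, pausing between calls. `fetch` is a stand-in
    for a real http request; lowering the rate spaces requests further apart."""
    results = []
    for url in urls:
        results.append(fetch(url))
        time.sleep(request_delay(rate_limit_per_minute))
    return results

print(request_delay(120))  # 0.5 seconds between requests at 120/min
```

halving `pushshift_rate_limit_per_minute` doubles the gap between requests, which is why it helps with connection errors at the cost of a slower fetch.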
write html files for all subreddits to the `r/` directory.
./write_html.py
# or add some output filtering
./write_html.py --min-score 100 --min-comments 100 --hide-deleted-comments
./write_html.py -h
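the output filters above read the csv files fetched earlier. a sketch of the same kind of threshold filtering, assuming hypothetical column names (`score`, `num_comments`) since the exact csv layout isn't documented here:

```python
import csv
import io

# stand-in csv data; the real files live under data/<subreddit>/
sample = """id,score,num_comments
abc,250,12
def,50,3
ghi,1200,340
"""

def filter_links(fileobj, min_score=0, min_comments=0):
    """Yield rows meeting both thresholds, mirroring --min-score/--min-comments."""
    for row in csv.DictReader(fileobj):
        if int(row["score"]) >= min_score and int(row["num_comments"]) >= min_comments:
            yield row

kept = list(filter_links(io.StringIO(sample), min_score=100, min_comments=100))
print([r["id"] for r in kept])  # only 'ghi' clears both bars
```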
your html archive has been written to the `r/` directory. once you are satisfied with your archive, feel free to copy/move the contents of `r/` elsewhere and to delete the git repos you have created. everything in `r/` is fully self-contained.
to update an html archive, delete everything in `r/` aside from `r/static` and re-run `write_html.py` to regenerate everything.
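the update step above amounts to "remove everything except `r/static`". a self-contained sketch (the `politics` directory and `index.html` are stand-ins mimicking generated output):

```python
import shutil
from pathlib import Path

# stand-in archive layout mimicking generated output
Path("r/static").mkdir(parents=True, exist_ok=True)
Path("r/politics").mkdir(parents=True, exist_ok=True)
Path("r/index.html").touch()

# delete everything under r/ except r/static, as described above
for entry in list(Path("r").iterdir()):
    if entry.name == "static":
        continue
    if entry.is_dir():
        shutil.rmtree(entry)
    else:
        entry.unlink()
# re-running ./write_html.py would now regenerate the rest
```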
copy the contents of the `r/` directory to a web root or an appropriately served git repo.
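since the archive is fully static, deploying is just a copy. a sketch using python's shutil, with `/tmp/webroot` standing in for a real web root:

```python
import shutil
from pathlib import Path

# stand-in archive so the example is self-contained
Path("r/static").mkdir(parents=True, exist_ok=True)
Path("r/index.html").write_text("<html></html>")

dest = "/tmp/webroot"  # example destination; use your actual web root
shutil.copytree("r", dest, dirs_exist_ok=True)
```

`dirs_exist_ok=True` (python 3.8+) lets repeated deploys overwrite an existing destination in place.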
- fetch_links
    - num_comments filtering
    - thumbnails or thumbnail urls
    - media posts
    - score update
    - scores from reddit with praw
- write_html
    - view on reddit.com
    - real templating
    - filter output per sub, individual min score and comments filters
    - js markdown url previews
    - js powered search page, show no links by default
    - user pages
    - add pagination, posts sorted by score, comments, date, sub
    - too many files in one directory