Xkcd
fazo96 opened this issue · 35 comments
I plan to archive all the comics in http://xkcd.com/
I think I'll use `(comicnumber)-(comictitle).png` for the image and figure out how to save the alt-text in the PNG metadata
Please post if you want to keep a copy of the archive or you manage to create it before I do :)
@fazo96 how are you going to manage the comics that are dynamic or contain multiple sequential images? Or the map ones that have a larger version available on click?
I would loooooove to have this. But we should make sure that randall is okay with it first, i'm not sure if there are any sort of copyrights involved here. (too bad he doesnt use github, or we could just ping him)
> I would loooooove to have this. But we should make sure that randall is okay with it first, i'm not sure if there are any sort of copyrights involved here.
absolutely. thanks for saying this.
looks like everything is released CC-BY-NC
> (too bad he doesnt use github, or we could just ping him)
i'm sure he has an account. just have to find it \o/
- would be great to include a web viewer with the archive.
- maybe make a dir for every comic
- put the image in both `image.png` and `<original-img-filename>`, so that we respect his filenames too, but also make them predictably linked
- put the alt text in a file, like `alt.txt`

(Alternatively, mirror the RSS feed?)
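A minimal shell sketch of that per-comic layout (the filenames are hypothetical, and the stand-in "image" is just a placeholder file, not a real PNG):

```shell
# Stand-in for a downloaded comic image (placeholder bytes, not a real PNG)
printf 'fake-png-bytes' > woodpecker.png

# One directory per comic, holding both the predictable and the original name
mkdir -p 614
cp woodpecker.png 614/image.png        # predictable name
cp woodpecker.png 614/woodpecker.png   # original filename preserved
printf 'alt text here\n' > 614/alt.txt # alt text alongside the image
```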
The title and the alt text could be stored in the PNG metadata. You can use ImageMagick: see this.
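For illustration, a sketch using ImageMagick's `-set comment` (written out as a PNG text chunk); the filename and alt text here are placeholders, and ImageMagick must be installed:

```shell
# Create a tiny stand-in image, then embed the alt text as a comment
convert -size 1x1 xc:white 614-woodpecker.png
convert 614-woodpecker.png -set comment 'placeholder alt text' 614-woodpecker.png

# Read the comment back out of the PNG metadata
identify -format '%c' 614-woodpecker.png
```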
Looks like license is not an issue as long as we provide credit to randall and include a copy of the license.
Also:
- Storing title and alt-text in the `png` metadata looks like the way to go!
- @whyrusleeping As far as unconventional comics go, we'll figure out a solution for every one of them
- @jbenet a viewer would be great, but at this point, what do you guys think about including the entire website?
Uhm, I just found this in the About page of xkcd.com:
> Is there an interface for automated systems to access comics and metadata?
> Yes. You can get comics through the JSON interface, at URLs like http://xkcd.com/info.0.json (current comic) and http://xkcd.com/614/info.0.json (comic #614).

Getting the data will be a lot easier this way (no HTML parsing involved)
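As a rough sketch, the payload can be turned into the proposed `(comicnumber)-(comictitle).png` filename like this. The JSON below is a stand-in fixture shaped like the real endpoint's response (in practice you'd `curl -s http://xkcd.com/614/info.0.json`); the alt text is a placeholder:

```shell
# Fixture mimicking the info.0.json response (field names match the API)
cat > info.0.json <<'EOF'
{"num": 614, "title": "Woodpecker", "alt": "placeholder alt text", "img": "https://imgs.xkcd.com/comics/woodpecker.png"}
EOF

# The payload is flat, single-line JSON, so sed suffices for a sketch;
# use a real JSON parser (e.g. jq) for anything serious
num=$(sed -n 's/.*"num": \([0-9]*\).*/\1/p' info.0.json)
title=$(sed -n 's/.*"title": "\([^"]*\)".*/\1/p' info.0.json)
printf '%s-%s.png\n' "$num" "$title"
```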
EDIT:
I wrote a Node script that downloads and organizes data from xkcd.com, and it worked!
I created a partial copy of xkcd.com to see if you like the setup (so that we can create a full copy later). I included Randall's about and license pages and my script in the folder.
You can check it out here: QmSeYATNaa2fSR3eMqRD8uXwujVLT2JU9wQvSjCd1Rf8pZ
I'm thinking about writing a simple `index.html` to include in every comic's folder, so that the alt-text, image (and transcript) can all be seen comfortably in the same browser tab
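A rough sketch of generating such a page; the field values are placeholders that a real script would read from each comic's downloaded metadata:

```shell
# Placeholder metadata (a real script would pull these from info.0.json)
title='Woodpecker'
img='image.png'
alt='placeholder alt text'

# Emit a minimal viewer page next to the comic image
cat > index.html <<EOF
<!doctype html>
<title>${title}</title>
<img src="${img}" title="${alt}">
<p>${alt}</p>
EOF
```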
> a viewer would be great, but at this point, what do you guys think about including the entire website?
I think #7 is also quite relevant here.
I completed the archive (now every image file and more is available via ipfs); it just needs a viewer and probably a better folder structure.
Here you go: QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
@cryptix yeah I figured it was necessary :) if you'd like to give it a try, the script I used to generate the directory tree is included in the directory. It's named `xkcd-downloader.js`
If I have time I'll implement it
I have scraped the entirety of xkcd.com and some of its subdomains (apparently cross-subdomain interlinking didn't work); the result is a very well-functioning copy, available at the end of this comment.
EDIT:
Instructions for updating the archive:
- Download and install HTTrack (Windows/Linux/OSX).
- Run `httrack xkcd.com -d -%F "" -%N1 -n +*.css +*.js +*.png +*.jpg +*.jpeg +*.gif -*.pdf -O $mirror,$cache` (or `httrack xkcd.com what-if.xkcd.com ...` to archive what-if as well).
- The command should be done within ~~10 minutes~~ a few hours on a decent link.
- There may be some `.delayed` files in `imgs.xkcd.com/comics`; they contain proper data but have an invalid name. I have no clean solution, so use this command to fix them up: `cd $mirror/imgs.xkcd.com/comics && ls -1 | awk -F. '/delayed/ {print $0 " " $1".png"}' | xargs -n 2 mv`
- `ipfs add -r $mirror`
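The `.delayed` rename from the steps above can be tried offline against throwaway fixture files (the names here are made up):

```shell
# Fixture tree standing in for $mirror/imgs.xkcd.com/comics
mkdir -p demo/imgs.xkcd.com/comics
cd demo/imgs.xkcd.com/comics
touch woodpecker.delayed barrel.delayed

# Same pipeline as in the steps: emit "old new" pairs, feed them to mv
# two arguments at a time (fine as long as names contain no whitespace)
ls -1 | awk -F. '/delayed/ {print $0 " " $1".png"}' | xargs -n 2 mv
ls -1
```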
Switch explanation:
- `-d` - Allow mirroring of subdomains (edit: doesn't seem to work for some reason.)
- `-%F ""` - Disable the footer text (which by default includes a timestamp), allowing deduplication of HTML across updates.
- `-%N1` - Untested, but should fix the 'delayed' files for known file extensions.
- `-n` - Archive resources "near" an HTML file (scripts, CSS, images).
- `+*.css +*.js +*.png +*.jpg +*.jpeg +*.gif` - Also archive all CSS, JS, and images seen outside of HTML (included from JS or CSS, for example).
- `-*.pdf` - Don't download external PDFs (when archiving what-if.xkcd.com).
- `-O $mirror,$cache` - The resulting webpage is put into `$mirror`, while HTTrack runtime info, logs, and caches are put into `$cache`.
- (optional) `-%v2` - Add a progress and statistics display during the crawl.
Archiving notes:
- While HTTrack supports an `--update` switch, it's broken if the `-%F` option has an empty argument, so we need to re-crawl the site completely to update.
- I don't recommend archiving what-if.xkcd.com using the command above, as for some reason the crawler enters Wikipedia and downloads way too much.
- TODO: Check how well m.xkcd.com archives.
- TODO: Archive the "Hoverboard" game/comic (+ other interactive comics, if sensible).
Archive links (newest to oldest): (My IPNS entry might be more up to date)
Looks like the currently referenced version on the website isn't fully available.
@fazo96 do you have the original archive that's currently linked to on the archives.ipfs.io site? https://ipfs.io/ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
It doesn't currently seem to be fully available, but if you still have it I can pin it to my ipfs node. I'd try to reproduce the archive using the script in the archive, but I could only guess what the exact text was in the about and license files.
FWIW I just generated a new version of fazo96's archive that's linked to from the site and pinned it to my ipfs node, so the comics that I couldn't access through the gateway before (in the archive linked to from archives.ipfs.io) now seem to be accessible. The `about` and `license` files still seem to be unavailable, but I just added the relevant pages from the website to the version of the archive I just created.
Awesome, gonna pull that onto one of our storage nodes too. @leerspace wanna make a PR to update the site?
Cool thanks, I just updated https://archives.ipfs.io
@leerspace sorry for replying late, looks like I lost my copy of the original archive. Thanks for updating it!
Hello, I've updated the archive using the `xkcd-downloader.js` script offered in the repo, and it now has all comics up to the latest today (1862). It is currently pinned on my laptop, but I will pin it to my server when I get home so it will be available at all times.
Awesome, thanks @chosenken -- also pinned it on nihal.i.ipfs.io
Updated again to 1864, but this time published it under an IPNS name: QmTaW8vRj4SkM6JhqVhAsibQE9PdJb5PQ2FMwPPc6gBi2h. I might work on a script that pulls new comics down and updates the IPNS entry when it changes.
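A hypothetical sketch of such an update script, assuming a running go-ipfs daemon, the `xkcd-downloader.js` script from the archive, and an `./xkcd` output directory (this is not the poster's actual script, just one way it could look):

```shell
# Sketch only: re-download, re-add, republish. Requires a running ipfs
# daemon; xkcd-downloader.js and ./xkcd are assumed names/paths.
update_archive() {
  node xkcd-downloader.js            # fetch any comics we don't have yet
  hash=$(ipfs add -r -Q ./xkcd)      # -Q prints only the root hash
  ipfs name publish "/ipfs/$hash"    # repoint our IPNS entry at it
}
```

Run `update_archive` from cron (or a systemd timer) to keep the IPNS name current.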
I'd like to update this one again, but to facilitate programmatic access, I'd like to change the structure slightly to something more like:
/ipfs/Qmahash/1/1 - Barrel - Part 1.png
...
/ipfs/Qmbhash/2003/2003 - Presidential Succession.png
where the comic files are contained within a 'folder' defined by the number rather than number and name. Any issues with this? I can host on our server, but I'd also be happy to submit a PR to update the archives.
@carsonfarmer that'd be rad. I've no objection to simplifying the folder structure.
I plan to feature this data set on the start page of the new IPLD Explorer page in the ipfs-webui.
@carsonfarmer could we get some zero padding on those indexes?
/ipfs/QmHash/0001/0001 - Barrel - Part 1.png
...
/ipfs/Qmbhash/2003/2003 - Presidential Succession.png
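For the padding itself, `printf` does the job; the width of 4 is an assumption that comfortably covers the current comic count:

```shell
# Zero-pad the comic number so plain lexicographic sorting works
n=1
padded=$(printf '%04d' "$n")
echo "$padded"                # 0001
mkdir -p "archive/$padded"    # e.g. archive/0001/
```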
Ah sorry, was on vacation. Yes I'll update the indexes and post here when ready.
I've written a new program in Go that creates an archive such as the following: /ipfs/QmdAChzF2JQCx9icrmYHZhFdRSv9TpRjq5q1v5b3ANpxRf. It also includes a CSV with an index of post titles, published dates, and post numbers. I have submitted a PR: ipfs/awesome-ipfs#193
If I were someone who wanted to start pinning content like XKCD on their own node(s) to help network redundancy, which hash would I use? There are many different hashes presented in this thread and it's not clear to me which one is the most relevant or up-to-date.
This seems to be the most up-to-date: Qmb8wsGZNXt5VXZh1pEmYynjB6Euqpq3HYyeAdw2vScTkQ
This one seems to be an exact replica of the first but it has a different hash, perhaps due to the lack of .html on "about" and "license": QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
https://archives.ipfs.io/ seems to favor the first one, TkQ.
And this seems to be the same content as everything above but in a different structure: https://ipfs.io/ipfs/QmdAChzF2JQCx9icrmYHZhFdRSv9TpRjq5q1v5b3ANpxRf
So, which hash do I pin?
I think it would be a good idea to zero-pad the numbers: right now it's impossible to browse.
I added padding to the downloader: /ipfs/QmX4pR3KKdivwY9Pn5mHNYi5FRhtTqapMfFmW4SYesstxU/xkcd-downloader.js
Whole: https://ipfs.io/ipfs/QmX4pR3KKdivwY9Pn5mHNYi5FRhtTqapMfFmW4SYesstxU
I'm not sure why it's not sorted though
I've just created an XKCD archive at /ipns/xkcd.hacdias.com. It is updated every day. Please see the repository for more info: https://github.com/hacdias/xkcd.hacdias.com
@Stebalien should we update the index to /ipns/xkcd.hacdias.com?
I take it your comment is moot now that this has all moved to awesome.ipfs.io?
Is it possible to have two links? Ideally, we'd link to an immutable version as well.
@Stebalien we could also add an immutable version, but that would be a snapshot somewhere in the past. But yes, we could add it as a description perhaps.
Yeah, I know. It's just that archives that rely on DNS sketch me out a bit.
Please see ipfs/awesome-ipfs#261.