Ruby implementation of your average web crawler. Nothing fancy.
- get all the anchors on a page and add them to the `to_visit` list if they belong to the same domain.
- error handling: 404s and the like.
- sanitize: remove empty links and recognize which type of link we encountered (relative, absolute, mailto, in-page fragment).
- implement the main loop of the crawler
- decide on stop conditions
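The sanitize step above could look something like this, using only the stdlib `URI` module (the function name and exact filtering rules are assumptions, not from the source):

```ruby
require "uri"

# Sketch of the sanitize step: drop empty hrefs, classify each link,
# resolve relative paths against the page URL, and keep only HTTP(S)
# links on the same host (the same-domain filter from the notes above).
def normalize_links(page_url, hrefs)
  base = URI(page_url)
  hrefs.filter_map do |href|
    href = href.to_s.strip
    next if href.empty? || href.start_with?("#")        # empty or in-page anchor
    next if href.start_with?("mailto:", "javascript:")  # non-crawlable schemes
    begin
      uri = URI.join(base, href)                        # resolves relative hrefs
    rescue URI::InvalidURIError
      next                                              # malformed -> skip
    end
    next unless uri.is_a?(URI::HTTP)                    # http or https only
    next unless uri.host == base.host                   # same-domain filter
    uri.fragment = nil                                  # "#section" duplicates collapse
    uri.to_s
  end.uniq
end
```

Resolving against the page URL means relative hrefs like `b.html` come out as full same-domain URLs, while off-domain and non-HTTP links are silently dropped.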
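One way the main loop and stop conditions could fit together (a sketch; `fetch` and `extract_links` are injected lambdas I made up so the HTTP layer, and its 404/timeout handling, stays swappable and testable):

```ruby
require "set"

# Breadth-first crawl loop. Stops when the frontier is empty or when
# max_pages URLs have been visited -- two simple stop conditions.
def crawl(start_url, fetch:, extract_links:, max_pages: 50)
  to_visit = [start_url]
  visited  = Set.new
  until to_visit.empty? || visited.size >= max_pages
    url = to_visit.shift
    next if visited.include?(url)
    visited << url
    body = begin
      fetch.call(url)          # expected to return nil on 404 and friends
    rescue StandardError
      nil                      # network failure: skip this URL, keep crawling
    end
    next unless body
    extract_links.call(url, body).each do |link|
      to_visit << link unless visited.include?(link)
    end
  end
  visited.to_a
end
```

A real run would pass a `fetch` built on `Net::HTTP` and an `extract_links` that parses anchors out of the body; in tests both can be stubbed with plain lambdas over a hash of fake pages.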