Simple crawler based on ForkJoinPool tasks.
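As a rough sketch of the idea (the class and method names below are illustrative assumptions, not this project's actual API), each visited page becomes a ForkJoinPool task that forks one subtask per discovered link until the depth limit is reached:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.stream.Collectors;

// Minimal sketch of a ForkJoinPool-based crawl task (Java 8).
// CrawlTask and fetchLinks() are illustrative, not this project's classes.
public class CrawlTask extends RecursiveAction {
    private final String url;
    private final int depth;
    private final Set<String> visited;

    public CrawlTask(String url, int depth, Set<String> visited) {
        this.url = url;
        this.depth = depth;
        this.visited = visited;
    }

    @Override
    protected void compute() {
        // Stop at the depth limit; add() returns false for already-visited URLs.
        if (depth <= 0 || !visited.add(url)) {
            return;
        }
        // Fork one subtask per discovered link and wait for all of them.
        invokeAll(fetchLinks(url).stream()
                .map(link -> new CrawlTask(link, depth - 1, visited))
                .collect(Collectors.toList()));
    }

    private Set<String> fetchLinks(String url) {
        // Placeholder: a real implementation downloads the page and
        // extracts its links here.
        return Collections.emptySet();
    }

    public static void main(String[] args) {
        Set<String> visited = ConcurrentHashMap.newKeySet();
        new ForkJoinPool().invoke(new CrawlTask("https://example.com", 3, visited));
    }
}
```

RecursiveAction fits here because a crawl task produces no return value; the shared concurrent `visited` set keeps parallel tasks from crawling the same page twice.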
- Java: 1.8
- Maven: 3.3.x
How to install - Make sure that you have set the JAVA_HOME and MAVEN_HOME environment variables.
Make sure that you have installed Maven and Java. Then, to build without running the tests, type in the project root folder:
mvn clean install -DskipTests
To build and run the tests, type:
mvn clean install
To run the crawler, execute crawler.bat (Windows) or crawler.sh (Unix/Linux) with the arguments given below:
```
usage: Windows:    crawler -u <url> [-d <depth>] [-g]
       Linux/Unix: ./crawler.sh -u <url> [-d <depth>] [-g]

 -u,--url <url>       Initial URL from which the crawler starts. The URL must have
                      an "http://" or "https://" prefix.
 -d,--depth <depth>   Depth level of the crawler search. Default value is 100. [optional]
 -g,--grouped         Group found links by PageLinkType and save them to separate
                      files. [optional]
```
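For example, a run against a hypothetical site, with a depth of 3 and grouped output, could look like this:

```
./crawler.sh -u https://example.com -d 3 -g
```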
When the crawler finishes its work, the discovered links are saved as XML files to %root_project_folder%/output/%given_url_as_param%/
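The exact XML layout is project-specific; purely as an illustration, link lists can be written with the JAXB classes bundled with Java 8 (the `links`/`link` element names below are assumptions, not this project's actual schema):

```java
import java.util.Arrays;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Rough illustration only: the element names are assumptions,
// not this project's actual XML schema.
@XmlRootElement(name = "links")
public class Links {
    @XmlElement(name = "link")
    public List<String> links;

    public static void main(String[] args) throws Exception {
        Links out = new Links();
        out.links = Arrays.asList("https://example.com/a", "https://example.com/b");
        Marshaller m = JAXBContext.newInstance(Links.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        m.marshal(out, System.out); // a real crawler would write to a File instead
    }
}
```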
Known issues:
- The software uses the Apache Commons UrlValidator, which recognizes some correct links as invalid (see the first sketch after this list).
- File existence is not validated when links grouped by PageLinkType are serialized: the crawler does not know in advance which link types a given domain contains, so it cannot know which file names to look for.
- The crawler cannot find links in dynamically generated components. Maybe this could be a feature extension?
- When a link without a protocol or domain address is found, the crawler prepends the URL given as a parameter (see the second sketch after this list).
- Performance depends on the user's internet connection and on the visited server domain.
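To illustrate the first issue: Apache Commons UrlValidator follows the domain-name RFCs strictly, so it rejects some URLs that resolve fine in practice, for example hostnames containing underscores (a minimal sketch; the example URLs are made up):

```java
import org.apache.commons.validator.routines.UrlValidator;

public class UrlValidatorDemo {
    public static void main(String[] args) {
        UrlValidator validator = new UrlValidator(new String[] {"http", "https"});
        // Hostnames with underscores fail strict domain validation,
        // although such links usually resolve in practice.
        System.out.println(validator.isValid("https://my_host.example.com/page")); // false
        System.out.println(validator.isValid("https://example.com/page"));         // true
    }
}
```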
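For the relative-link issue, the crawler reportedly just prefixes the start URL to the found link. For comparison, java.net.URI provides standards-compliant resolution; this sketch shows the standard library's behavior, not the project's code:

```java
import java.net.URI;

public class ResolveDemo {
    public static void main(String[] args) {
        // Resolving relative links against the start URL given with -u.
        URI base = URI.create("https://example.com/docs/");
        System.out.println(base.resolve("page.html"));           // https://example.com/docs/page.html
        System.out.println(base.resolve("/about"));              // https://example.com/about
        System.out.println(base.resolve("//cdn.example.com/x")); // https://cdn.example.com/x
    }
}
```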
Possible extensions:
- Serialization to other file types (e.g. JSON)
- Mapping to other structure types, e.g. a Map where the key is a page and the value is the list of its children (see the sketch after this list)
- Better handling of HTTP and connection exceptions
- Excluding domains given as parameters from serialization to file
- Maybe improve the concurrency algorithm?
- GUI
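A minimal sketch of the Map-based structure mentioned above (names and URLs are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PageGraphDemo {
    public static void main(String[] args) {
        // Each page maps to the list of links discovered on it.
        Map<String, List<String>> pageGraph = new HashMap<>();
        List<String> children = pageGraph.computeIfAbsent(
                "https://example.com", k -> new ArrayList<>());
        children.add("https://example.com/about");
        children.add("https://example.com/contact");
        System.out.println(pageGraph);
    }
}
```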