Web crawler for Node.JS; both HTTP and HTTPS are supported.
npm install js-crawler
The crawler provides an intuitive interface for crawling links on web sites. Example:
var Crawler = require("js-crawler");

new Crawler().configure({depth: 3})
  .crawl("http://www.google.com", function onSuccess(page) {
    console.log(page.url);
  });
The call to configure is optional; if it is omitted, the default option values will be used.
The onSuccess callback will be called for each page that the crawler has crawled. The page value passed to the callback will contain the following fields:
- url - URL of the page
- content - body of the page (usually HTML)
- status - the HTTP status code
Extra information can be retrieved from the rest of the page fields: error, response and body, which are identical to the ones passed to the callback of a request invocation of the Request module.
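For example, a success callback might use these fields directly; the following is a minimal sketch (the logged values are just illustrative):

var Crawler = require("js-crawler");

new Crawler().configure({depth: 1})
  .crawl("http://www.google.com", function onSuccess(page) {
    // Fields documented above: url, status and content
    console.log(page.url + " responded with status " + page.status);
    console.log("Body length: " + page.content.length);
    // page.error, page.response and page.body mirror the Request module callback arguments
  });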
Alternative APIs for passing callbacks to the crawl function:
var Crawler = require("js-crawler");

var crawler = new Crawler().configure({ignoreRelative: false, depth: 2});

crawler.crawl({
  url: "https://github.com",
  success: function(page) {
    console.log(page.url);
  },
  failure: function(page) {
    console.log(page.status);
  },
  finished: function(crawledUrls) {
    console.log(crawledUrls);
  }
});
It is possible to pass an extra callback to handle errors; consider the first example above, modified as follows:
var Crawler = require("js-crawler");

new Crawler().configure({depth: 3})
  .crawl("http://www.google.com", function(page) {
    console.log(page.url);
  }, function(response) {
    console.log("ERROR occurred:");
    console.log(response.status);
    console.log(response.url);
  });
Here the second callback will be called for each page that could not be accessed (for example, because the corresponding site is down). status may not be defined.
An extra callback can be passed that will be called when all the urls have been crawled and crawling has finished. All crawled urls will be passed to that callback as an argument.
var Crawler = require("js-crawler");

new Crawler().configure({depth: 2})
  .crawl("http://www.google.com", function onSuccess(page) {
    console.log(page.url);
  }, null, function onAllFinished(crawledUrls) {
    console.log('All crawling finished');
    console.log(crawledUrls);
  });
By default the maximum number of HTTP requests made per second is 100, but this can be adjusted with the maxRequestsPerSecond option, either to reduce the load on the network or, conversely, to crawl faster.
var Crawler = require("js-crawler");

var crawler = new Crawler().configure({maxRequestsPerSecond: 2});

crawler.crawl({
  url: "https://github.com",
  success: function(page) {
    console.log(page.url);
  },
  failure: function(page) {
    console.log(page.status);
  }
});
With this configuration at most 2 requests per second will be issued. The actual request rate also depends on the network speed; maxRequestsPerSecond only configures the upper bound. maxRequestsPerSecond can also be fractional: a value of 0.1, for example, would mean at most one request per 10 seconds.
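For instance, a crawler throttled to one request every 10 seconds could be configured as follows (a minimal sketch; the url is just illustrative):

var Crawler = require("js-crawler");

// 0.1 requests per second = at most one request every 10 seconds
var slowCrawler = new Crawler().configure({maxRequestsPerSecond: 0.1});

slowCrawler.crawl("https://github.com", function(page) {
  console.log(page.url);
});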
Even more flexibility is possible with the maxConcurrentRequests option, which limits the number of HTTP requests that can be active simultaneously. If the request rate is too high for a given set of sites or for the network, requests may start to pile up; specifying maxConcurrentRequests helps ensure that the network is not overloaded with accumulating requests.
Both options can be customized when we are not sure how performant the network and sites are: maxRequestsPerSecond limits how many requests the crawler is allowed to make, while maxConcurrentRequests lets the crawler adjust its request rate depending on the real-time performance of the network and sites.
var Crawler = require("js-crawler");

var crawler = new Crawler().configure({
  maxRequestsPerSecond: 10,
  maxConcurrentRequests: 5
});

crawler.crawl({
  url: "https://github.com",
  success: function(page) {
    console.log(page.url);
  },
  failure: function(page) {
    console.log(page.status);
  }
});
By default the values are as follows:
- maxRequestsPerSecond - 100
- maxConcurrentRequests - 10
That is, we expect on average that 100 requests will be made every second and only 10 will be running concurrently, and every request will take something like 100ms to complete.
By default a crawler instance will remember all the urls it has ever crawled and will not crawl them again. In order to make it forget all the crawled urls, the forgetCrawled method can be used. Another way to solve the same problem is to create a new crawler instance.
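A minimal sketch of reusing the same instance (assuming, as the description above suggests, that forgetCrawled takes no arguments):

var Crawler = require("js-crawler");

var crawler = new Crawler().configure({depth: 1});

crawler.crawl("https://github.com", function(page) {
  console.log(page.url);
}, null, function(crawledUrls) {
  // Forget the remembered urls so the same pages can be crawled again
  crawler.forgetCrawled();
  crawler.crawl("https://github.com", function(page) {
    console.log(page.url);
  });
});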
- depth - the depth to which the links from the original page will be crawled. Example: if site1.com contains a link to site2.com, which contains a link to site3.com, depth is 2 and we crawl from site1.com, then we will crawl site2.com but will not crawl site3.com, as it would be too deep. The default value is 2.
- ignoreRelative - ignore relative URLs; relative URLs on the same page will be ignored when crawling, so /wiki/Quick-Start will not be crawled and https://github.com/explore will be crawled. This option can be useful when we are mainly interested in the sites to which the current site refers, rather than in different sections of the original site (see the combined sketch at the end of this section). The default value is false.
- userAgent - user agent to send with crawler requests. The default value is crawler/js-crawler.
- shouldCrawl - function that specifies whether a url should be crawled; returns true or false.
- maxRequestsPerSecond - the maximum number of HTTP requests per second that can be made by the crawler; the default value is 100.
- maxConcurrentRequests - the maximum number of concurrent requests that should not be exceeded by the crawler; the default value is 10.
Example:
var Crawler = require("js-crawler");

var crawler = new Crawler().configure({
  shouldCrawl: function(url) {
    return url.indexOf("reddit.com") < 0;
  }
});

crawler.crawl("http://www.reddit.com/r/javascript", function(page) {
  console.log(page.url);
});
The default value is a function that always returns true.
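As a final sketch, several of the options above can be combined in a single configure call; the user agent string and urls below are purely illustrative:

var Crawler = require("js-crawler");

var crawler = new Crawler().configure({
  depth: 2,
  ignoreRelative: true,
  userAgent: "my-crawler/1.0",
  shouldCrawl: function(url) {
    return url.indexOf("reddit.com") < 0;
  }
});

crawler.crawl("https://github.com", function(page) {
  console.log(page.url);
});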
MIT License (c) Anton Ivanov
The crawler depends on the following Node.JS modules: