Download a website to a local directory (including all css, images, js, etc.)
You can try it in the demo app (source).
Note: by default, dynamic websites (where content is loaded by js) may not be saved correctly because website-scraper doesn't execute js; it only parses http responses for html and css files. If you need to download a dynamic website, take a look at website-scraper-phantom.
npm install website-scraper
var scrape = require('website-scraper');
var options = {
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save/',
};

// with promise
scrape(options).then((result) => {
  /* some code here */
}).catch((err) => {
  /* some code here */
});

// or with callback
scrape(options, (error, result) => {
  /* some code here */
});
- urls - urls to download, required
- directory - path to save files, required
- sources - selects which resources should be downloaded
- recursive - follow hyperlinks in html files
- maxRecursiveDepth - maximum depth for hyperlinks
- maxDepth - maximum depth for all dependencies
- request - custom options for request
- subdirectories - subdirectories for file extensions
- defaultFilename - filename for index page
- prettifyUrls - prettify urls
- ignoreErrors - whether to ignore errors on resource downloading
- urlFilter - skip some urls
- filenameGenerator - generate filename for downloaded resource
- httpResponseHandler - customize http response handling
- resourceSaver - customize resources saving
- onResourceSaved - callback called when resource is saved
- onResourceError - callback called when resource's downloading is failed
- updateMissingSources - update url for missing sources with absolute url
- requestConcurrency - set maximum concurrent requests
You can find the default options in lib/config/defaults.js or get them using scrape.defaults.
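For example, you can log the defaults to see what will be used when an option is not set explicitly (a minimal sketch):
var scrape = require('website-scraper');

// Prints the default values for options such as sources, subdirectories and defaultFilename
console.log(scrape.defaults);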
Array of objects which contain urls to download and filenames for them. Required.
scrape({
  urls: [
    'http://nodejs.org/', // Will be saved with default filename 'index.html'
    {url: 'http://nodejs.org/about', filename: 'about.html'},
    {url: 'http://blog.nodejs.org/', filename: 'blog.html'}
  ],
  directory: '/path/to/save'
}).then(console.log).catch(console.log);
String, absolute path to the directory where downloaded files will be saved. The directory should not exist; it will be created by the scraper. Required.
Array of objects which specify selectors and attribute values to select files for downloading. By default the scraper tries to download all possible resources.
// Downloading images, css files and scripts
scrape({
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save',
  sources: [
    {selector: 'img', attr: 'src'},
    {selector: 'link[rel="stylesheet"]', attr: 'href'},
    {selector: 'script', attr: 'src'}
  ]
}).then(console.log).catch(console.log);
Boolean. If true, the scraper will follow hyperlinks in html files. Don't forget to set maxRecursiveDepth to avoid infinite downloading. Defaults to false.
Positive number, maximum allowed depth for hyperlinks. Other dependencies will be saved regardless of their depth. Defaults to null - no maximum recursive depth set.
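For example, a minimal sketch that follows hyperlinks up to a limited depth (the depth value here is only an illustration):
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  recursive: true,
  maxRecursiveDepth: 3 // limit hyperlink depth to avoid infinite downloading
}).then(console.log).catch(console.log);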
Positive number, maximum allowed depth for all dependencies. Defaults to null - no maximum depth set.
Object, custom options for request. Allows you to set cookies, userAgent, etc.
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  request: {
    headers: {
      'User-Agent': 'Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19'
    }
  }
}).then(console.log).catch(console.log);
Array of objects, specifies subdirectories for file extensions. If null, all files will be saved to directory.
/* Separate files into directories:
- `img` for .jpg, .png, .svg (full path `/path/to/save/img`)
- `js` for .js (full path `/path/to/save/js`)
- `css` for .css (full path `/path/to/save/css`)
*/
scrape({
  urls: ['http://example.com'],
  directory: '/path/to/save',
  subdirectories: [
    {directory: 'img', extensions: ['.jpg', '.png', '.svg']},
    {directory: 'js', extensions: ['.js']},
    {directory: 'css', extensions: ['.css']}
  ]
}).then(console.log).catch(console.log);
String, filename for index page. Defaults to index.html.
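For example, a minimal sketch that saves index pages under a different name (the filename is only an illustration):
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  defaultFilename: 'main.html' // saved instead of the default index.html
}).then(console.log).catch(console.log);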
Boolean, whether urls should be 'prettified' by having the defaultFilename removed. Defaults to false.
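For example, a minimal sketch; per the description above, the defaultFilename part is removed from urls in the saved pages:
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  prettifyUrls: true // strip the defaultFilename ('index.html') from urls in saved pages
}).then(console.log).catch(console.log);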
Boolean. If true, the scraper will continue downloading resources after an error occurs; if false, the scraper will finish the process and return the error. Defaults to true.
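For example, a minimal sketch that aborts the whole scrape on the first failed resource instead of skipping it:
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  ignoreErrors: false // finish the process and return the error
}).then(console.log).catch((err) => console.error('scrape failed:', err));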
Function which is called for each url to check whether it should be scraped. Defaults to null - no url filter will be applied.
// Links to other websites are filtered out by the urlFilter
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: function(url) {
    return url.indexOf('http://example.com') === 0;
  },
  directory: '/path/to/save'
}).then(console.log).catch(console.log);
String, name of one of the bundled filenameGenerators, or a custom filenameGenerator function. The filename generator determines where the scraped files are saved.
When the byType filenameGenerator is used, the downloaded files are saved by type (as defined by the subdirectories setting) or directly in the directory folder, if no subdirectory is specified for the specific type.
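For example, a minimal sketch that selects byType explicitly together with subdirectories (the extension groups are only an illustration):
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  filenameGenerator: 'byType',
  subdirectories: [
    {directory: 'img', extensions: ['.jpg', '.png']},
    {directory: 'css', extensions: ['.css']}
  ]
}).then(console.log).catch(console.log);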
When the bySiteStructure filenameGenerator is used, the downloaded files are saved in directory using the same structure as on the website:
- / => DIRECTORY/example.com/index.html
- /about => DIRECTORY/example.com/about/index.html
- //cdn.example.com/resources/jquery.min.js => DIRECTORY/cdn.example.com/resources/jquery.min.js
// Downloads all the crawlable files. The files are saved in the same structure as the structure of the website
// Links to other websites are filtered out by the urlFilter
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: function(url) { return url.indexOf('http://example.com') === 0; },
  recursive: true,
  maxDepth: 100,
  filenameGenerator: 'bySiteStructure',
  directory: '/path/to/save'
}).then(console.log).catch(console.log);
Function which is called on each response; allows you to customize the resource or reject its downloading. It takes 1 argument (the response object of the request module) and should return a resolved Promise if the resource should be downloaded, or a Promise rejected with an Error if it should be skipped.
The Promise should be resolved with either:
- a string which contains the response body, or
- an object with properties body (the response body, string) and metadata - everything you want to save for this resource (like headers, original text, timestamps, etc.); the scraper will not use this field at all, it is only included in the result.
// Rejecting resources with 404 status and adding metadata to other resources
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  httpResponseHandler: (response) => {
    if (response.statusCode === 404) {
      return Promise.reject(new Error('status is 404'));
    } else {
      // if you don't need metadata - you can just return Promise.resolve(response.body)
      return Promise.resolve({
        body: response.body,
        metadata: {
          headers: response.headers,
          someOtherData: [ 1, 2, 3 ]
        }
      });
    }
  }
}).then(console.log).catch(console.log);
The scrape function resolves with an array of Resource objects which contain the metadata property from httpResponseHandler.
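For example, a minimal sketch that reads that metadata back from the result (the content-type field is only an illustration):
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  httpResponseHandler: (response) => Promise.resolve({
    body: response.body,
    metadata: {contentType: response.headers['content-type']}
  })
}).then((result) => {
  result.forEach((resource) => {
    console.log(resource.url, resource.metadata); // metadata attached in httpResponseHandler
  });
}).catch(console.log);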
Class which saves Resources; it should have methods saveResource and errorCleanup which return Promises. Use it to save files wherever you need: to Dropbox, Amazon S3, an existing directory, etc. By default all files are saved in the local file system to the new directory passed in the directory option (see lib/resource-saver/index.js).
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  resourceSaver: class MyResourceSaver {
    saveResource (resource) {/* code to save file where you need */}
    errorCleanup (err) {/* code to remove all previously saved files in case of error */}
  }
}).then(console.log).catch(console.log);
Function called each time a resource is saved to the file system. The callback is called with the Resource object. Defaults to null - no callback will be called.
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  onResourceSaved: (resource) => {
    console.log(`Resource ${resource} was saved to fs`);
  }
});
Function called each time a resource's downloading/handling/saving to fs fails. The callback is called with the Resource object and the Error object. Defaults to null - no callback will be called.
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  onResourceError: (resource, err) => {
    console.log(`Resource ${resource} was not saved because of ${err}`);
  }
});
Boolean. If true, the scraper will set absolute urls for all failing sources; if false, it will leave them as is (which may cause the page to display incorrectly). Can also contain an array of sources to update (the structure is similar to sources). Defaults to false.
// update all failing img srcs with absolute url
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  sources: [{selector: 'img', attr: 'src'}],
  updateMissingSources: true
});
// download nothing, just update all img srcs with absolute urls
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  sources: [],
  updateMissingSources: [{selector: 'img', attr: 'src'}]
});
Number, maximum amount of concurrent requests. Defaults to Infinity.
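For example, a minimal sketch that limits the scraper to a handful of parallel requests (the number is only an illustration):
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  requestConcurrency: 5 // at most 5 requests in flight at a time
}).then(console.log).catch(console.log);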
Callback function, optional, includes following parameters:
- error: if error - Error object, if success - null
- result: if error - null, if success - array of Resource objects containing:
  - url: url of loaded page
  - filename: filename where page was saved (relative to directory)
  - children: array of children Resources
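For example, a minimal sketch that walks the callback result using the properties listed above:
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save'
}, (error, result) => {
  if (error) {
    return console.error(error);
  }
  result.forEach((resource) => {
    console.log(resource.url, resource.filename, resource.children.length);
  });
});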
This module uses debug to log events. To enable logs you should use the environment variable DEBUG. The next command will log everything from website-scraper:
export DEBUG=website-scraper*; node app.js
The module has different loggers for levels: website-scraper:error, website-scraper:warn, website-scraper:info, website-scraper:debug, website-scraper:log. Please read the debug documentation to find out how to include/exclude specific loggers.
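For example, following debug's convention of comma-separated patterns, the next command should log only errors and warnings from website-scraper (see the debug docs for the exact syntax):
export DEBUG=website-scraper:error,website-scraper:warn; node app.js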