WebPageTest Crawler
The WebPageTest Crawler crawls a website to collect URLs and then runs WebPageTest tests on them. A maximum crawl depth (level) and a URL limit can be configured.
Requires Node.js and npm.
1. Installing Packages
Once you have cloned the project, run npm install to install the dependencies.
npm install
2. Updating config values
There are three main config values:
- wpt_api_key - your WebPageTest API key. Get yours here
- level - integer; the maximum depth the crawler should crawl.
- url_limit - integer; the maximum number of URLs to be tested.
Note: crawling stops as soon as either limit is reached.
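For illustration, the three values might look like this; the exact file name and layout depend on how the project stores its config, so treat this as a sketch:

```
{
  "wpt_api_key": "YOUR_WPT_API_KEY",
  "level": 2,
  "url_limit": 50
}
```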
3. Adding an initial URLs txt file
You can add your initial set of URLs to the startingUrls.txt file, separating them with commas.
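For example, a startingUrls.txt seeding the crawl with two comma-separated URLs (the URLs here are placeholders):

```
https://www.example.com,https://www.example.com/blog
```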
4. Let's fire it up
Start the Node server by running npm start
npm start
Booyah! Once crawling and testing are complete, you'll have a report.csv file with performance details for the URLs crawled.
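As a quick sketch of post-processing, the report can be read with Python's standard csv module. The column names below (url, loadTime) are assumptions for illustration; the real headers depend on which metrics wpt-crawler writes.

```python
import csv

# Write a tiny sample report.csv so the sketch is self-contained.
# In practice this file is produced by the crawler; the columns
# here are hypothetical.
with open("report.csv", "w", newline="") as f:
    f.write("url,loadTime\nhttps://example.com,1200\n")

# Read the report back as a list of row dicts (header row assumed).
with open("report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print(len(rows))  # number of URLs tested
```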
Running as an npm module
You can run the project as an npm module as well.
1. Install as npm module
npm i https://github.com/abdulsuhail/wpt-crawler.git
2. Let's fire it up
npx wpt-crawler -k "wpt_api_key" -f "./startingUrls.txt"
3. To look up more options
npx wpt-crawler -h