
recipe-scraper

A NodeJS package for scraping recipes from the web.


Installation

npm install recipe-scraper

Usage

// import the module
const recipeScraper = require("recipe-scraper");

// enter a supported recipe url as a parameter - returns a promise
async function someAsyncFunc() {
  // ...
  let recipe = await recipeScraper("some.recipe.url");
  // ...
}

// using Promise chaining
recipeScraper("some.recipe.url").then(recipe => {
    // do something with recipe
  }).catch(error => {
    // do something with error
  });

Supported Websites

And many more! The sites listed above are handled by dedicated, per-site scrapers, but the package also works on any website that embeds Google's Recipe JSON-LD structured data.
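
For reference, here is a minimal sketch of the kind of JSON-LD block such sites embed (the field values are illustrative, not taken from a real site):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Example Pancakes",
  "recipeIngredient": ["2 cups flour", "1 cup milk"],
  "recipeInstructions": ["Mix the ingredients.", "Cook on a hot griddle."]
}
</script>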

Recipe Object

Depending on the recipe, certain fields may be left blank. All fields are represented as strings or arrays of strings. The name, ingredients, and instructions properties are required for schema validation.

{
    name: "",
    ingredients: [],
    instructions: [],
    tags: [],
    servings: "",
    image: "",
    time: {
      prep: "",
      cook: "",
      active: "",
      inactive: "",
      ready: "",
      total: ""
    }
}
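
As a quick sketch of consuming this object (the URL is a placeholder, and a successful scrape is assumed):

const recipeScraper = require("recipe-scraper");

recipeScraper("some.recipe.url").then(recipe => {
  // print the recipe name followed by each ingredient
  console.log(recipe.name);
  recipe.ingredients.forEach(ingredient => console.log(`- ${ingredient}`));
  // time fields are plain strings and may be empty
  console.log(`Total time: ${recipe.time.total || "not listed"}`);
});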

Error Handling

If no recipe is found at the given url, basic page info will be returned instead: title, image, and description.
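
A minimal sketch of detecting this fallback (this assumes the fallback object reuses the recipe fields above with ingredients left empty; the exact shape may vary):

recipeScraper("some.supported.url").then(result => {
  if (!result.ingredients || result.ingredients.length === 0) {
    // no full recipe was parsed; only basic page info is available
    console.log(`No recipe found, page title: ${result.name}`);
  }
});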

If the url provided is invalid and a domain is unable to be parsed, an error message will be returned.

recipeScraper("keyboard kitty").catch(error => {
  console.log(error.message);
  // => "Failed to parse domain"
});

If the page does not exist, or fetching it fails with some other HTTP error (status 400 or above), an error message will be returned.

recipeScraper("some.nonexistent.page").catch(error => {
  console.log(error.message);
  // => "No recipe found on page"
});

If a url on a supported site does not contain the sub-url required for a valid recipe page, an error message will be returned that includes the required sub-url.

recipeScraper("some.improper.url").catch(error => {
  console.log(error.message);
  // => "url provided must include '#subUrl'"
});
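
With async/await, the same errors can be caught in a try/catch block (a sketch using the error messages documented above):

const recipeScraper = require("recipe-scraper");

async function getRecipe(url) {
  try {
    return await recipeScraper(url);
  } catch (error) {
    // e.g. "Failed to parse domain" or "No recipe found on page"
    console.error(error.message);
    return null;
  }
}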

Bugs

Web scraping relies on the target website keeping its format; when a site changes its markup, the corresponding scraper needs to be updated. Please reach out if you are experiencing an issue.

Contributing

I welcome pull requests that keep the scrapers up to date or add new ones. I'm doing my best to keep this package maintained, and with your help that goal is much more achievable. Please add tests if you add a scraper. Thank you 😁