trafilatura: Scrapes the main text of web pages while preserving some structure

Description

Trafilatura seamlessly downloads, parses, and scrapes web page data: it can extract metadata, main body text and comments while preserving part of the text formatting and page structure. The output is converted to TXT, CSV, XML & TEI-XML.

Distinguishing between a whole page and its essential parts helps alleviate many quality problems related to web texts: it removes the noise caused by recurring elements (headers and footers, ads, links/blogrolls).

The extraction has to be precise enough not to miss texts or discard valid documents, robust, and reasonably fast. It is designed to run in production on millions of web documents.

Features

  • Seamless download and extraction: URLs, HTML files or parsed HTML trees as input. Output in plain text (minimal formatting), CSV (with metadata, tab-separated values) or XML format (for metadata and structure)
  • Focus on main text and/or comments, with structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting (experimental)
  • Extraction of metadata
  • Robust extraction, with the generic readability and jusText algorithms as fallbacks; reasonably efficient processing thanks to lxml
  • Optional language detection on the extracted content

Evaluation and alternatives

For more detailed results see the evaluation page and evaluation script. To reproduce the tests just clone the repository, install all necessary packages and run the evaluation script with the data provided in the tests directory.

300 documents, 869 text and 878 boilerplate segments (2020-03-19)
Python Package                       Precision  Recall  Accuracy  F-Score  Time
baseline (text markup)               0.726      0.776   0.742     0.750    1.14
justext 2.2.0 (German stoplist)      0.849      0.529   0.719     0.652    6.37
newspaper3k 0.2.8                    0.923      0.591   0.772     0.721    14.80
goose3 3.1.6                         0.957      0.640   0.807     0.767    21.54
boilerpy3 1.0.2 (article mode)       0.841      0.734   0.799     0.784    5.65
dragnet 2.0.4                        0.909      0.722   0.825     0.804    3.64
readability-lxml 0.7.1               0.928      0.743   0.844     0.826    6.59
news-please 1.4.25                   0.926      0.747   0.844     0.827    70.81
trafilatura 0.4                      0.914      0.869   0.894     0.891    4.87
trafilatura 0.4 (+ fallback)         0.925      0.904   0.916     0.914    9.94

Installation

Chiefly with the Python package manager pip: pip install --upgrade trafilatura.

For more details please read the installation documentation.

Usage

With Python or on the command-line.

In a nutshell, with Python:

>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
>>> trafilatura.extract(downloaded)
# outputs main content and comments as plain text ...

On the command-line:

$ trafilatura -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# outputs main content and comments as plain text ...

For more information please refer to the usage documentation.

License

trafilatura is distributed under the GNU General Public License v3.0

GPL and free software licensing: What's in it for business?

Going further

Online documentation: trafilatura.readthedocs.io

Trafilatura: Italian word for wire drawing.

Roadmap

  • [X] Metadata integration
  • [-] Preservation of in-line text formatting (bold, italic, etc.)
  • [-] Language detection on the extracted content
  • [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache
  • [-] XML output compatible with the recommendations of the Text Encoding Initiative
  • [ ] Configuration and extraction parameters

Contributing

Contributions are welcome!

Feel free to file bug reports on the issues page.

Thanks to the contributors who have submitted features and bugfixes!

Author

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). A significant challenge lies in the ability to extract and pre-process web texts to meet scientific expectations: web corpus construction involves numerous design decisions, and this software package can facilitate text collection and enhance corpus quality.

You can contact me via my contact page or GitHub.