Trafilatura seamlessly downloads, parses, and scrapes web page data: it can extract metadata, the main body text, and comments while preserving part of the text formatting and page structure. The output can be converted to TXT, CSV, XML, or TEI-XML.
Distinguishing a page's essential parts from the rest of the document helps alleviate many quality problems related to web texts by filtering out the noise of recurring elements (headers and footers, ads, links/blogroll).
The extractor has to be precise enough not to miss texts or discard valid documents, robust, and reasonably fast. It is designed to run in production on millions of web documents.
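The idea of filtering out recurring noise can be illustrated with a deliberately naive, stdlib-only sketch (nothing like the library's actual heuristics): collect text while skipping tags that typically hold boilerplate.

```python
from html.parser import HTMLParser

# Tags that typically contain boilerplate rather than body text
BOILERPLATE_TAGS = {"header", "footer", "nav", "aside", "script", "style"}

class MainTextExtractor(HTMLParser):
    """Collect text nodes while skipping typical boilerplate containers."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # >0 while inside a boilerplate element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in BOILERPLATE_TAGS:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in BOILERPLATE_TAGS and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

parser = MainTextExtractor()
parser.feed("<html><body><nav>Menu</nav><p>Main text.</p>"
            "<footer>(c) 2020</footer></body></html>")
text = " ".join(parser.chunks)
```

Tag-name blocklists like this break down quickly on real pages, which is why trafilatura combines structural cues with fallback algorithms instead.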
- Seamless download and extraction: URLs, HTML files or parsed HTML trees as input. Output in plain text (minimal formatting), CSV (with metadata, tab-separated values) or XML format (for metadata and structure)
- Focus on main text and/or comments, with structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting (experimental)
- Extraction of metadata
- Robust extraction, with the generic readability and jusText algorithms as fallbacks; reasonably efficient processing thanks to lxml
- Optional language detection on the extracted content
For more detailed results see the evaluation page and evaluation script. To reproduce the tests, clone the repository, install all necessary packages, and run the evaluation script with the data provided in the tests directory.
300 documents, 869 text and 878 boilerplate segments (2020-03-19)

| Python Package | Precision | Recall | Accuracy | F-Score | Time |
|---|---|---|---|---|---|
| baseline (text markup) | 0.726 | 0.776 | 0.742 | 0.750 | 1.14 |
| justext 2.2.0 (German stoplist) | 0.849 | 0.529 | 0.719 | 0.652 | 6.37 |
| newspaper3k 0.2.8 | 0.923 | 0.591 | 0.772 | 0.721 | 14.80 |
| goose3 3.1.6 | 0.957 | 0.640 | 0.807 | 0.767 | 21.54 |
| boilerpy3 1.0.2 (article mode) | 0.841 | 0.734 | 0.799 | 0.784 | 5.65 |
| dragnet 2.0.4 | 0.909 | 0.722 | 0.825 | 0.804 | 3.64 |
| readability-lxml 0.7.1 | 0.928 | 0.743 | 0.844 | 0.826 | 6.59 |
| news-please 1.4.25 | 0.926 | 0.747 | 0.844 | 0.827 | 70.81 |
| trafilatura 0.4 | 0.914 | 0.869 | 0.894 | 0.891 | 4.87 |
| trafilatura 0.4 (+ fallback) | 0.925 | 0.904 | 0.916 | 0.914 | 9.94 |
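The F-scores above follow the standard definition (the harmonic mean of precision and recall), so each row can be checked from its first two columns:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Baseline row from the table above
baseline_f = f_score(0.726, 0.776)   # rounds to 0.750

# trafilatura 0.4 (+ fallback) row
trafilatura_f = f_score(0.925, 0.904)  # rounds to 0.914
```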
Chiefly with the Python package manager pip: `pip install --upgrade trafilatura`.
For more details please read the installation documentation.
With Python or on the command-line.
In a nutshell, with Python:
>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
>>> trafilatura.extract(downloaded)
# outputs main content and comments as plain text ...
On the command-line:
$ trafilatura -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# outputs main content and comments as plain text ...
For more information please refer to the usage documentation.
trafilatura is distributed under the GNU General Public License v3.0
GPL and free software licensing: What's in it for business?
Online documentation: trafilatura.readthedocs.io
Trafilatura: Italian word for wire drawing.
- In order to gather web documents, it can be useful to download portions of a website programmatically; here is how to use sitemaps to crawl websites
- Loading web page content with Trafilatura (tutorial video in German by Simon Meier-Vieracker)
- Downloading web data & preparing and managing data (tutorials in German by Noah Bubenhofer)
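The sitemap approach mentioned above boils down to fetching a site's sitemap XML and collecting its `<loc>` entries. A minimal stdlib sketch, using an inline example sitemap instead of a live download (trafilatura's own sitemap handling is more involved):

```python
import xml.etree.ElementTree as ET

# Example sitemap following the sitemaps.org protocol (hypothetical URLs)
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.org/post-1</loc></url>
  <url><loc>https://example.org/post-2</loc></url>
</urlset>"""

def extract_urls(sitemap_xml):
    """Return all page URLs listed in a sitemap document."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", ns)]

urls = extract_urls(SITEMAP)
```

Each collected URL can then be passed to the download and extraction steps shown earlier.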
- [X] Metadata integration
- [-] Preservation of in-line text formatting (bold, italic, etc.)
- [-] Language detection on the extracted content
- [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache
- [-] XML output compatible with the recommendations of the Text Encoding Initiative
- [ ] Configuration and extraction parameters
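The LRU-based duplicate detection item above can be sketched with an ordered hash cache; this is an illustrative design, not trafilatura's actual implementation:

```python
from collections import OrderedDict

class LRUDeduplicator:
    """Remember hashes of recently seen segments, evicting the oldest."""

    def __init__(self, maxsize=1024):
        self.maxsize = maxsize
        self.cache = OrderedDict()

    def is_duplicate(self, segment):
        key = hash(segment.strip())
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh recency on a hit
            return True
        self.cache[key] = True
        if len(self.cache) > self.maxsize:
            self.cache.popitem(last=False)  # evict least recently used
        return False

# Tiny cache to make eviction observable
dedup = LRUDeduplicator(maxsize=2)
```

Bounding the cache keeps memory constant when streaming millions of documents, at the cost of missing duplicates that recur further apart than the cache size.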
Contributions are welcome!
Feel free to file bug reports on the issues page.
Thanks to these contributors who submitted features and bugfixes:
This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). A significant challenge resides in the ability to extract and pre-process web texts to meet scientific expectations: web corpus construction involves numerous design decisions, and this software package can help facilitate collection and enhance corpus quality.
- Barbaresi, A. "Generic Web Content Extraction with Open-Source Software", Proceedings of KONVENS 2019, Kaleidoscope Abstracts, 2019.
- Barbaresi, A. "Efficient construction of metadata-enhanced web corpora", Proceedings of the 10th Web as Corpus Workshop (WAC-X), 2016.
You can contact me via my contact page or GitHub.