
WikiExtractor

WikiExtractor.py is a Python script that extracts and cleans text from a Wikipedia database dump.

The tool is written in Python and requires Python 2.7; it depends on no additional libraries.

For further information, see the project Home Page or the Wiki.

Wikipedia Cirrus Extractor

cirrus-extractor.py is a version of the script that performs extraction from a Wikipedia Cirrus dump. Cirrus dumps contain text with templates already expanded.

Cirrus dumps are available at https://dumps.wikimedia.org/other/cirrussearch/.

Details

WikiExtractor performs template expansion by preprocessing the whole dump and extracting template definitions.

In order to speed up processing:

  • multiprocessing is used to deal with articles in parallel (see the sketch after this list)
  • a cache of parsed templates is kept (only useful for repeated extractions)
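A minimal sketch of this multiprocessing pattern follows; the extract worker and the sample data are hypothetical placeholders, not WikiExtractor's actual code:

import multiprocessing

def extract(page):
    # Hypothetical worker: the real tool cleans one page's wikitext
    # and expands its templates here.
    title, text = page
    return title, text.strip()

if __name__ == "__main__":
    pages = [("Page %d" % i, "  body %d  " % i) for i in range(8)]
    pool = multiprocessing.Pool()  # one worker per CPU core, matching --processes' default
    try:
        # imap yields results in input order while workers run in parallel
        for title, text in pool.imap(extract, pages):
            print("%s: %s" % (title, text))
    finally:
        pool.close()
        pool.join()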

Installation

The script may be invoked directly; however, it can also be installed by running:

(sudo) python setup.py install

Usage

The script is invoked with a Wikipedia dump file as an argument. The output is stored in several files of similar size in a given directory. Each file contains several documents in the document format shown below.
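For reference, each document in the output is wrapped in a <doc> element along the following lines (the id, url, title, and text here are illustrative):

<doc id="12" url="https://en.wikipedia.org/wiki?curid=12" title="Anarchism">
Anarchism

Anarchism is a political philosophy that ...
</doc>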

usage: WikiExtractor.py [-h] [-o OUTPUT] [-b n[KMG]] [-c] [--html] [-l]
                        [-ns ns1,ns2] [-s] [--templates TEMPLATES]
                        [--no-templates] [--processes PROCESSES] [-q] [--debug]
                        [-a] [-v]
                        input

positional arguments:
  input                 XML wiki dump file

optional arguments:
  -h, --help            show this help message and exit
  --processes PROCESSES
                        number of processes to use (default: number of CPU cores)

Output:
  -o OUTPUT, --output OUTPUT
                        directory in which to store the extracted files (or '-'
                        to dump to stdout)
  -b n[KMG], --bytes n[KMG]
                        maximum bytes per output file (default 1M)
  -c, --compress        compress output files using bzip2

Processing:
  --html                produce HTML output, subsumes --links
  -l, --links           preserve links
  --lists               preserve lists
  -ns ns1,ns2, --namespaces ns1,ns2
                        accepted namespaces
  --templates TEMPLATES
                        use or create file containing templates
  --no-templates        do not expand templates
  --escapedoc           escape the contents of the output <doc>...</doc>
  --no-doc              omit the <doc> and </doc> lines from the output
  --no-title            omit article titles from the output

Special:
  -q, --quiet           suppress reporting progress info
  --debug               print debug info
  -a, --article         analyze a file containing a single article (debug option)
  -v, --version         print program version
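
For example, the following command (dump file name illustrative) extracts a dump into 500 KB compressed files under the directory extracted:

python WikiExtractor.py -o extracted -b 500K -c enwiki-latest-pages-articles.xml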

Saving templates to a file speeds up subsequent extractions, provided the template definitions have not changed.
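
For example (file names illustrative), the first run below creates templates.txt while extracting, and later runs over the same dump reload the saved definitions instead of re-scanning for them:

python WikiExtractor.py --templates templates.txt -o extracted enwiki-latest-pages-articles.xml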

Option --no-templates significantly speeds up the extractor, avoiding the cost of expanding MediaWiki templates.
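
When expanded templates are not needed, a faster run might look like this (again with an illustrative dump name):

python WikiExtractor.py --no-templates --processes 8 -o extracted enwiki-latest-pages-articles.xml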

For further information, visit the documentation.