# MediaWiki Scraper

MediaWiki Scraper can archive wikis from the largest to the tiniest.

MediaWiki Scraper is an ongoing project to port the legacy `wikiteam` toolset to Python 3 and PyPI to make it more accessible for today's archivers. Most of the focus has been on the core `dumpgenerator` tool, but Python 3 versions of the other `wikiteam` tools may be added over time.
## MediaWiki Scraper Toolset

MediaWiki Scraper is a set of tools for archiving wikis. The main general-purpose module of MediaWiki Scraper is `dumpgenerator`, which can download XML dumps of MediaWiki sites that can then be parsed or redeployed elsewhere.
## Python Environment

MediaWiki Scraper requires Python 3.8 or later (but less than 4.0), though you may be able to get it to run with earlier versions of Python 3. On recent versions of Linux and macOS, Python 3.8 should come preinstalled, but on Windows you will need to install it from python.org.
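You can check which version of Python is already installed by running the following in a terminal (a quick check; on Windows the interpreter is usually invoked as `python` or `py` rather than `python3`):

```bash
python3 --version
```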
MediaWiki Scraper has been tested on Linux, macOS, Windows, and Android. If you are connecting to Linux or macOS via `ssh`, you can continue using the `bash` or `zsh` prompt in the same terminal; if you are starting from a desktop environment and don't already have a preferred terminal emulator, you can try one of the following.
NOTE: You may need to update and pre-install dependencies in order for MediaWiki Scraper to work properly. Shell commands for these dependencies appear below each item in the list, and a quick verification command follows the list. (Also note that while installing and running MediaWiki Scraper itself should not require administrative privileges, installing dependencies usually will.)
- On desktop Linux you can use the default terminal application, such as Konsole or GNOME Terminal.

  **Linux Dependencies**

  While most Linux distributions have Python 3 preinstalled, if you are cloning MediaWiki Scraper rather than downloading it directly you may need to install `git`. On Debian, Ubuntu, and the like:

  ```bash
  sudo apt update && sudo apt upgrade && sudo apt install git
  ```

  (On Fedora, Arch, etc., use `dnf`, `pacman`, etc., instead.)
- On macOS you can use the built-in application Terminal, which is found in Applications/Utilities.

  **macOS Dependencies**

  While macOS has Python 3 preinstalled, if you are cloning MediaWiki Scraper rather than downloading it directly and you are using an older version of macOS, you may need to install `git`. If `git` is not preinstalled, macOS will prompt you to install it the first time you run the command. Therefore, to check whether you have `git` installed or to install it, simply run `git` (with no arguments) in Terminal:

  ```bash
  git
  ```

  If `git` is already installed, it will print its usage instructions. If it is not, the command will pop up a window asking whether you want to install Apple's command line developer tools; clicking "Install" in the popup window will install `git`.
- On Windows 10 or Windows 11 you can use Windows Terminal.

  **Windows Dependencies**

  The latest version of Python is available from python.org. Once installed, Python will be available from any Command Prompt or PowerShell session. Optionally, adding `C:\Program Files\Git\usr\bin` to the PATH environment variable will make some useful Linux commands and utilities available in Command Prompt.

  If you are already using the Windows Subsystem for Linux, you can follow the Linux instructions above. If you don't want to install a full WSL distribution, Git for Windows provides Bash emulation, so you can use it as a more lightweight option instead. Git Bash also provides some useful Linux commands and utilities.

  When installing Python (from python.org), be sure to check "Add Python to PATH" so that installed Python scripts are accessible from any location. If for some reason installed Python scripts such as `pip` are not available from any location, you can add Python to the PATH environment variable using the instructions here.

  While doing so should not be necessary if you follow the instructions further down and install MediaWiki Scraper using `pip`, if you'd prefer that Windows store installed Python scripts somewhere other than the default Python folder under `%appdata%`, you can also add your preferred alternative path, such as `C:\Program Files\Python3\Scripts\` or a subfolder of My Documents. (You will need to restart any terminal sessions for this to take effect.)

  Whenever you'd like to run a Bash session, you can open a Bash prompt from any folder in Windows Explorer by right-clicking and choosing the option from the context menu. (For some purposes you may wish to run Bash as an administrator.) This way you can open a Bash prompt and clone the MediaWiki Scraper repository in one location, and later open another Bash prompt and run MediaWiki Scraper to dump a wiki wherever else you'd like, without having to browse to the directory manually in Bash.
- On Android you can use Termux.

  **Termux Dependencies**

  ```bash
  pkg update && pkg upgrade && pkg install git libxslt python
  ```
- On iOS you can use iSH.

  **iSH Dependencies**

  ```bash
  apk update && apk upgrade && apk add git py3-pip
  ```

  Note: iSH may automatically quit if your iOS device goes to sleep, and it may lose its state if you switch to another app. You can disable auto-sleep while iSH is running by clicking the gear icon and toggling "Disable Screen Dimming". (You may wish to connect your device to a charger while running iSH.)
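Once the dependencies for your platform are installed, you can confirm that `git` and `pip` are available by asking each for its version (a minimal sanity check; on Windows, substitute `python` for `python3`):

```bash
git --version
python3 -m pip --version
```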
## Downloading and installing dumpgenerator

The Python 3 port of the `dumpgenerator` module of `wikiteam3` is largely functional and can be installed from a downloaded or cloned copy of this repository.
There are two versions of these instructions:
- If you just want to use a version that mostly works
- If you want to follow my progress and help me test my latest commit
If you run into a problem with the version that mostly works, you can open an Issue. Be sure to include the following:
- The operating system you're using
- What command you ran that didn't work
- What output was printed to your terminal
### 1. Downloading and installing MediaWiki Scraper

In whatever folder you use for cloned repositories:

```bash
git clone https://github.com/mediawiki-client-tools/mediawiki-scraper
cd mediawiki-scraper
poetry update && poetry install && poetry build
pip install --force-reinstall dist/*.whl
```
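To confirm that the installation succeeded, you can ask the newly installed command for its help text (the same option is described in more detail further down):

```bash
dumpgenerator --help
```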
### 2. Running dumpgenerator for whatever purpose you need

```bash
dumpgenerator [args]
```
### 3. Uninstalling the package and deleting the cloned repository when you're done

```bash
pip uninstall wikiteam3
rm -fr [cloned_MediaWiki Scraper_folder]
```
### 4. Updating MediaWiki Scraper

Note: Re-run the following steps to reinstall each time the MediaWiki Scraper branch is updated.

```bash
git pull
poetry update && poetry install && poetry build
pip install --force-reinstall dist/*.whl
```
### 5. Manually building and installing MediaWiki Scraper

If you'd like to manually build and install MediaWiki Scraper from a cloned or downloaded copy of this repository, run the following commands from the downloaded base directory:

```bash
curl -sSL https://install.python-poetry.org | python3 -
poetry update && poetry install && poetry build
pip install --force-reinstall dist/*.whl
```
### 6. Running the test suite

To run the test suite, run:

```bash
test-dumpgenerator
```
### 7. Switching branches

```bash
git checkout --track origin/python3
```
## Using dumpgenerator (once installed)

After installing MediaWiki Scraper using `pip`, you should be able to use the `dumpgenerator` command from any local directory.
For basic usage, you can run `dumpgenerator` in the directory where you'd like the download to be.

For a brief summary of the `dumpgenerator` command-line options:

```bash
dumpgenerator --help
```

Several examples follow.

Note: the `\` and line breaks in the examples below are for legibility in this documentation. Run `dumpgenerator` with all of its arguments on a single line, separated by single spaces.
### Downloading a wiki with complete XML history and images

```bash
dumpgenerator http://wiki.domain.org --xml --images
```
### Manually specifying api.php and/or index.php

If the script can't find the `api.php` and/or `index.php` paths on its own, you can provide them:

```bash
dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images

dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
    --xml --images
```
If you only want the XML histories, just use `--xml`. For only the images, just `--images`. For only the current version of every page, `--xml --curonly`.
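For example, to download only the current revision of each page from the same hypothetical wiki used above:

```bash
dumpgenerator http://wiki.domain.org --xml --curonly
```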
### Resuming an incomplete dump

```bash
dumpgenerator \
    --api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump
```

In the above example, `--path` is only necessary if the download path is not the default. `dumpgenerator` will also ask you if you want to resume if it finds an incomplete dump in the path where it is downloading.
## Using launcher

`launcher` is a way to download a large list of wikis with a single invocation.

Usage:

```bash
launcher path-to-apis.txt [--7z-path path-to-7z] [--generator-arg=--arg] ...
```

`launcher` will download a complete dump (XML and images) for each wiki in the list, then compress the dump into two 7z files: `history` (containing only metadata and the XML history of the wiki) and `wikidump` (containing metadata, XML, and images). This is the format suitable for upload to a WikiTeam item on the Internet Archive.

`launcher` will resume incomplete dumps as appropriate and will not attempt to download wikis that have already been downloaded (as determined by the files existing in the working directory).

Each wiki's dump is stored in files whose names contain a stripped version of the URL and the date the dump was started.
`path-to-apis.txt` is a path to a file that contains a list of URLs to the `api.php` of each wiki, one per line.
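As an illustration, the list file could be created like this (the wiki URLs are hypothetical placeholders):

```bash
cat > apis.txt << 'EOF'
https://wiki.example.org/w/api.php
https://anotherwiki.example.net/api.php
EOF

launcher apis.txt
```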
By default, a `7z` executable is found on PATH. The `--7z-path` argument can be used to use a specific executable instead.
The `--generator-arg` argument can be used to pass through arguments to the `generator` instances that are spawned. For example, one can use `--generator-arg=--xmlrevisions` to use the modern MediaWiki API for retrieving revisions, or `--generator-arg=--delay=2` to use a delay of 2 seconds between requests.
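Putting these options together, an invocation might look like the following sketch (it assumes, based on the trailing `...` in the usage line above, that `--generator-arg` can be repeated; the 7z path is an example):

```bash
launcher apis.txt --7z-path /usr/bin/7z \
    --generator-arg=--xmlrevisions --generator-arg=--delay=2
```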
## Using uploader

`uploader` is a way to upload a large set of already-generated wiki dumps to the Internet Archive with a single invocation.

Usage:

```bash
uploader [-pd] [-pw] [-a] [-c COLLECTION] [-wd WIKIDUMP_DIR] [-u] [-kf KEYSFILE] [-lf LOGFILE] listfile
```

For the positional parameter `listfile`, `uploader` expects a path to a file that contains a list of URLs to the `api.php` of each wiki, one per line (exactly the same as `launcher`).

`uploader` will search a configurable directory for files with the names generated by `launcher` and upload any that it finds to an Internet Archive item. The item will be created if it does not already exist.
Named arguments (short and long versions):

- `-pd`, `--prune_directories`: After uploading, remove the raw directory generated by `launcher`
- `-pw`, `--prune_wikidump`: After uploading, remove the `wikidump.7z` file generated by `launcher`
- `-c`, `--collection`: Assign the Internet Archive items to the specified collection
- `-a`, `--admin`: Used only if you are an admin of the WikiTeam collection on the Internet Archive
- `-wd`, `--wikidump_dir`: The directory to search for dumps. Defaults to `.`.
- `-u`, `--update`: Update the metadata on an existing Internet Archive item
- `-kf`, `--keysfile`: Path to a file containing Internet Archive API keys. Should contain two lines: the access key, then the secret key. Defaults to `./keys.txt`.
- `-lf`, `--logfile`: Where to store a log of uploaded files (to reduce duplicate work). Defaults to `uploader-X.txt`, where `X` is the final part of the `listfile` path.
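As a sketch of a typical invocation (the dump directory and list file are examples; `-kf` defaults to `./keys.txt` and `-wd` to the current directory):

```bash
uploader -kf ./keys.txt -wd ./dumps apis.txt
```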
## Checking dump integrity

If you want to check the XML dump integrity, type this into your command line to count title, page, and revision XML tags:

```bash
grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; grep \
    "</page>" *.xml -c; grep -E '<revision(.*?)>' *.xml -c; grep "</revision>" *.xml -c
```
You should see something similar to this (not the actual numbers). The first three numbers should be the same, and the last two should be the same as each other:

```
580
580
580
5677
5677
```
If your first three numbers or your last two numbers differ, your XML dump is corrupt (it contains one or more unfinished `</page>` or `</revision>` elements). This is not common in small wikis, but large or very large wikis may fail at this due to XML pages being truncated while exporting and merging. The solution is to remove the XML dump and download it again, which is tedious and can fail again.
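The same comparison can be scripted. This is a minimal sketch that assumes the dump's XML lives in `*.xml` files in the current directory and reuses the patterns from the command above:

```bash
# Count opening and closing page/revision tags and make sure they agree.
titles=$(grep -Ec '<title(.*?)>' *.xml)
pages_open=$(grep -Ec '<page(.*?)>' *.xml)
pages_close=$(grep -c '</page>' *.xml)
revs_open=$(grep -Ec '<revision(.*?)>' *.xml)
revs_close=$(grep -c '</revision>' *.xml)

if [ "$titles" = "$pages_open" ] && [ "$pages_open" = "$pages_close" ] \
    && [ "$revs_open" = "$revs_close" ]; then
    echo "Tag counts are consistent"
else
    echo "Possible truncation: titles=$titles pages=$pages_open/$pages_close revisions=$revs_open/$revs_close"
fi
```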
## Contributors

WikiTeam is the Archive Team [GitHub] subcommittee on wikis. It was founded and originally developed by Emilio J. Rodríguez-Posada, a Wikipedia veteran editor and amateur archivist. Thanks to everyone who has helped, especially: Federico Leva, Alex Buie, Scott Boyd, Hydriz, Platonides, Ian McEwen, Mike Dupont, balr0g, and PiRSquared17.

The MediaWiki Scraper Python 3 initiative is currently being led by Elsie Hupp, with contributions from Victor Gambier, Thomas Karcher, Janet Cobb, yzqzss, NyaMisty, and Rob Kam.