# wmfdata

wmfdata is a Python package for analyzing Wikimedia data on Wikimedia's non-public analytics clients.
Wmfdata's most popular feature is SQL data access: the `hive.run`, `spark.run`, `presto.run`, and `mariadb.run` functions allow you to run commands using these different query engines and receive the results as a Pandas dataframe, with just a single line of code.
Other features include:
- Easy generation of Spark sessions using `spark.create_session` (or `spark.create_custom_session` if you want to fine-tune the settings)
- Loading CSV or TSV files into Hive using `hive.load_csv`
- Turning cryptic Kerberos-related errors into clear reminders to renew your Kerberos credentials
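For example, creating a Spark session might look like this sketch (the custom-session parameter names shown are assumptions and may differ from the actual `spark.create_custom_session` signature; check its docstring before relying on them):

```python
from wmfdata import spark

# Create a Spark session with sensible default settings.
session = spark.create_session()

# Or fine-tune the settings yourself; the keyword arguments below are
# illustrative assumptions, not a guaranteed API.
custom_session = spark.create_custom_session(
    spark_config={"spark.driver.memory": "4g"},
)
```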
For an introduction to using Wmfdata, see the quickstart notebook.
Wmfdata comes preinstalled in the Conda environments used on the analytics clients.
To upgrade to a newer version, use:

```
pip install --upgrade git+https://github.com/wikimedia/wmfdata-python.git@release
```
Tasks related to Wmfdata are tracked in Wikimedia Phabricator in the Wmfdata-Python project.
The Wikimedia Foundation's Product Analytics and Data Engineering teams are joint code stewards of Wmfdata. Data Engineering is the ultimate steward of the data access and analytics infrastructure interface portions, while Product Analytics is the ultimate steward of the analyst ergonomics portions. The current maintainers of Wmfdata are nshahquinn, ottomata, milimetric, nettrom, and xabriel.
If you're a hero who would like to contribute code, we welcome pull requests here on GitHub.