
"Defoe" - analysis of historical books and newspapers data

This repository contains code to analyse historical books and newspapers datasets using Apache Spark.


This repository was forked from the previous defoe repository in November 2019 to add new functionality to defoe to support:

Both projects have used defoe as the backend to perform a range of text mining queries across several digitised historical text collections (see below).

To learn more about the outcomes of the CDCS Text and Data Mining Lab using defoe, we recommend checking the defoe CDCS_Text_Mining_Lab repository.

Note: Since November 2019, all new defoe queries, functionality and support have been stored in this repository.

Developer (from November 2019): Rosa Filgueira (EPCC)


Supported datasets

Defoe already supports several datasets. In order to query a dataset, defoe needs a list of the files and/or directories that make up the dataset. Many of the lists used so far can be found under the others directory. These files need to be modified to point at the paths where the data is stored on your system; an example is sketched below.
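
A minimal sketch of such a data file, with one path per line; the directory and file names below are placeholders and must be replaced with real paths on your system:

    /mnt/data/blbooks/zipfiles/book_0001.zip
    /mnt/data/blbooks/zipfiles/book_0002.zip
    /mnt/data/blbooks/zipfiles/book_0003.zip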

British Library Books

This dataset consists of ~1TB of digitised versions of ~68,000 books from the 16th to the 19th centuries. The books have been scanned into a collection of XML documents. Each book has one XML document per page plus one XML document for metadata about the book as a whole. The XML documents for each book are held within a compressed ZIP file. Each ZIP file holds the XML documents for a single book (the exception is 1880-1889's 000000037_0_1-42pgs__944211_dat.zip, which holds the XML documents for 2 books). These ZIP files occupy ~224GB.
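
A minimal sketch, using Python's standard zipfile module, of inspecting this per-book layout; the ZIP file name is the one mentioned above and is assumed to be in the current directory:

    import zipfile

    # Per-book ZIP from the dataset (assumed to be in the current directory).
    with zipfile.ZipFile("000000037_0_1-42pgs__944211_dat.zip") as zf:
        for name in zf.namelist():
            # Expect one ALTO XML document per page (e.g. ALTO/000000037_000005.xml)
            # plus one *_metadata.xml document describing the book as a whole.
            print(name)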

This dataset is available under an open, public domain licence. See Datasets for content mining and BL Labs Flickr Data: Book data and tag history (Dec 2013 - Dec 2014). For links to the data itself, see Digitised Books largely from the 19th Century. The data is provided by Gale, a division of CENGAGE.

British Library Newspapers

This dataset consists of ~1TB of digitised versions of newspapers from the 18th to the early 20th century. Each newspaper has an associated folder of XML documents where each XML document corresponds to a single issue of the newspaper. Each XML document conforms to a British Library-specific XML schema.

This dataset is available, under licence, from Gale, a division of CENGAGE. The dataset is in 5 parts, e.g. Part I: 1800-1900. For links to all 5 parts, see British Library Newspapers.

Times Digital Archive

The code can also handle the Times Digital Archive (TDA).

This dataset is available, under licence, from Gale, a division of CENGAGE.

The code was used with papers from 1785-2009.

Find My Past Newspapers

This dataset is available, under licence, from Find My Past. To run queries with this dataset we can choose to use either of the following models (see the invocation sketch after this list):

  • ALTO model: for running queries at page level. These are the same queries as those used for the BL books.
  • FMP model: for running queries at article level.
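
A hedged sketch of selecting the model at query time, assuming the usual defoe invocation via spark-submit with the model name as a positional argument (lowercase "alto"/"fmp"); the data file, query name, query configuration file, results file and core count are placeholders:

    # Page-level queries via the ALTO model:
    spark-submit --py-files defoe.zip defoe/run_query.py \
        fmp_data.txt alto <QUERY_NAME> <QUERY_CONFIG_FILE> -r results.yml -n 4

    # Article-level queries via the FMP model, over the same data file:
    spark-submit --py-files defoe.zip defoe/run_query.py \
        fmp_data.txt fmp <QUERY_NAME> <QUERY_CONFIG_FILE> -r results.yml -n 4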

Papers Past New Zealand and Pacific newspapers

Papers Past provides digitised New Zealand and Pacific newspapers from the 19th and 20th centuries.

Data can be accessed via API calls which return search results in the form of XML documents. Each XML document holds one or more articles.
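
A minimal sketch of extracting articles from one such search-result document, using Python's standard library; the element names (article, title) are hypothetical and must be checked against the XML the API actually returns:

    import xml.etree.ElementTree as ET

    # Placeholder file name; in practice this XML is the body of an API response.
    tree = ET.parse("search_results.xml")

    # Hypothetical element names, for illustration only.
    for article in tree.getroot().iter("article"):
        print(article.findtext("title", default="(untitled)"))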

This dataset is available, under licence, from Papers Past.

National Library of Scotland (NLS) digital collections

The National Library of Scotland provides several digitised collections, such as:

Furthermore, we have created 4 Knowledge Graphs to represent the previous NLS digital collections. Details of those KGs are available here.

Note that all collections offered by NLS use the same XML and METS format. Therefore, we can use the defoe NLS model to query any of those collections.

See copyright restrictions


Get started

Set up (local):

Set up (Urika):

Set up (Cirrus - HPC Cluster):

Set up (VM):

Run queries:

Available queries:

Note: If we have an RDF KG of a collection, we have to modify the sparql_data.txt file. Details of the available KGs are here:

Developers:


Origins and acknowledgements

British Library Books analysis code

The code to analyse the British Library Books dataset has its origins in the first phase of 'Enabling Complex Analysis of Large Scale Digital Collections', a project funded by the Jisc Research Data Spring in 2015.

The project team included: Melissa Terras (UCL), James Baker (British Library), David Beavan (UCL), James Hetherington (UCL), Martin Zaltz Austwick (UCL), Oliver Duke-Williams (UCL), Will Finley (University of Sheffield), Helen O'Neill (UCL), Anne Welsh (UCL).

The code originated from the GitHub repository UCL-dataspring/cluster-code:

Times Digital Archive and British Library Newspapers analysis code

The code to analyse the Times Digital Archive and British Library Newspapers datasets has its origins in code developed by UCL to analyse the Times Digital Archive. This work took place from 2016 to 2018.

The project team included: James Hetherington (UCL), Raquel Alegre (UCL), Roma Klapaukh (UCL).

The code originated from the GitHub repository UCL/i_newspaper_rods:

Analysing humanities data using Cray Urika-GX

Both of the above codes were updated and extended by EPCC as part of the Research Engineering Group of The Alan Turing Institute. The work focused on running both codes on the Alan Turing Institute Cray Urika-GX Service and analysing the British Library Books, British Library Newspapers and Papers Past New Zealand and Pacific newspapers datasets.

This work was done in conjunction with Melissa Terras, College of Arts, Humanities and Social Sciences (CAHSS), The University of Edinburgh. The work was funded by Scottish Enterprise as part of the Alan Turing Institute-Scottish Enterprise Data Engineering Program. This work ran from 2018 to 2019 and is ongoing at present, using this repository.

The project team includes: Rosa Filgueira (EPCC), Mike Jackson (EPCC), Anna Roubickova (EPCC).

The code originated from the GitHub repositories:

Living With Machines

The code to analyse the Find My Past Newspapers dataset and to support data held as blobs on Azure was developed by David Beavan (The Alan Turing Institute) as part of Living With Machines, a project funded by UKRI's Strategic Priorities Fund and led by the Arts and Humanities Research Council (AHRC). Living With Machines runs from 2018 to 2023 and is ongoing at present, using this repository.

The development team includes: David Beavan (Alan Turing Institute), Rosa Filgueira (EPCC), Mike Jackson (EPCC).

The code originated from the GitHub repositories:


Name

The code is called "defoe" after Daniel Defoe, a writer, journalist and pamphleteer of the 17th and 18th centuries.


Copyright and licence

Copyright (c) 2015-2019 University College London

Copyright (c) 2018-2019 The University of Edinburgh

All code is available for use and reuse under an MIT Licence. See LICENSE.

Third-party data

defoe/test/books/fixtures/000000037_0_1-42pgs__944211_dat_modified.zip

A modified copy of the file 000000037_0_1-42pgs__944211_dat.zip from OCR text derived from digitised books published 1880 - 1889 in ALTO XML (doi: 10.21250/db11), which is licensed under CC0 1.0 Public Domain.

The modifications are as follows:

000000037_metadata.xml:

-               <MODS:placeTerm type="text">Manchester</MODS:placeTerm>
+               <MODS:placeTerm type="text">Manchester [1823]</MODS:placeTerm>

000000218_metadata.xml:

-               <MODS:placeTerm type="text">London</MODS:placeTerm>
+               <MODS:placeTerm type="text">London [1823]</MODS:placeTerm>

defoe/test/alto/fixtures/000000037_000005.xml

A copy of the file ALTO/000000037_000005.xml from the above file.

defoe/test/papers/fixtures/1912_11_10.xml

A copy of the file newsrods/test/fixtures/2000_04_24.xml from ucl/i_newspaper_rods. The file has been renamed, most of its content removed, and its data replaced by dummy data.