This repository contains code to analyse historical books and newspapers datasets using Apache Spark.
This repository was forked from the previous defoe repository in November 2019 to add new functionality to defoe in support of:
- Text and Data Mining project, DDI Project - October 2019 to March 2020.
- CDCS Text and Data Mining Lab - March 2020 to October 2020.
Both projects have used defoe as the backend to perform text mining queries across several digitised historical document collections (see below).
To learn more about the outcomes of the CDCS Text and Data Mining Lab using defoe, we recommend checking the defoe CDCS_Text_Mining_Lab repository.
Note: since November 2019, all new defoe queries, functionality and support have been kept in this repository.
Developer (from November 2019): Rosa Filgueira (EPCC)
defoe already supports several datasets. In order to query a dataset, defoe needs a list of the files and/or directories that make up that dataset. Many of the lists used so far can be found under the others directory; they need to be modified to point at the corresponding paths on your system.
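For example, a data file is simply a plain-text list of the files to ingest, one path per line (the paths below are hypothetical):

```
/mnt/data/blbooks/000000037_0_1-42pgs__944211_dat.zip
/mnt/data/blbooks/000000216_1_1-318pgs__632698_dat.zip
```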
This dataset consists of ~1TB of digitised versions of ~68,000 books from the 16th to the 19th centuries. The books have been scanned into a collection of XML documents. Each book has one XML document per page plus one XML document for metadata about the book as a whole. The XML documents for each book are held within a compressed ZIP file. Each ZIP file holds the XML documents for a single book (the exception is 1880-1889's 000000037_0_1-42pgs__944211_dat.zip, which holds the XML documents for 2 books). These ZIP files occupy ~224GB.
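As a rough illustration of this per-book layout, the sketch below (Python standard library only; the local path is hypothetical) separates a book's metadata document from its per-page documents:

```python
import zipfile

# One ZIP per book, e.g. the file named above (path is hypothetical).
book = "000000037_0_1-42pgs__944211_dat.zip"

with zipfile.ZipFile(book) as zf:
    names = zf.namelist()
    # One metadata document describing the book as a whole...
    metadata = [n for n in names if n.endswith("_metadata.xml")]
    # ...plus one XML document per scanned page.
    pages = [n for n in names if n.endswith(".xml") and n not in metadata]
    print(f"{len(metadata)} metadata document(s), {len(pages)} page documents")
```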
This dataset is available under an open, public domain licence. See Datasets for content mining and BL Labs Flickr Data: Book data and tag history (Dec 2013 - Dec 2014). For links to the data itself, see Digitised Books largely from the 19th Century. The data is provided by Gale, a division of CENGAGE.
This dataset consists of ~1TB of digitised versions of newspapers from the 18th to the early 20th century. Each newspaper has an associated folder of XML documents where each XML document corresponds to a single issue of the newspaper. Each XML document conforms to a British Library-specific XML schema.
This dataset is available, under licence, from Gale, a division of CENGAGE. The dataset is in 5 parts, e.g. Part I: 1800-1900. For links to all 5 parts, see British Library Newspapers.
The code can also handle the Times Digital Archive (TDA).
This dataset is available, under licence, from Gale, a division of CENGAGE.
The code was used with papers from 1785-2009.
This dataset is available, under licence, from Find My Past. To run queries with this dataset we can choose to use either:
- ALTO model: for running queries at page level. These are the same queries as used for the BL books.
- FMP model: for running queries at article level.
Papers Past provides digitised New Zealand and Pacific newspapers from the 19th and 20th centuries.
Data can be accessed via API calls which return search results in the form of XML documents. Each XML document holds one or more articles.
This dataset is available, under licence, from Papers Past.
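A hedged sketch of consuming such a response follows. Everything below (endpoint URL, parameters, element names) is a placeholder rather than the real Papers Past API, which requires licensed access:

```python
import requests
from lxml import etree

# Placeholder endpoint and parameters: NOT the real Papers Past API.
response = requests.get(
    "https://example.org/paperspast/search",
    params={"text": "gold rush", "key": "YOUR_API_KEY"},
)

# Each returned XML document holds one or more articles.
root = etree.fromstring(response.content)
for article in root.iter("article"):  # placeholder element name
    print(article.findtext("title"))  # placeholder child element
```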
The National Library of Scotland provides several digitised collections, such as:
- Encyclopaedia Britannica from the 18th to the 20th centuries.
- ChapBooks
- Scottish Gazetteers
Furthermore, we have created 4 Knowledge Graphs to represent the NLS digital collections above. Details of those KGs are available here.
Note that ALL collections offered by the NLS use the same XML and METS format; therefore, we can use the defoe NLS model to query any of those collections.
Set up (local):
Set up (Urika):
- Set up Urika environment
- Import data into Urika
- Import British Library Books and Newspapers data into Urika (Alan Turing Institute-Scottish Enterprise Data Engineering Program University of Edinburgh project members only)
Set up (Cirrus - HPC Cluster):
Set up (VM):
Run queries:
- Specify data to query
- Specify Azure data to query
- Run individual queries
- Run multiple queries at once - just one ingestion
- Extracting, Transforming and Saving RDD objects to HDFS as a dataframe (see the sketch after this list)
- Loading dataframe from HDFS and performing a query
- Extracting, Transforming and Saving RDD objects to PostgreSQL database
- Loading dataframe from PostgreSQL database and performing a query
- Extracting, Transforming and Saving RDD objects to ElasticSearch
- Loading dataframe from ElasticSearch and performing a query
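The HDFS round trip above reduces to standard PySpark DataFrame I/O. A minimal sketch, assuming a Spark installation and hypothetical paths and data (defoe's own scripts handle this wiring, and may use a different storage format):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("defoe-hdfs-sketch").getOrCreate()

# Extract/transform: pretend this RDD of (year, term, count) tuples came
# from a defoe ingestion query (the data here is made up).
rows = spark.sparkContext.parallelize([(1841, "cholera", 12), (1842, "cholera", 7)])
df = spark.createDataFrame(rows, ["year", "term", "count"])

# Save the dataframe to HDFS (the path is hypothetical).
df.write.mode("overwrite").parquet("hdfs://namenode:8020/user/defoe/results")

# Later: load the dataframe back and query it without re-ingesting the XML.
df2 = spark.read.parquet("hdfs://namenode:8020/user/defoe/results")
df2.filter(df2["term"] == "cholera").show()
```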
Available queries:
- ALTO documents (British Library Books and Find My Past Newspapers (at page level))
- British Library Newspapers (these can also be run on the Times Digital Archive)
- FMP newspapers (Find My Past Newspapers datasets at article level)
- Papers Past New Zealand and Pacific newspapers
- Generic XML document queries (these can be run on arbitrary XML documents)
- NLS queries (these can be run on the Encyclopaedia Britannica, Scottish Gazetteers or ChapBooks datasets)
- HDFS queries (running queries against HDFS files - for interoperability across models)
- ES queries (running queries against ES - for interoperability across models)
- PostgreSQL queries (running queries against PostgreSQL database - for interoperability across models)
- NLSArticles queries (just for automatically extracting articles from the Encyclopaedia Britannica dataset)
- SPARQL queries (just for working with RDF Knowledge Graphs)
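All of the queries above share the same basic contract: a Python module, loaded by name, whose do_query entry point receives the ingested data as a Spark RDD. The sketch below is a simplified, hypothetical example; the objects in the RDD (and their attributes) vary by model:

```python
def do_query(archives, config_file=None, logger=None, context=None):
    """
    Count documents per year (illustrative only).

    `archives` is the RDD defoe builds from the data file; `list(archive)`
    and `document.year` are assumptions modelled on the books queries.
    The returned dict is written out by defoe as the query result.
    """
    documents = archives.flatMap(lambda archive: list(archive))
    counts = (
        documents.map(lambda document: (document.year, 1))
        .reduceByKey(lambda x, y: x + y)
    )
    return dict(counts.collect())
```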
Note: if we have an RDF KG of a collection, we have to modify the sparql_data.txt file accordingly (see the example after this list). The KGs available are here:
- http://localhost:3030/total_eb/sparql
- http://localhost:3030/chapbooks_scotland/sparql
- http://localhost:3030/ladies_debating/sparql
- http://localhost:3030/gazetters_scotlad/sparql
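As an illustration (assuming, as the note above suggests, that sparql_data.txt holds the endpoint URL of the KG to query), the file for the Encyclopaedia Britannica KG would contain a single line:

```
http://localhost:3030/total_eb/sparql
```

Independently of defoe, the same endpoint can be queried from Python with the SPARQLWrapper library; the query below is only an example:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Point at the local Fuseki endpoint for the Encyclopaedia Britannica KG.
sparql = SPARQLWrapper("http://localhost:3030/total_eb/sparql")
sparql.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Run the query and print the first few triples.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```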
Developers:
The code to analyse the British Library Books dataset has its origins in the first phase of 'Enabling Complex Analysis of Large Scale Digital Collections', a project funded by the Jisc Research Data Spring in 2015.
The project team included: Melissa Terras (UCL), James Baker (British Library), David Beavan (UCL), James Hetherington (UCL), Martin Zaltz Austwick (UCL), Oliver Duke-Williams (UCL), Will Finley (University of Sheffield), Helen O'Neill (UCL), Anne Welsh (UCL).
The code originated from the GitHub repository UCL-dataspring/cluster-code:
- Branch: sparkrods.
- Commit: 08d8bfd0a6cf37f7e4408a9475b38d6747c0cfeb (10 November 2016).
- Developers: James Hetherington (UCL), James Baker (BL)
The code to analyse the Times Digital Archive and British Library Newspapers datasets has its origins in code developed by UCL to analyse the Times Digital Archive. This work took place from 2016 to 2018.
The project team included: James Hetherington (UCL), Raquel Alegre (UCL), Roma Klapaukh (UCL).
The code originated from the GitHub repository UCL/i_newspaper_rods:
- Branch: master.
- Commit: ffe58042b7c4655274aa6b99fbdd6f6b0304f7ff (22 June 2018)
- Developers: James Hetherington (UCL), Raquel Alegre (UCL), Roma Klapaukh (UCL).
Both of the above codebases were updated and extended by EPCC as part of the Research Engineering Group of The Alan Turing Institute. The work focused on running both codes on the Alan Turing Institute Cray Urika-GX Service and analysing the British Library Books, British Library Newspapers and Papers Past New Zealand and Pacific newspapers datasets.
This work was done in conjunction with Melissa Terras, College of Arts, Humanities and Social Sciences (CAHSS), The University of Edinburgh. The work was funded by Scottish Enterprise as part of the Alan Turing Institute-Scottish Enterprise Data Engineering Program. This work ran from 2018 to 2019 and is ongoing at present, using this repository.
The project team includes: Rosa Filgueira (EPCC), Mike Jackson (EPCC), Anna Roubickova (EPCC).
The code originated from the following GitHub repositories:
- alan-turing-institute/cluster-code
- Branch: epcc-sparkrods
- Commit: 00561bff61030fdff131a20fe45ede97897c4743 (21 December 2018)
- alan-turing-institute/i_newspaper_rods
- Branch: epcc-master
- Commit: b9c89764f97987ff1600a35cc3d3bc7bb68da79f (28 January 2019).
- alan-turing-institute/i_newspaper_rods
- Branch: other-archives
- Commit: 43748ccd3839b71347660f4375e9a18c45648118 (13 February 2019).
- Developers: Rosa Filgueira (EPCC), Mike Jackson (EPCC), Anna Roubickova (EPCC).
The code to analyse the Find My Past Newspapers dataset and to support blobs on Azure was developed by David Beavan (The Alan Turing Institute) as part of Living With Machines, funded by UKRI's Strategic Priorities Fund and led by the Arts and Humanities Research Council (AHRC). Living With Machines runs from 2018 to 2023 and is ongoing at present, using this repository.
The development team includes: David Beavan (Alan Turing Institute), Rosa Filgueira (EPCC), Mike Jackson (EPCC).
The code originated from the following GitHub repositories:
- DavidBeavan/cluster-code
- Branch: epcc-sparkrods
- Commit: 8e37fdaa0a57e164aecbdadaa4981b5b225a3932 (15 January 2019)
- DavidBeavan/cluster-code
- Branch: azure-sparkrods
- Commit: 8110fb498631edcc5b385029cf5a45dd91d216fc (23 November 2018)
- Developer: David Beavan (Alan Turing Institute)
The code is called "defoe" after Daniel Defoe, writer, journalist and pamphleteer of the 17th and 18th centuries.
Copyright (c) 2015-2019 University College London
Copyright (c) 2018-2019 The University of Edinburgh
All code is available for use and reuse under an MIT Licence. See LICENSE.
defoe/test/books/fixtures/000000037_0_1-42pgs__944211_dat_modified.zip
A modified copy of the file 000000037_0_1-42pgs__944211_dat.zip
from OCR text derived from digitised books published 1880 - 1889 in ALTO XML (doi: 10.21250/db11), which is licensed under CC0 1.0 Public Domain.
The modifications are as follows:
000000037_metadata.xml:
- <MODS:placeTerm type="text">Manchester</MODS:placeTerm>
+ <MODS:placeTerm type="text">Manchester [1823]</MODS:placeTerm>
000000218_metadata.xml:
- <MODS:placeTerm type="text">London</MODS:placeTerm>
+ <MODS:placeTerm type="text">London [1823]</MODS:placeTerm>
defoe/test/alto/fixtures/000000037_000005.xml
A copy of the file ALTO/000000037_000005.xml
from the above file.
defoe/test/papers/fixtures/1912_11_10.xml
A copy of the file newsrods/test/fixtures/2000_04_24.xml from ucl/i_newspaper_rods. The file has been renamed, most of its content removed, and its data replaced by dummy data.