brise-plandok

Information extraction from text documents of the zoning plan of the City of Vienna

Work supported by BRISE-Vienna (UIA04-081), a European Union Urban Innovative Actions project.

The asail2021 tag contains the code in the state presented in our 2021 ASAIL paper. Legacy code can be found in the asail folder.

Requirements

Install the brise_plandok repository:

pip install .

# Or, for an editable install that picks up local changes
pip install -e .

Installing this repository will also install the tuw_nlp repository, a graph-transformation framework. To learn more, visit https://github.com/recski/tuw-nlp.

Ensure that you have at least Java 8 installed, as it is required by the alto library.

Coding guidelines

This repository uses black for code formatting and flake8 for PEP8 compliance. To install the pre-commit hooks, run:

pre-commit install

This creates the .git/hooks/pre-commit file, which automatically reformats all modified files before each commit.

Run black separately

pip install black
black .

Run flake8 separately

pip install flake8
flake8 .

Annotated Data Description

See DATA.md.

Extraction service

Start service with your own data

python brise_plandok/services/full_extractor.py -d <DATA_DIR>

Example: python brise_plandok/services/full_extractor.py -d data/train

Start service from Docker

The docker image downloads the data from our cloud storage.

# Build docker image
docker build --tag brise-attr-extraction .

# Start service
docker run -p 5000:5000 brise-attr-extraction

Call service

In both cases, you can now reach the service by calling curl http://localhost:5000/<endpoint>/<doc_id>. If the doc_id does not exist, a Not found response is returned.

brise-extract-api

curl http://localhost:5000/brise-extract-api/7377

psets

# To get minimal psets
curl http://localhost:5000/psets/7377

# To get full psets
curl http://localhost:5000/psets/7377?full=true
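The endpoints above can also be called programmatically. Below is a minimal sketch using only the Python standard library; the helper names `build_url` and `fetch` are illustrative (not part of this repository), and the response is assumed to be JSON:

```python
import json
from urllib import parse, request
from urllib.error import HTTPError

BASE_URL = "http://localhost:5000"

def build_url(endpoint, doc_id, full=False):
    # Assemble a URL such as http://localhost:5000/psets/7377?full=true
    url = f"{BASE_URL}/{endpoint}/{doc_id}"
    if full:
        url += "?" + parse.urlencode({"full": "true"})
    return url

def fetch(endpoint, doc_id, full=False):
    # Return the parsed JSON response, or None if the doc_id is unknown (404).
    try:
        with request.urlopen(build_url(endpoint, doc_id, full)) as resp:
            return json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            return None
        raise
```

For example, `fetch("psets", 7377, full=True)` mirrors the last curl call above.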

Demo for attribute names only

To run the browser-based demo described in the paper (also available online), first start rule extraction as a service like this:

python brise_plandok/services/attribute_extractor.py

Then run the frontend with this command:

streamlit run brise_plandok/frontend/extract.py

To run the prover component of our system, also start the prover service from this repository: https://github.com/adaamko/BRISEprover. It starts a small Flask service on port 5007, which the demo service uses.

The demo can then be accessed from your web browser at http://localhost:8501/

Preprocessing

Input data

All steps described below can be run on the sample documents included in this repository under sample_data.

The preprocessed version of all plan documents (as of December 2020) can be downloaded as a single JSON file. If you would like to customize preprocessing, you can also download the raw text documents.

NLP Pipeline

Extract section structure from raw text and run NLP pipeline (sentence segmentation, tokenization, dependency parsing):

python brise_plandok/preproc/plandok.py sample_data/txt/*.txt > sample_data/json/sample.jsonl

Attribute extraction task

To run the current best rule-based extraction, see here.

To run experiments with POTATO, see here.

To have a look at our baseline experiments, see here.

Annotation process

For details about the annotation process, see here.

Development

For development details read more here.

References

The rule extraction system is described in the following paper:

Gabor Recski, Björn Lellmann, Adam Kovacs, Allan Hanbury: Explainable rule extraction via semantic graphs (...)

The demo also uses the deontic logic prover described in this paper.

The preprocessing pipeline relies on the Stanza library.