MedCAT can be used to extract information from Electronic Health Records (EHRs) and link it to biomedical ontologies like SNOMED-CT and UMLS. Paper on arXiv.
- New Feature and Tutorial [7. December 2021]: Exploring Electronic Health Records with MedCAT and Neo4j
- New Minor Release [20. October 2021]: Introducing model packs, new, faster multiprocessing for large datasets (100M+ documents), and an improved MetaCAT.
- New Release [1. August 2021]: Upgraded MedCAT to use spaCy v3; new scispaCy models have to be downloaded - all old CDBs (compatible with MedCAT v1) will work without any changes.
- New Feature and Tutorial [8. July 2021]: Integrating 🤗 Transformers with MedCAT for biomedical NER+L
- General [1. April 2021]: MedCAT is upgraded to v1. Unfortunately, this introduces breaking changes with older models (MedCAT v0.4), as well as potential problems with all code that used the MedCAT package. MedCAT v0.4 is available on the legacy branch and will still be supported until 1. July 2021 (with respect to potential bug fixes); after that it will remain available but no longer be updated.
- Paper: What’s in a Summary? Laying the Groundwork for Advances in Hospital-Course Summarization
- (more...)
A demo application is available at MedCAT. This was trained on MIMIC-III and all of SNOMED-CT.
A guide on how to use MedCAT is available in the tutorial folder. Read more about MedCAT on Towards Data Science.
- MedCATtrainer - an interface for building, improving and customising a given Named Entity Recognition and Linking (NER+L) model (MedCAT) for biomedical domain text.
- MedCATservice - implements the MedCAT NLP application as a service behind a REST API.
- iCAT - A docker container for CogStack/MedCAT/HuggingFace development in isolated environments.
- Upgrade pip:

```shell
pip install --upgrade pip
```

- Install MedCAT:
  - For macOS/Linux:

```shell
pip install --upgrade medcat
```

  - For Windows (see the PyTorch documentation):

```shell
pip install --upgrade medcat -f https://download.pytorch.org/whl/torch_stable.html
```
- Quickstart (MedCAT v1.2+):

```python
from medcat.cat import CAT

# Download the model_pack from the models section in the GitHub repo.
cat = CAT.load_model_pack('<path to downloaded zip file>')

# Test it
text = "My simple document with kidney failure"
entities = cat.get_entities(text)
print(entities)

# To run unsupervised training over documents
data_iterator = <your iterator>
cat.train(data_iterator)

# Once done, save the whole model_pack
cat.create_model_pack(<save path>)
```
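The iterator passed to `cat.train` only needs to yield the free text of one document per iteration. A minimal sketch of such an iterator, assuming your documents sit in a CSV file with a `text` column (the filename and column name here are hypothetical - adapt them to your dataset):

```python
import csv

def document_iterator(csv_path):
    """Yield the free text of each document, one at a time.

    Streams the file row by row, so it also works for datasets
    that do not fit in memory.
    """
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            yield row['text']

# Usage with a hypothetical file:
# cat.train(document_iterator('my_ehr_documents.csv'))
```

Because training consumes the iterator lazily, a generator like this is usually preferable to loading all documents into a list first.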
- Quick start with separate models:

New models (MedCAT v1.2+) need the spaCy `en_core_web_md` model, while older ones use the scispaCy models; install the one you need, or all of them if unsure. If using model packs, you do not need to download these models:

```shell
python -m spacy download en_core_web_md
pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_md-0.4.0.tar.gz
pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.4.0/en_core_sci_lg-0.4.0.tar.gz
```
```python
from medcat.vocab import Vocab
from medcat.cdb import CDB
from medcat.cat import CAT
from medcat.meta_cat import MetaCAT

# Load the vocab model you downloaded
vocab = Vocab.load('<path to the vocab file>')

# Load the cdb model you downloaded
cdb = CDB.load('<path to the cdb file>')

# Download the mc_status model from the models section below and unzip it
mc_status = MetaCAT.load('<path to the unzipped mc_status directory>')

cat = CAT(cdb=cdb, config=cdb.config, vocab=vocab, meta_cats=[mc_status])

# Test it
text = "My simple document with kidney failure"
entities = cat.get_entities(text)
print(entities)

# To run unsupervised training over documents
data_iterator = <your iterator>
cat.train(data_iterator)

# Once done, you can package the current pipeline into a model_pack
cat.create_model_pack(<save path>)
```
- Quick start to create CDB and vocab models using local data and a config file:

```shell
# Run model creator with a local config file
python medcat/utils/model_creator.py <path_to_model_creator_config_file>

# Run model creator with the example file
python medcat/utils/model_creator.py tests/model_creator/config_example.yml
```
| Model creator parameter | Description |
|---|---|
| `concept_csv_file` | Path to a file containing UMLS concepts, including primary names, synonyms, types and source ontology. See examples and `tests/model_creator/umls_sample.csv` for a format description and examples. |
| `unsupervised_training_data_file` | Path to a file containing the text dataset used for spell checking and unsupervised training. |
| `output_dir` | Path to the output directory for writing the CDB and vocab models. |
| `medcat_config_file` | Path to an optional config file for adjusting MedCAT properties; see configs, `medcat/config.py` and `tests/model_creator/medcat.txt`. |
| `unigram_table_size` | Optional parameter setting the initialization size of the unigram table in the vocab model. Default is 100000000; for testing with a small unsupervised training data file, a much smaller size can work. |
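Putting the parameters above together, a model-creator config might look like the sketch below. This is an illustration only - the paths are hypothetical, and `tests/model_creator/config_example.yml` in the repository is the authoritative reference for the exact format:

```yaml
concept_csv_file: data/umls_concepts.csv
unsupervised_training_data_file: data/clinical_notes.txt
output_dir: models/
medcat_config_file: configs/medcat.txt
# Shrink the unigram table when experimenting with a small dataset
unigram_table_size: 10000
```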
A basic trained model is made public. It contains ~35K concepts available in MedMentions.
- MedMentions with Status (Is Concept Affirmed or Negated/Hypothetical): Download
- Vocabulary: Download - built from MedMentions
- CDB: Download - built from MedMentions
- MetaCAT Status: Download - built from a sample of MIMIC-III; detects whether an annotation is Affirmed (Positive) or Other (Negated or Hypothetical)
(Note: This was compiled from MedMentions and does not have any data from NLM, as that data is not publicly available.)
If you have access to UMLS or SNOMED-CT and can provide some proof (a screenshot of the UMLS profile page is perfect, feel free to redact all information you do not want to share), contact us - we are happy to share the pre-built CDB and Vocab for those databases.
Entity extraction was trained on MedMentions; in total it has ~35K entities from UMLS.
The vocabulary was compiled from Wiktionary; in total ~800K unique words.
A big thank you goes to spaCy and Hugging Face - who made life a million times easier.
@ARTICLE{Kraljevic2021-ln,
title="Multi-domain clinical natural language processing with {MedCAT}: The Medical Concept Annotation Toolkit",
author="Kraljevic, Zeljko and Searle, Thomas and Shek, Anthony and Roguski, Lukasz and Noor, Kawsar and Bean, Daniel and Mascio, Aurelie and Zhu, Leilei and Folarin, Amos A and Roberts, Angus and Bendayan, Rebecca and Richardson, Mark P and Stewart, Robert and Shah, Anoop D and Wong, Wai Keong and Ibrahim, Zina and Teo, James T and Dobson, Richard J B",
journal="Artif. Intell. Med.",
volume=117,
pages="102083",
month=jul,
year=2021,
issn="0933-3657",
doi="10.1016/j.artmed.2021.102083"
}