This repository contains a collection of recipes for Prodigy, our scriptable annotation tool for text, images and other data. In order to use this repo, you'll need a license for Prodigy – see this page for more details. For questions and bug reports, please use the Prodigy Support Forum. If you've found a mistake or bug, feel free to submit a pull request.
✨ Important note: The recipes in this repository aren't 100% identical to the built-in recipes shipped with Prodigy. They've been edited to include comments and more information, and some of them have been simplified to make it easier to follow what's going on, and to use them as the basis for a custom recipe.
Once Prodigy is installed, you should be able to run the `prodigy` command from your terminal, either directly or via `python -m`:

```bash
python -m prodigy
```

The `prodigy` command lists the built-in recipes. To use a custom recipe script, simply pass the path to the file using the `-F` argument:

```bash
python -m prodigy ner.teach your_dataset en_core_web_sm ./data.jsonl --label PERSON -F prodigy-recipes/ner/ner_teach.py
```

You can also use the `--help` flag for an overview of the available arguments of a recipe, e.g. `prodigy ner.teach -F ner_teach.py --help`.
You can edit the code in the recipe script to customize how Prodigy behaves.
- Try replacing `prefer_uncertain()` with `prefer_high_scores()`.
- Try writing a custom sorting function. It just needs to be a generator that yields a sequence of example dicts, given a sequence of `(score, example)` tuples.
- Try adding a filter that drops some questions from the stream. For instance, try writing a filter that only asks you questions where the entity is two words long.
- Try customizing the `update()` callback to include extra logging or extra functionality.
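A custom sorter and filter along those lines might look like this. This is a rough sketch: the function names are made up, and it assumes the usual Prodigy task format, where each example dict has a `"text"` and a list of `"spans"` with `"start"`/`"end"` character offsets.

```python
def high_scores_first(scored_stream, threshold=0.5):
    # A minimal custom sorter: given (score, example) tuples,
    # yield only the example dicts scoring above the threshold.
    for score, example in scored_stream:
        if score >= threshold:
            yield example

def filter_two_word_entities(stream):
    # Only ask questions where a highlighted entity is two words long.
    for example in stream:
        for span in example.get("spans", []):
            entity = example["text"][span["start"]:span["end"]]
            if len(entity.split()) == 2:
                yield example
                break
```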
### Named Entity Recognition (NER)

| Recipe | Description |
| --- | --- |
| `ner.teach` | Collect the best possible training data for a named entity recognition model with the model in the loop. Based on your annotations, Prodigy will decide which questions to ask next. |
| `ner.match` | Suggest phrases that match a given patterns file, and mark whether they are examples of the entity you're interested in. The patterns file can include exact strings or token patterns for use with spaCy's `Matcher`. |
| `ner.manual` | Mark spans manually by token. Requires only a tokenizer and no entity recognizer, and doesn't do any active learning. Optionally, pre-highlight spans based on patterns. |
| `ner.fuzzy_manual` | Like `ner.manual`, but uses the `FuzzyMatcher` from the spaczz library to pre-highlight candidates. |
| `ner.manual.bert` | Use BERT's word piece tokenizer for efficient manual NER annotation for transformer models. |
| `ner.correct` | Create gold-standard data by correcting a model's predictions manually. This recipe used to be called `ner.make_gold`. |
| `ner.silver-to-gold` | Take an existing "silver" dataset with binary accept/reject annotations, merge the annotations to find the best possible analysis given the constraints defined in the annotations, and manually edit it to create a perfect and complete "gold" dataset. |
| `ner.eval_ab` | Evaluate two NER models by comparing their predictions and building an evaluation set from the stream. |
| `ner_fuzzy_manual` | Mark spans manually by token, with suggestions from the spaczz fuzzy matcher pre-highlighted. |
### Text Classification

| Recipe | Description |
| --- | --- |
| `textcat.manual` | Manually annotate categories that apply to a text. Supports annotation tasks with single and multiple labels. Multiple labels can optionally be flagged as exclusive. |
| `textcat.correct` | Correct the textcat model's predictions manually. Predictions above the acceptance threshold (0.5 by default) will be automatically preselected. Prodigy will infer whether the categories should be mutually exclusive based on the component configuration. |
| `textcat.teach` | Collect the best possible training data for a text classification model with the model in the loop. Based on your annotations, Prodigy will decide which questions to ask next. |
| `textcat.custom-model` | Use active learning-powered text classification with a custom model. To demonstrate how it works, this demo recipe uses a simple dummy model that "predicts" random scores. But you can swap it out for any model of your choice, for example a text classification model implemented in PyTorch, TensorFlow or scikit-learn. |
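The dummy model in a recipe like `textcat.custom-model` boils down to two pieces: a callable that scores incoming examples, and an `update()` callback. A minimal sketch of that idea, with illustrative names rather than Prodigy's actual API:

```python
import random

class DummyModel:
    # Stands in for a real classifier by assigning random scores.
    def __init__(self, labels):
        self.labels = labels

    def __call__(self, stream):
        # Yield (score, example) tuples for a sorter to consume.
        for example in stream:
            example["label"] = random.choice(self.labels)
            yield (random.random(), example)

    def update(self, answers):
        # A real model would update its weights from the annotated
        # answers here; the dummy model does nothing.
        pass
```

To swap in a real model, replace the random scoring in `__call__` with your model's predictions and implement `update()` to train on the collected annotations.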
### Terminology

| Recipe | Description |
| --- | --- |
| `terms.teach` | Bootstrap a terminology list with word vectors and seed terms. Prodigy will suggest similar terms based on the word vectors, and update the target vector accordingly. |
### Image

| Recipe | Description |
| --- | --- |
| `image.manual` | Manually annotate images by drawing rectangular bounding boxes or polygon shapes on the image. |
| `image-caption` | Annotate images with captions, pre-populate the captions with an image captioning model implemented in PyTorch, and perform error analysis. |
| `image.frozenmodel` | Model-in-the-loop manual annotation using TensorFlow's Object Detection API. |
| `image.servingmodel` | Model-in-the-loop manual annotation using TensorFlow's Object Detection API, served via TensorFlow Serving. |
| `image.trainmodel` | Model-in-the-loop manual annotation and training using TensorFlow's Object Detection API. |
### Other

| Recipe | Description |
| --- | --- |
| `mark` | Click through pre-prepared examples, with no model in the loop. |
| `choice` | Annotate data with multiple-choice options. The annotated examples will have an additional property `"accept": []` mapping to the ID(s) of the selected option(s). |
| `question_answering` | Annotate question/answer pairs with a custom HTML interface. |
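For instance, an annotated example coming out of a `choice` recipe might look like the dict below, with the `"accept"` list holding the IDs of the selected options. The text and labels here are made up for illustration:

```python
annotated = {
    "text": "I love this product!",
    "options": [
        {"id": "POSITIVE", "text": "Positive"},
        {"id": "NEGATIVE", "text": "Negative"},
    ],
    # IDs of the option(s) the annotator selected:
    "accept": ["POSITIVE"],
    "answer": "accept",
}
```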
### Contributed by the Community

| Recipe | Author | Description |
| --- | --- | --- |
| `phrases.teach` | @kabirkhan | Now part of `sense2vec`. |
| `phrases.to-patterns` | @kabirkhan | Now part of `sense2vec`. |
| `records.link` | @kabirkhan | Link records across multiple datasets using the `dedupe` library. |
To make it even easier to get started, we've also included a few `example-datasets`, both raw data and data containing annotations created with Prodigy. For examples of token-based match patterns to use with recipes like `ner.teach` or `ner.match`, see the `example-patterns` directory.
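Each pattern in those files is one JSON object per line, with a `label` and either an exact string or a token pattern for spaCy's `Matcher`. A minimal sketch of what such a file can contain (the labels and terms here are made up):

```
{"label": "FRUIT", "pattern": [{"lower": "apple"}]}
{"label": "FRUIT", "pattern": [{"lower": "goji"}, {"lower": "berry"}]}
{"label": "ORG", "pattern": "Apple Inc."}
```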