Data Labeling for MLOps & Feedback Loops
🆕 🔥 Train custom transformers models with no-code: Argilla + AutoTrain
🆕 🔥 Deploy Argilla on Spaces
🆕 🔥 Since `1.2.0`, Argilla supports vector search for finding the records most similar to a given one. This feature combines vector (semantic) search with more traditional search (keyword and filter based). Learn more in this deep-dive guide.
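As a rough sketch of how this looks from Python (the dataset name, vector name, and embedding model below are illustrative, and signatures may vary slightly across versions):

```python
import argilla as rg
from sentence_transformers import SentenceTransformer

# Any embedding model works; Argilla only stores the raw float vectors.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Attach one or more named vectors to a record when logging it.
record = rg.TextClassificationRecord(
    text="I love this product!",
    vectors={"mini-lm": encoder.encode("I love this product!").tolist()},
)
rg.log(records=[record], name="my_dataset")

# Retrieve the records most similar to a query embedding; this can be
# combined with traditional keyword- and filter-based queries.
similar = rg.load(
    name="my_dataset",
    vector=("mini-lm", encoder.encode("This product is great").tolist()),
    limit=10,
)
```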
- Programmatic labeling using rules and weak supervision. Built-in label models (Snorkel, FlyingSquid); see the sketch after this list
- Bulk-labeling and search-driven annotation
- Iterate on training data with any pre-trained model or library
- Efficiently review and refine annotations in the UI and with Python
- Use Argilla built-in metrics and methods for finding label and data errors (e.g., cleanlab)
- Simple integration with active learning workflows
- Close the gap between production data and data collection activities
- Auto-monitoring for major NLP libraries and pipelines (spaCy, Hugging Face, FlairNLP)
- ASGI middleware for HTTP endpoints
- Argilla Metrics to understand data and model issues, like entity consistency for NER models
- Integrated with Kibana for custom dashboards
- Bring different users and roles into the NLP data and model lifecycles
- Organize data collection, review and monitoring into different workspaces
- Manage workspace access for different users
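For instance, the programmatic labeling workflow with rules and a built-in label model looks roughly like this (the dataset name, queries, and labels are made up for illustration):

```python
from argilla.labeling.text_classification import Rule, Snorkel, WeakLabels

# Rules are Elasticsearch-style queries mapped to labels.
rules = [
    Rule(query="refund OR reimburse", label="REFUND_REQUEST"),
    Rule(query="password AND reset", label="ACCOUNT_ACCESS"),
]

# Apply the rules to every record in the dataset to build a weak label matrix.
weak_labels = WeakLabels(rules=rules, dataset="my_dataset")

# Fit a label model (Snorkel here; FlyingSquid is also built in) to turn
# the noisy rule votes into a single denoised label per record.
label_model = Snorkel(weak_labels)
label_model.fit()
records = label_model.predict()
```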
👋 Welcome! If you have just discovered Argilla, this is the best place to get started. Argilla is composed of:
- Argilla Client: a powerful Python library for reading and writing data into Argilla, using all the libraries you love (transformers, spaCy, datasets, and any other).
- Argilla Server and UI: the API and UI for data annotation and curation.
To get started you need to:

- Launch the Argilla Server and UI.
- Pick a tutorial and start rocking with Argilla using Jupyter Notebooks or Google Colab.

Follow the steps on the Quickstart docs page.
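Once the server is up, connecting from a notebook takes a couple of lines (the URL and API key below are placeholders; use the values from your own deployment):

```python
import argilla as rg

# Point the client at your running Argilla instance.
rg.init(api_url="http://localhost:6900", api_key="argilla.apikey")

# Log a first record to verify everything is wired up.
rg.log(
    records=[rg.TextClassificationRecord(text="Hello Argilla!")],
    name="quickstart",
)
```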
🚒 If you find issues, get direct support from the team and other community members on the Slack Community.
- Open: Argilla is free, open-source, and 100% compatible with major NLP libraries (Hugging Face transformers, spaCy, Stanford Stanza, Flair, etc.). In fact, you can use and combine your preferred libraries without implementing any specific interface.
- End-to-end: Most annotation tools treat data collection as a one-off activity at the beginning of each project. In real-world projects, data collection is a key activity of the iterative process of ML model development. Once a model goes into production, you want to monitor and analyze its predictions and collect more data to improve your model over time. Argilla is designed to close this gap, enabling you to iterate as much as you need.
- User and Developer Experience: The key to sustainable NLP solutions is to make it easier for everyone to contribute to projects. Domain experts should feel comfortable interpreting and annotating data. Data scientists should feel free to experiment and iterate. Engineers should feel in control of data pipelines. Argilla optimizes the experience for these core users to make your teams more productive.
- Beyond hand-labeling: Classical hand-labeling workflows are costly and inefficient, but having humans in the loop is essential. Easily combine hand-labeling with active learning, bulk-labeling, zero-shot models, and weak supervision in novel data annotation workflows.
We love contributors and have launched a collaboration with JustDiggit to hand out our very own bunds, to help the re-greening of sub-Saharan Africa. To help our community with the creation of contributions, we have created our developer and contributor docs. Additionally, you can always schedule a meeting with our Developer Advocacy team so they can get you up to speed.
Argilla is an open-source MLOps tool for building and managing data for Natural Language Processing.
Argilla is useful if you want to:
- create a dataset for training a model.
- evaluate and improve an existing model.
- monitor an existing model to improve it over time and gather more training data.
You need to have a running instance of Elasticsearch and install the Argilla Python library. The library is used to read and write data into Argilla.
Currently, the only way to upload data into Argilla is by using the Python library.
This is based on the assumption that there's rarely a perfectly prepared dataset in the format expected by the data annotation tool.
Argilla is designed to enable fast iteration for users that are closer to data and models, namely data scientists and NLP/ML/Data engineers.
If you are familiar with libraries like Weights & Biases or MLflow, you'll find Argilla's `log` and `load` methods intuitive.
Beyond that, Argilla gives you different shortcuts and utils to make loading data into Argilla a breeze, such as the ability to read datasets directly from the Hugging Face Hub.
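As a sketch of that log/load symmetry and the Hub shortcut (dataset names are illustrative, and `read_datasets` assumes the Hub dataset's columns already follow Argilla's record format):

```python
import argilla as rg
from datasets import load_dataset

# Read a dataset straight from the Hugging Face Hub and convert it
# into Argilla records.
hf_dataset = load_dataset("argilla/news", split="train")  # illustrative name
records = rg.read_datasets(hf_dataset, task="TextClassification")
rg.log(records=records, name="news")

# Later, pull the (possibly annotated) data back out, W&B/MLflow-style.
dataset = rg.load(name="news")
```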
In summary, the recommended process for uploading data into Argilla is the following:

1. Install the Argilla Python library,
2. Open a Jupyter Notebook,
3. Make sure you have an Argilla server instance up and running,
4. Read your source dataset using Pandas, Hugging Face datasets, or any other library,
5. Do any data preparation, pre-processing, or pre-annotation with a pretrained model, and
6. Transform your dataset rows/records into Argilla records and log them into a dataset using `rb.log`. If your dataset is already loaded as a Hugging Face dataset, check the `read_datasets` method to make this process even simpler.
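Put together, a minimal sketch of steps 4-6 might look like this (the CSV path and column name are hypothetical):

```python
import argilla as rb
import pandas as pd

# Step 4: read your source dataset (a hypothetical CSV with a "text" column).
df = pd.read_csv("my_source_data.csv")

# Steps 5-6: transform each row into an Argilla record and log them.
records = [
    rb.TextClassificationRecord(text=row["text"])
    for _, row in df.iterrows()
]
rb.log(records=records, name="my_dataset")
```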
The training datasets created with Argilla are model agnostic.
You can choose one of many amazing frameworks to train your model, like transformers, spaCy, flair or sklearn.
Check out our deep dives and our tutorials on how Argilla integrates with these frameworks.
If you want to train a Hugging Face transformer or spaCy NER model, we provide a neat shortcut to prepare your dataset for training.
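Something along these lines (the dataset name is illustrative; check the docs for the exact signature in your version):

```python
import argilla as rg

# Load the annotated dataset from Argilla.
dataset_rg = rg.load("my_dataset")

# Shortcut: returns a datasets.Dataset ready for the transformers Trainer;
# a spaCy variant exists as well for producing spaCy training data.
train_ds = dataset_rg.prepare_for_training()
```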
You can use the same Elasticsearch instance/cluster for Argilla and other applications. You only need to perform some configuration; check the Advanced installation guide in the docs.
By default, Elasticsearch is quite conservative regarding the disk space it is allowed to use.
If less than 5% of your disk is free, Elasticsearch can enforce a read-only block on every index, and as a consequence, Argilla stops working.
To solve this, you can simply increase the watermark by executing the following command in your terminal:
```bash
curl -X PUT "localhost:9200/_cluster/settings?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.disk.watermark.flood_stage": "99%"}}'
```