Towhee makes it easy to build neural data processing pipelines for AI applications. We provide hundreds of models, algorithms, and transformations that can be used as standard pipeline building blocks. You can use Towhee's Pythonic API to build a prototype of your pipeline and automatically optimize it for production-ready environments.
🎨 Various Modalities: Towhee supports data processing on a variety of modalities, including images, videos, text, audio, molecular structures, etc.
🎓 SOTA Models: Towhee provides SOTA models across 5 fields (CV, NLP, Multimodal, Audio, Medical), 15 tasks, and 140+ model architectures. These include BERT, CLIP, ViT, SwinTransformer, MAE, and data2vec, all pretrained and ready to use.
📦 Data Processing: Towhee also provides traditional methods alongside neural network models to help you build practical data processing pipelines. We have a rich pool of operators available, such as video decoding, audio slicing, frame sampling, feature vector dimension reduction, ensembling, and database operations.
🐍 Pythonic API: Towhee includes a Pythonic method-chaining API for describing custom data processing pipelines. We also support schemas, which makes processing unstructured data as easy as handling tabular data.
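To illustrate what "method-chaining" means in practice, here is a conceptual sketch in plain Python (not Towhee's actual API): each stage returns a new collection, so a pipeline reads top to bottom as a single expression.

```python
# Conceptual sketch of the method-chaining style (plain Python, not Towhee's API):
# each stage consumes the previous stage's output and returns a new chainable object.
class Chain:
    def __init__(self, items):
        self.items = list(items)

    def map(self, fn):
        # apply fn to every element, returning a new Chain so calls can be chained
        return Chain(fn(x) for x in self.items)

    def filter(self, pred):
        # keep only elements satisfying pred
        return Chain(x for x in self.items if pred(x))

result = (
    Chain([1, 2, 3, 4])
    .map(lambda x: x * x)     # 1, 4, 9, 16
    .filter(lambda x: x > 4)  # 9, 16
    .items
)
print(result)  # [9, 16]
```

Towhee's DataCollection API follows this style, with operators (decoding, embedding, indexing) taking the place of `map` and `filter`.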
v0.9.0 Dec. 2, 2022
- Added one video classification model: Vis4mer
- Added three visual backbones: MCProp, RepLKNet, Shunted Transformer
- Added two code search operators: code_search.codebert, code_search.unixcoder
- Added five image captioning operators: image_captioning.expansionnet-v2, image_captioning.magic, image_captioning.clip_caption_reward, image_captioning.blip, image_captioning.clipcap
- Added five image-text embedding operators: image_text_embedding.albef, image_text_embedding.ru_clip, image_text_embedding.japanese_clip, image_text_embedding.taiyi, image_text_embedding.slip
- Added one machine-translation operator: machine_translation.opus_mt
- Added one tiny-segment filtering operator: video-copy-detection.filter-tiny-segments
- Added an advanced tutorial for audio fingerprinting: Audio Fingerprint II: Music Detection with Temporal Localization (accuracy improved from 84% to 90%)
v0.8.1 Sep. 30, 2022
- Added four visual backbones: ISC, MetaFormer, ConvNext, HorNet
- Added two video de-copy operators: select-video, temporal-network
- Added one image embedding operator designed for image retrieval and video de-copy, with SOTA performance on the VCSL dataset: isc
- Added one audio embedding operator designed for audio fingerprinting: audio_embedding.nnfp (with pretrained weights)
- Added one tutorial for video de-copy: How to Build a Video Segment Copy Detection System
- Added one beginner tutorial for audio fingerprinting: Audio Fingerprint I: Build a Demo with Towhee & Milvus
v0.8.0 Aug. 16, 2022
- Towhee now supports generating an Nvidia Triton Server from a Towhee pipeline, with additional support for GPU image decoding.
- Added one audio fingerprinting model: nnfp
- Added two image embedding models: RepMLP, WaveViT
v0.7.3 Jul. 27, 2022
- Added one multimodal (text/image) model: CoCa.
- Added two video models for grounded situation recognition & repetitive action counting: CoFormer, TransRAC.
- Added two SOTA models for image tasks (image retrieval, image classification, etc.): CVNet, MaxViT.
v0.7.1 Jul. 1, 2022
- Added one image embedding model: MPViT.
- Added two video retrieval models: BridgeFormer, collaborative-experts.
- Added FAISS-based ANNSearch operators: to_faiss, faiss_search.
v0.7.0 Jun. 24, 2022
- Added six video understanding/classification models: Video Swin Transformer, TSM, Uniformer, OMNIVORE, TimeSformer, MoViNets.
- Added four video retrieval models: CLIP4Clip, DRL, Frozen in Time, MDMMT.
v0.6.1 May. 13, 2022
- Added three text-image retrieval models: CLIP, BLIP, LightningDOT.
- Added six video understanding/classification models from PyTorchVideo: I3D, C2D, Slow, SlowFast, X3D, MViT.
Towhee requires Python 3.6+. You can install Towhee via pip:
pip install towhee towhee.models
If you run into any pip-related install problems, please try to upgrade pip with pip install -U pip.
Let's try your first Towhee pipeline. Below is an example of how to create a CLIP-based cross-modal retrieval pipeline in only 15 lines of code.
import towhee

# create image embeddings and build index
(
    towhee.glob['file_name']('./*.png')
        .image_decode['file_name', 'img']()
        .image_text_embedding.clip['img', 'vec'](model_name='clip_vit_base_patch32', modality='image')
        .tensor_normalize['vec', 'vec']()
        .to_faiss[('file_name', 'vec')](findex='./index.bin')
)

# search image by text
results = (
    towhee.dc['text'](['puppy Corgi'])
        .image_text_embedding.clip['text', 'vec'](model_name='clip_vit_base_patch32', modality='text')
        .tensor_normalize['vec', 'vec']()
        .faiss_search['vec', 'results'](findex='./index.bin', k=3)
        .select['text', 'results']()
)
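The normalize-then-search steps above amount to cosine similarity: after L2 normalization, an inner-product search (which FAISS performs efficiently at scale) ranks vectors by cosine similarity. A minimal pure-Python sketch of that math (not Towhee code; the toy vectors stand in for real CLIP embeddings):

```python
import math

def l2_normalize(v):
    # scale a vector to unit length
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

# toy "embeddings" standing in for CLIP image vectors
index = {
    "corgi.png": l2_normalize([0.9, 0.1, 0.2]),
    "cat.png": l2_normalize([0.1, 0.8, 0.3]),
}
query = l2_normalize([0.8, 0.2, 0.1])  # stand-in for the "puppy Corgi" text vector

# on unit vectors, inner product == cosine similarity
best = max(index, key=lambda name: inner_product(query, index[name]))
print(best)  # corgi.png
```

This is why the pipeline normalizes both the image vectors before indexing and the text vector before searching: it makes the FAISS inner-product ranking equivalent to cosine similarity.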
Learn more examples from the Towhee Bootcamp.
Towhee is composed of four main building blocks: Operators, Pipelines, the DataCollection API, and the Engine.
- Operators: An operator is a single building block of a neural data processing pipeline. Different implementations of operators are categorized by tasks, with each task having a standard interface. An operator can be a deep learning model, a data processing method, or a Python function.
- Pipelines: A pipeline is composed of several operators interconnected in the form of a DAG (directed acyclic graph). This DAG can direct complex functionalities, such as embedding feature extraction, data tagging, and cross-modal data analysis.
- DataCollection API: A Pythonic, method-chaining-style API for building custom pipelines. A pipeline defined with the DataCollection API can be run locally on a laptop for fast prototyping and then converted to a Docker image, with end-to-end optimizations, for production-ready environments.
- Engine: The engine sits at Towhee's core. Given a pipeline, the engine drives dataflow among individual operators, schedules tasks, and monitors compute resource usage (CPU/GPU/etc.). We provide a basic engine within Towhee to run pipelines on a single-instance machine, and a Triton-based engine for Docker containers.
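To make the operator/pipeline/engine split concrete, here is a toy sketch in plain Python (illustrative only, not Towhee's real classes): operators are plain callables with a uniform one-in, one-out interface, a pipeline is an ordered wiring of them, and the engine's basic job is to push data through that wiring.

```python
# Toy illustration of the operator/pipeline/engine concepts (not Towhee internals).

def decode(path):             # stand-in for an image-decode operator
    return f"pixels({path})"

def embed(img):               # stand-in for an embedding-model operator
    return [float(len(img))]  # fake one-dimensional feature vector

def normalize(vec):           # stand-in for a tensor-normalize operator
    total = sum(abs(x) for x in vec) or 1.0
    return [x / total for x in vec]

class Pipeline:
    def __init__(self, *operators):
        self.operators = operators

    def __call__(self, data):
        # the engine's role: drive data through each operator in order
        for op in self.operators:
            data = op(data)
        return data

pipe = Pipeline(decode, embed, normalize)
print(pipe("a.png"))  # [1.0]
```

A real Towhee pipeline generalizes this linear chain to a DAG and lets the engine handle scheduling and resource monitoring, but the uniform operator interface is what makes the blocks interchangeable.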
Writing code is not the only way to contribute! Submitting issues, answering questions, and improving documentation are just some of the many ways you can help our growing community. Check out our contributing page for more information.
Special thanks go to these folks for contributing to Towhee, either on GitHub, our Towhee Hub, or elsewhere:
Looking for a database to store and index your embedding vectors? Check out Milvus.