
Instill Model


⚗️ Instill Model manages the AI model-related resources and features working with Instill VDP.

☁️ Instill Cloud offers a fully managed public cloud service, providing you with access to all the fantastic features of unstructured data ETL without the burden of infrastructure management.

Highlights

  • ⚡️ High-performing inference implemented in Go with Triton Inference Server, unleashing the full power of NVIDIA GPU architecture (e.g., concurrent model execution, scheduling, dynamic batching) and supporting TensorRT, PyTorch, TensorFlow, ONNX, Python and more.

  • 🖱️ One-click model deployment from GitHub, Hugging Face or cloud storage managed by version control tools like DVC or ArtiVC.

  • 📦 Standardised AI Task output formats to streamline data integration or analysis (see the sketch below).
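
To make the last point concrete, the sketch below triggers a hypothetical object-detection model through the API gateway and shows the rough shape of a standardised detection result. The route, port, model name, and payload fields are assumptions for illustration only; consult the API reference for your release for the exact contract.

# Hypothetical sketch: trigger a deployed object-detection model (route, port
# and model name are placeholders).
$ curl -X POST http://localhost:8080/model/v1alpha/models/yolov7/trigger \
    -H "Content-Type: application/json" \
    -d '{"task_inputs": [{"detection": {"image_url": "https://example.com/dog.jpg"}}]}'

# A standardised TASK_DETECTION response is shaped roughly like:
# {
#   "task": "TASK_DETECTION",
#   "task_outputs": [{
#     "detection": {
#       "objects": [{
#         "category": "dog",
#         "score": 0.98,
#         "bounding_box": {"top": 102, "left": 324, "width": 208, "height": 405}
#       }]
#     }
#   }]
# }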

Prerequisites

  • macOS or Linux - VDP works on macOS or Linux, but does not support Windows yet.

  • Docker and Docker Compose - VDP uses Docker Compose (specifically, Compose V2 and the Compose specification) to run all services locally. Please install the latest stable Docker and Docker Compose before using VDP.

  • yq (> v4.x) - Please follow the installation guide.

  • (Optional) NVIDIA Container Toolkit - To enable GPU support in VDP, please refer to the NVIDIA Cloud Native Documentation to install the NVIDIA Container Toolkit. If you'd like to allot specific GPUs to VDP, set the environment variable NVIDIA_VISIBLE_DEVICES. For example, NVIDIA_VISIBLE_DEVICES=0,1 makes the triton-server consume GPU devices 0 and 1 only. By default, NVIDIA_VISIBLE_DEVICES is set to all so that all available GPUs on the machine are used (see the example below).
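
As a quick sanity check of these prerequisites, and as an illustration of pinning GPUs, something like the following should work; the device ids are placeholders, so adjust them to your machine.

# Verify Docker, Docker Compose V2 and yq are available
$ docker --version && docker compose version
$ yq --version

# (Optional) expose only GPU devices 0 and 1 to the triton-server before
# launching the stack (see Quick start below)
$ export NVIDIA_VISIBLE_DEVICES=0,1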

Quick start

Note Code in the main branch tracks under-development progress towards the next release and may not work as expected. If you are looking for a stable alpha version, please use the latest release.

Note The images of model-backend (~2GB) and Triton Inference Server (~23GB) can take a while to pull, but this should be a one-time effort at the first setup.

Execute the following commands to start pre-built images with all the dependencies:

The stable release version

$ git clone -b v0.6.1-alpha https://github.com/instill-ai/model.git && cd model

# Launch all services
$ make all

The latest version for development

$ git clone https://github.com/instill-ai/model.git && cd model

# Launch all services
$ make latest PROFILE=all

🚀 That's it! Once all the services are up with health status, the UI is ready to go at http://localhost:3000. Please find the default login credentials in the documentation.
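
If you prefer to confirm readiness from the terminal before opening the Console, a quick check like the one below should work, assuming you launched the stack with the commands above:

# List the services and their health status
$ docker compose ps

# The Console should respond once everything is up
$ curl -I http://localhost:3000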

To shut down all running services:

$ make down

Explore the documentation to discover all available deployment options.

Officially supported models

We curate a list of ready-to-use models. These models come from different sources and have been trained and deployed by our team. Want to contribute a new model? Please create an issue, and we will be happy to add it to the list 👐.

| Model | Task | Sources | Framework |
| --- | --- | --- | --- |
| MobileNet v2 | Image Classification | GitHub-DVC | ONNX |
| Vision Transformer (ViT) | Image Classification | Hugging Face | ONNX |
| YOLOv4 | Object Detection | GitHub-DVC | ONNX |
| YOLOv7 | Object Detection | GitHub-DVC | ONNX |
| YOLOv7 W6 Pose | Keypoint Detection | GitHub-DVC | ONNX |
| PSNet + EasyOCR | Optical Character Recognition (OCR) | GitHub-DVC | ONNX |
| Mask RCNN | Instance Segmentation | GitHub-DVC | PyTorch |
| Lite R-ASPP based on MobileNetV3 | Semantic Segmentation | GitHub-DVC | ONNX |
| Stable Diffusion | Text to Image | GitHub-DVC, Local-CPU, Local-GPU | ONNX |
| Stable Diffusion XL | Text to Image | GitHub-DVC | PyTorch |
| Control Net - Canny | Image to Image | GitHub-DVC | PyTorch |
| Megatron GPT2 | Text Generation | GitHub-DVC | FasterTransformer |
| Llama2 | Text Generation | GitHub-DVC | vLLM, PyTorch |
| Code Llama | Text Generation | GitHub-DVC | vLLM |
| Llama2 Chat | Text Generation Chat | GitHub-DVC | vLLM |
| MosaicML MPT | Text Generation Chat | GitHub-DVC | vLLM |
| Mistral | Text Generation Chat | GitHub-DVC | vLLM |
| Zephyr-7b | Text Generation Chat | GitHub-DVC | PyTorch |
| Llava | Visual Question Answering | GitHub-DVC | PyTorch |

Note: The GitHub-DVC source in the table means importing a model into VDP from a GitHub repository that uses DVC to manage large files.
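
For reference, such a repository is typically prepared with DVC roughly as follows. The file names and the remote below are placeholders; the exact layout expected for model import is described in the documentation.

# Track a large model file with DVC inside a Git repository (placeholder names)
$ git init my-model && cd my-model
$ dvc init
$ dvc remote add -d storage s3://my-bucket/dvc-store
$ dvc add model.onnx
$ git add .dvc/config model.onnx.dvc .gitignore
$ git commit -m "Track model weights with DVC"
$ dvc push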

The Unstructured Data ETL Stack

Explore the open-source unstructured data ETL stack, comprising a collection of source-available projects designed to streamline every aspect of building versatile AI features with unstructured data.


Open Source Unstructured Data ETL Stack

🔮 Instill Core: The foundation for unstructured data ETL stack

Instill Core, or Core, serves as the bedrock upon which the open-source unstructured data stack thrives. Essential services such as user management servers, databases, and third-party observability tools find their home here. Instill Core also provides the deployment code to facilitate the seamless launch of both Instill VDP and Instill Model.

💧 Instill VDP: AI pipeline builder for unstructured data

Instill VDP, or VDP (Versatile Data Pipeline), is a comprehensive unstructured data ETL solution. Its purpose is to simplify the journey of processing unstructured data from start to finish:

  • Extract: Gather unstructured data from diverse sources, including AI applications, cloud/on-prem storage, and IoT devices.
  • Transform: Utilize AI models to convert raw data into meaningful insights and actionable formats.
  • Load: Efficiently move processed data to warehouses, applications, or other destinations.

Embracing VDP is straightforward, whether you opt for Instill Cloud deployment or self-hosting via Instill Core.

⚗️ Instill Model: Scalable AI model serving and training

Instill Model, or simply Model, emerges as an advanced ModelOps platform. Here, the focus is on empowering you to seamlessly import, train, and serve Machine Learning (ML) models for inference purposes. Like other projects, Instill Model's source code is available for your exploration.

No-Code/Low-Code Access

To access Instill Core and Instill Cloud, we provide:

  • ⛅️ Console for non-developers, empowering them to dive into AI applications and process unstructured data without any coding.
  • 🧰 CLI and SDKs for developers to seamlessly integrate with their existing data stack in minutes.

Documentation

Please check out the documentation website.

Contributing

Please refer to the Contributing Guidelines for more details.

Be Part of Us

We strongly believe in the power of community collaboration and deeply value your contributions. Head over to our Community repository, the central hub for discussing our open-source projects, raising issues, and sharing your brilliant ideas.

License

See the LICENSE file for licensing information.