Xtreme1

Xtreme1 - The Next GEN Platform for Multimodal Training Data. 3D annotation, 3D segmentation, LiDAR-camera fusion annotation, image annotation, and RLHF tools are supported!


Intro

Xtreme1 is the world's first open-source platform for multimodal training data.

Xtreme1 unlocks deep insights into data annotation, curation and ontology management for tackling machine learning challenges in computer vision and LLMs. The platform's AI-fueled tools elevate your annotation game to the next level of efficiency, powering your projects in 2D/3D Object Detection, 3D Instance Segmentation and LiDAR-Camera Fusion like never before.

Today, building upon this initiative, we're delighted to present our AI-powered Cloud platform, completely FREE of charge! This decision marks another important step towards democratizing AI, making AI solutions more accessible to everyone.

Documentation

🎆 Welcome aboard! If you have any questions about features, installation, development, or deployment, you can always refer to our documentation.

📙 Find our docs here!

Find Us

Twitter | Medium | Issues

Key features

Image Annotation (B-box, Segmentation) - YOLOR & RITM
LiDAR-Camera Fusion Annotation - OpenPCDet & AB3DMOT

1️⃣ Supports data labeling for images, 3D LiDAR and 2D/3D Sensor Fusion datasets

2️⃣ Built-in pre-labeling and interactive models support 2D/3D object detection, segmentation and classification

3️⃣ Configurable Ontology Center for general classes (with hierarchies) and attributes for use in your model training

4️⃣ Data management and quality monitoring

5️⃣ Find labeling errors and fix them

6️⃣ Model results visualization to help you evaluate your model

7️⃣ RLHF for Large Language Models 🆕 (beta version)

Image Data Curation (Visualizing & Debug) - MobileNetV3 & openTSNE
RLHF Annotation Tool for LLM (beta version)

Quick start

Download package

Download the latest release package, unzip it, and change into the extracted directory.

wget https://github.com/xtreme1-io/xtreme1/releases/download/v0.8.1/xtreme1-v0.8.1.zip
unzip -d xtreme1-v0.8.1 xtreme1-v0.8.1.zip
cd xtreme1-v0.8.1

Start all services

docker compose up

Visit http://localhost:8190 in the browser (Google Chrome is recommended) to try out Xtreme1!
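
If you prefer to check from the command line that the web service has come up, a simple HTTP request against the same address works (a minimal sketch; the port is the one used in the URL above):

# Expect an HTTP response once the services have finished starting
curl -I http://localhost:8190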

⚠️ Install built-in models

You need to explicitly specify the model profile to enable the built-in model services.

docker compose --profile model up
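
If you want the services to keep running in the background, you can add Compose's standard detached flag (generic Docker Compose usage, not specific to Xtreme1):

docker compose --profile model up -d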

Enable model services

Make sure you have installed the NVIDIA Driver and the NVIDIA Container Toolkit. You do not need to install the CUDA Toolkit, as it is already contained in the model image.

# You need to set "default-runtime" to "nvidia" in /etc/docker/daemon.json and restart Docker to enable the NVIDIA Container Toolkit
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
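
After editing /etc/docker/daemon.json, you can sanity-check the GPU setup with commands like the following (a minimal sketch, assuming the NVIDIA driver and Container Toolkit are installed as described above):

# Restart Docker (on systemd-based hosts) so the new default runtime takes effect
sudo systemctl restart docker

# The NVIDIA driver should list the available GPUs
nvidia-smi

# Docker should now report "nvidia" as its default runtime
docker info --format '{{.DefaultRuntime}}'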

If you use Docker Desktop with WSL 2, please see issue #144 for reference.

License

This software is licensed under the Apache 2.0 LICENSE. Xtreme1 is a trademark of LF AI & Data Foundation.

Xtreme1 is now hosted by the LF AI & Data Foundation as its first open-source data labeling, annotation, and visualization project.

If Xtreme1 is part of your development process / project / publication, please cite us ❤️ :

@misc{Xtreme1,
  title = {Xtreme1 - The Next GEN Platform For Multisensory Training Data},
  author = {LF AI & Data Foundation},
  year = {2023},
  url = {https://xtreme1.io/},
  note = {Software available from https://github.com/xtreme1-io/xtreme1/},
}