English | 简体中文
🥳 🚀 Welcome to OpenMMLab Playground, an open-source initiative dedicated to gathering and showcasing amazing projects built with OpenMMLab. Our goal is to provide a central hub for the community to share their innovative solutions and explore the cutting edge of AI technologies.
🥳 🚀 OpenMMLab builds the most influential open-source computer vision algorithm system in the deep learning era, which provides high-performance and out-of-the-box algorithms for detection, segmentation, classification, pose estimation, video understanding, and AIGC. We believe that equipped with OpenMMLab, everyone can build exciting AI-empowered applications and push the limits of what's possible. All you need is a touch of creativity and a willingness to take action.
🥳 🚀 Join the OpenMMLab Playground now and enjoy the power of AI!
Demo | Description
--- | ---
MMDet-SAM | Explore a new way of instance segmentation by combining SAM (Segment Anything Model) with Closed-Set Object Detection, Open-Vocabulary Object Detection, and Grounding Object Detection.
MMRotate-SAM | Join SAM and weakly supervised horizontal box detection to achieve rotated box detection, and say goodbye to the tedious task of annotating rotated boxes from now on!
Open-Pose-Detection | Integrate open object detection and various pose estimation algorithms to achieve "Pose All Things" - the ability to estimate the pose of anything and everything!
Open-Tracking | Track and segment open categories in videos by marrying open object detection and MOT.
MMOCR-SAM | A Text Detection/Recognition + SAM solution that segments every text character, with striking text removal and text inpainting demos driven by diffusion models and Gradio!
We provide a set of applications based on MMDet and SAM. The features include:
- Support all closed-set detection models included in MMDet, such as Faster R-CNN and DINO, by using SAM for automatic detection and instance segmentation annotation.
- Support Open-Vocabulary detection models, such as Detic, by using SAM for automatic detection and instance segmentation annotation.
- Support Grounding Object Detection models, such as Grounding DINO and GLIP, by using SAM for automatic detection and instance segmentation annotation.
- All models support distributed detection and segmentation evaluation, and automatic COCO JSON export, making it easy for users to evaluate custom data.
Please see README for more information.
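All three modes share the same box-to-mask flow: a detector proposes boxes, and SAM converts each box into an instance mask. Below is a minimal sketch of that flow, assuming MMDet 3.x's `init_detector`/`inference_detector` API and the `segment_anything` package; the config, checkpoint, and image paths are placeholders, not the project's actual files.

```python
# Minimal sketch of the shared box-to-mask flow: an MMDet detector proposes boxes,
# and SAM converts each box into an instance mask.
# Assumes MMDet 3.x (DetDataSample outputs) and the segment_anything package;
# config, checkpoint, and image paths are placeholders.
import cv2
from mmdet.apis import init_detector, inference_detector
from segment_anything import sam_model_registry, SamPredictor

IMG = 'demo.jpg'

# 1) Closed-set detector (e.g. Faster R-CNN); any MMDet config/checkpoint works.
detector = init_detector('faster_rcnn_config.py', 'faster_rcnn.pth', device='cuda:0')
instances = inference_detector(detector, IMG).pred_instances
boxes = instances.bboxes[instances.scores > 0.5].cpu().numpy()  # (N, 4) xyxy

# 2) SAM turns each detected box into an instance mask.
sam = sam_model_registry['vit_h'](checkpoint='sam_vit_h.pth').to('cuda:0')
predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread(IMG), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

masks = []
for box in boxes:
    mask, _, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])  # (H, W) boolean mask for this detection
```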
We provide a set of applications based on MMRotate and SAM. The features include:
- Support Zero-shot Oriented Object Detection with SAM.
- Perform SAM-based Zero-shot Oriented Object Detection inference on a single image.
Please see README for more information.
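The core trick is turning the SAM mask obtained from a horizontal-box prompt into a rotated box via a minimum-area rectangle. The sketch below isolates just that conversion (the SAM prompting is the same as in the MMDet-SAM sketch above) and uses a synthetic mask so it runs stand-alone.

```python
# Sketch of the mask-to-rotated-box step: a horizontal box prompts SAM (as in the
# MMDet-SAM sketch above), and the minimum-area rectangle of the resulting mask
# becomes the rotated box. A synthetic mask stands in for a SAM output here.
import cv2
import numpy as np

def mask_to_rbox(mask):
    """mask: (H, W) boolean array from SAM -> (cx, cy, w, h, angle_deg)."""
    ys, xs = np.nonzero(mask)
    points = np.stack([xs, ys], axis=1).astype(np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)  # minimum-area rotated rect
    return cx, cy, w, h, angle

# Synthetic rectangular mask as a stand-in for a SAM output.
demo_mask = np.zeros((200, 200), dtype=bool)
demo_mask[60:120, 40:160] = True
print(mask_to_rbox(demo_mask))
```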
We provide a set of applications based on MMPose and open detection. The features include:
- Support open detection and pose estimation model inference for a single image or a folder of images.
- Will soon support inputting different text prompts to achieve pose detection for different object categories in an image.
Please see README for more information.
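Conceptually, a text-prompted open detector supplies the boxes and a top-down pose model runs on each box. A rough sketch follows, assuming MMPose 1.x's `init_model`/`inference_topdown` API; `detect_with_prompt` is a hypothetical placeholder for Grounding DINO / GLIP / Detic inference, and the config/checkpoint paths are placeholders.

```python
# Rough sketch of "detect anything, then estimate its pose": a text-prompted open
# detector supplies boxes, and a top-down pose model runs on each box.
# Assumes MMPose 1.x's init_model / inference_topdown API; detect_with_prompt is a
# hypothetical placeholder, and the config/checkpoint paths are placeholders.
import numpy as np
from mmpose.apis import init_model, inference_topdown

def detect_with_prompt(img_path, text_prompt):
    """Placeholder: return (N, 4) xyxy boxes for objects matching text_prompt."""
    return np.array([[100.0, 60.0, 300.0, 420.0]])

IMG = 'demo.jpg'
bboxes = detect_with_prompt(IMG, text_prompt='person')

pose_model = init_model('rtmpose_config.py', 'rtmpose.pth', device='cuda:0')
pose_samples = inference_topdown(pose_model, IMG, bboxes=bboxes, bbox_format='xyxy')
for sample in pose_samples:
    keypoints = sample.pred_instances.keypoints  # per-instance (K, 2) keypoint array
```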
We provide an approach that combines open object detection with motion information (a Kalman filter) for multi-object tracking.
Please see README for more information.
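The sketch below illustrates the motion-information part in isolation: a constant-velocity Kalman filter over the box state, predicted each frame and updated with the matched detection, as in SORT-style trackers. It is a plain-NumPy illustration of the idea, not the project's actual tracker.

```python
# Minimal constant-velocity Kalman filter sketch for box tracking: predict each
# track forward, then update it with the matched detection. Pure NumPy; a stand-in
# for the idea, not the project's tracker.
import numpy as np

class BoxKalman:
    """State: [cx, cy, w, h, vx, vy, vw, vh]; measurement: [cx, cy, w, h]."""

    def __init__(self, box):
        self.x = np.zeros(8)
        self.x[:4] = box
        self.P = np.eye(8) * 10.0                           # state covariance
        self.F = np.eye(8)                                  # constant-velocity transition
        self.F[:4, 4:] = np.eye(4)
        self.H = np.hstack([np.eye(4), np.zeros((4, 4))])   # observe position/size only
        self.Q = np.eye(8) * 0.01                           # process noise
        self.R = np.eye(4) * 1.0                            # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, box):
        y = box - self.H @ self.x                           # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(8) - K @ self.H) @ self.P

# Per frame: predict every track, match predictions to fresh detections (e.g. by IoU),
# then update each matched track with its detection.
track = BoxKalman(np.array([110.0, 95.0, 40.0, 80.0]))
predicted = track.predict()
track.update(np.array([114.0, 97.0, 41.0, 79.0]))
```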
This project is migrated from OCR-SAM, which combines MMOCR with Segment Anything. We provide a set of applications based on MMOCR and SAM. The features include:
- Support End-to-End Text Detection and Recognition, with the ability to segment every text character.
- Striking text removal and text inpainting WebUI demos driven by diffusion models and Gradio.
Please see README for more information.
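As an illustration of the inpainting stage, the sketch below hands a precomputed SAM text mask to a diffusion inpainting pipeline through the `diffusers` library; the model ID, prompt, and file paths are assumptions for the sketch, not the project's exact configuration.

```python
# Sketch of the text-removal / inpainting stage: a SAM-produced text mask (assumed
# precomputed and saved to disk here) is handed to a diffusion inpainting pipeline.
# Uses diffusers' StableDiffusionInpaintPipeline; the model ID, prompt, and file
# paths are assumptions, not the project's exact configuration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

image = Image.open('sign.jpg').convert('RGB').resize((512, 512))
text_mask = Image.open('text_mask.png').convert('L').resize((512, 512))  # white = repaint

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    'runwayml/stable-diffusion-inpainting', torch_dtype=torch.float16).to('cuda')

# Text removal: repaint the masked text pixels with background-like content.
result = pipe(prompt='clean background, no text',
              image=image, mask_image=text_mask).images[0]
result.save('text_removed.jpg')
```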