Pinned Repositories
4D-Humans
4DHumans: Reconstructing and Tracking Humans with Transformers
6DRepNet
Official PyTorch implementation of 6DRepNet: 6D rotation representation for unconstrained head pose estimation.
ABINet
Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition
AdvancedLiterateMachinery
A collection of innovative ideas and algorithms towards Advanced Literate Machinery. This project is maintained by the OCR Team in the Language Technology Lab, Alibaba DAMO Academy.
alpaca_chinese_dataset
A manually curated Chinese dialogue dataset, with fine-tuning code for ChatGLM
amass
Data preparation and loader for AMASS
AniTalker
[ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding"
ansj_seg
ansj word segmentation: a true Java implementation of ICTCLAS, with segmentation accuracy and speed surpassing the open-source ICTCLAS. Chinese word segmentation, person-name recognition, part-of-speech tagging, and user-defined dictionaries
EVA
EVA Series: Vision Foundation Model Fanatics from BAAI
mmdetection
Open MMLab Detection Toolbox with PyTorch 1.0
qinb's Repositories
qinb/6DRepNet
Official PyTorch implementation of 6DRepNet: 6D rotation representation for unconstrained head pose estimation.
qinb/ABINet
Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition
qinb/AdvancedLiterateMachinery
A collection of innovative ideas and algorithms towards Advanced Literate Machinery. This project is maintained by the OCR Team in the Language Technology Lab, Alibaba DAMO Academy.
qinb/amass
Data preparation and loader for AMASS
qinb/bstro
The official code for BSTRO in paper: Capturing and Inferring Dense Full-Body Human-Scene Contact, CVPR2022
qinb/CLIFF
This repo equips the official CLIFF [ECCV 2022 Oral] with a better detector and tracker. It supports multi-person capture, motion interpolation, motion smoothing, and SMPLify fitting.
qinb/COAP
[CVPR'22] COAP: Learning Compositional Occupancy of People
qinb/edgaze
This is the official release for paper "Real-Time Gaze Tracking with Event-Driven Eye Segmentation"
qinb/EmoTalk
This is the official repository for EmoTalk: Speech-driven emotional disentanglement for 3D face animation
qinb/FaceXHuBERT
qinb/GTRS
The project is an official implementation of our paper "A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose".
qinb/HANet
Kinematic-aware Hierarchical Attention Network for Human Pose Estimation in Videos (WACV 2023)
qinb/head-pose-estimation
Head pose estimation with TensorFlow and OpenCV
qinb/INT_HMR_Model
Capturing the Motion of Every Joint: 3D Human Pose and Shape Estimation with Independent Tokens. ICLR2023 (spotlight)
qinb/LVD
Code for the paper Learned Vertex Descent: A New Direction for 3D Human Model Fitting (ECCV 2022)
qinb/mmpose
OpenMMLab Pose Estimation Toolbox and Benchmark.
qinb/mmrazor
OpenMMLab Model Compression Toolbox and Benchmark.
qinb/MotionBERT
PyTorch Implementation of "Learning Human Motion Representations: A Unified Perspective"
qinb/movingcam
qinb/noah-research
Noah Research
qinb/Painter
[CVPR 2023] A Generalist Painter for In-Context Visual Learning (https://arxiv.org/abs/2212.02499)
qinb/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
qinb/scene-aware-3d-multi-human
Source code of the paper Scene-Aware 3D Multi-Human Motion Capture, EUROGRAPHICS 2023
qinb/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
qinb/shapy
CVPR 2022 - Official code repository for the paper: Accurate 3D Body Shape Regression using Metric and Semantic Attributes.
qinb/SmoothNet
This is an official implementation for "SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos" (ECCV 2022)
qinb/StyleHEAT
[ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation
qinb/SynergyNet
3DV 2021: Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry
qinb/vit-pytorch
Implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
qinb/wiki
A Wiki on Body-Modelling Technology, maintained by Meshcapade GmbH.