TritiumR
A student majoring in CS at Peking University. I'm interested in 3D computer vision.
Peking University, Turing Class. Beijing, China
TritiumR's Stars
luhr2003/UniGarmentManip
This repository contains the code for the paper "UniGarmentManip: A Unified Framework for Category-Level Garment Manipulation via Dense Visual Correspondence"
video2game/video2game
Code release of Video2Game
thu-ml/CRM
Single Image to 3D Textured Mesh in 10 seconds with Convolutional Reconstruction Model.
sectionZ6/UniDoorManip
This is the official repository of UniDoorManip: Learning Universal Door Manipulation Policy Over Large-scale and Diverse Door Manipulation Environments.
geng-haoran/Simulately
A universal summary of current robotics simulators
pku-minic/online-doc
PKU compiler course online documentation.
chengkaiAcademyCity/EnvAwareAfford
Official repository of the NeurIPS 2023 paper "Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusions"
Genesis-Embodied-AI/RoboGen
A generative and self-guided robotic agent that endlessly proposes and masters new skills.
Red-Fairy/ZeroShotDayNightDA
[ICCV 2023 oral] Official repository of the paper "Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation"
fool2fish/dragon-book-exercise-answers
Compilers: Principles, Techniques, & Tools (purple dragon book), second edition exercise answers.
One-2-3-45/One-2-3-45
[NeurIPS 2023] Official code of "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization"
TritiumR/DeformableAffordance
The official implementation of the paper "Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation". [ICCV 2023]
daerduoCarey/partnet_dataset
PartNet Dataset Official Release Repo
XingangPan/DragGAN
Official Code for DragGAN (SIGGRAPH 2023)
tianfr/MonoNeRF
[ICCV 2023] This is the official implementation of our paper "MonoNeRF: Learning a Generalizable Dynamic Radiance Field from Monocular Videos".
Angtian/NeMo
The official implementation of NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation [ICLR-2021]. https://arxiv.org/pdf/2101.12378.pdf
canonical-capsules/canonical-capsules
Canonical Capsules: Self-Supervised Capsules in Canonical Pose (NeurIPS 2021)
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Julie-tang00/Point-BERT
[CVPR 2022] Pre-Training 3D Point Cloud Transformers with Masked Point Modeling
wangyian-me/AdaAffordCode
remzi-arpacidusseau/ostep-homework
zhenyu-zang/xv6-riscv-book-Chinese
daerduoCarey/where2act
Where2Act: From Pixels to Actions for Articulated 3D Objects
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet for a given image.
Xingyu-Lin/softgym
SoftGym is a set of benchmark environments for deformable object manipulation.
warshallrho/VAT-Mart
Code for our ICLR 2022 paper "VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects"
ZhenbangYou/University-Application--Computer-Science-Graduates-
huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.