Yang7879's Stars
QingyongHu/RandLA-Net
🔥RandLA-Net in TensorFlow (CVPR 2020 Oral & IEEE TPAMI 2021)
QingyongHu/SensatUrban
🔥Urban-scale point cloud dataset (CVPR 2021 & IJCV 2022)
lin-shuyu/VAE-LSTM-for-anomaly-detection
We propose a VAE-LSTM model as an unsupervised learning approach for anomaly detection in time series.
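The VAE-LSTM idea can be sketched schematically: slice the series into windows, compress each window to a low-dimensional embedding, predict each embedding from its predecessor, and score anomalies by prediction error. This is only a minimal NumPy illustration of that pipeline, not the repository's implementation: a fixed random projection stands in for the trained VAE encoder, and a least-squares linear predictor stands in for the LSTM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy time series with an injected anomalous segment.
t = np.arange(600)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
series[400:420] += 3.0  # anomaly

# 1) Slice the series into fixed-length, non-overlapping windows.
win = 20
windows = np.stack([series[i:i + win] for i in range(0, series.size - win, win)])

# 2) "VAE encoder" stand-in: a fixed random projection mapping each
#    window to a 4-dimensional embedding.
W = rng.normal(scale=1 / np.sqrt(win), size=(win, 4))
emb = windows @ W

# 3) "LSTM" stand-in: predict each embedding from the previous one
#    with a linear map fitted by least squares.
A, *_ = np.linalg.lstsq(emb[:-1], emb[1:], rcond=None)
pred = emb[:-1] @ A

# 4) Anomaly score: per-window prediction error; the windows covering
#    the injected segment should score highest.
score = np.linalg.norm(emb[1:] - pred, axis=1)
print(int(np.argmax(score)))
```

The score peaks at the windows adjacent to the injected shift; in the real model, the encoder and predictor are trained jointly on normal data so that unseen dynamics produce large errors.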
alextrevithick/GRF
🔥General Radiance Field (ICCV 2021)
QingyongHu/SpinNet
[CVPR 2021] SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration
vLAR-group/DM-NeRF
🔥DM-NeRF in PyTorch (ICLR 2023)
vLAR-group/GrowSP
🔥GrowSP in PyTorch (CVPR 2023)
BingCS/AtLoc
AtLoc: Attention Guided Camera Localization
QingyongHu/SQN
SQN in TensorFlow (ECCV 2022)
vLAR-group/OGC
🔥OGC in PyTorch (NeurIPS 2022)
vLAR-group/RangeUDF
🆕RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds
lin-shuyu/ladder-latent-data-distribution-modelling
In this paper, we show that the performance of a learnt generative model is closely tied to how accurately it represents the inferred latent data distribution, i.e. its topology and structural properties. We propose LaDDer to model the latent data distribution accurately within a variational autoencoder framework and thereby facilitate better representation learning. The central idea of LaDDer is a meta-embedding concept: multiple VAE models learn an embedding of the embeddings, forming a ladder of encodings. We use a non-parametric mixture as the hyper-prior for the innermost VAE and learn all parameters in a unified variational framework. Extensive experiments show that LaDDer accurately estimates complex latent distributions and improves representation quality.
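The ladder-of-encodings idea described above can be illustrated with a minimal sketch: one encoder maps data to a latent code, a second encoder maps those codes to an inner latent (the "embedding of embeddings"), and a mixture density acts as the hyper-prior on the innermost latent. This is an assumption-laden toy in NumPy, with random linear maps standing in for trained VAE encoders and hypothetical mixture centres, not the LaDDer implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_encoder(in_dim, out_dim, rng):
    # Random projection standing in for a trained VAE encoder mean.
    W = rng.normal(scale=1.0 / np.sqrt(in_dim), size=(in_dim, out_dim))
    return lambda x: x @ W

# Ladder of encodings: the inner VAE embeds the outer VAE's latents.
x = rng.normal(size=(8, 32))        # 8 samples of 32-dim data
enc1 = linear_encoder(32, 16, rng)  # outer VAE: data -> z1
enc2 = linear_encoder(16, 4, rng)   # inner VAE: z1 -> z2

z1 = enc1(x)
z2 = enc2(z1)

# Non-parametric mixture hyper-prior over the innermost latent z2:
# log-density of a uniform mixture of isotropic Gaussians.
centres = rng.normal(size=(3, 4))   # 3 hypothetical mixture components
def mixture_log_prob(z, centres, sigma=1.0):
    d2 = ((z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    comp = -0.5 * d2 / sigma**2 - 0.5 * z.shape[1] * np.log(2 * np.pi * sigma**2)
    return np.log(np.exp(comp).mean(axis=1))

print(z1.shape, z2.shape)  # (8, 16) (8, 4)
```

In the paper's framework the encoders are stochastic and all parameters, including the hyper-prior, are learnt jointly under one variational objective; here the pieces are frozen purely to show how they compose.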
vLAR-group/UnsupObjSeg
🔥Benchmarking Unsupervised Object Segmentation (NeurIPS 2022)