Pinned Repositories
attention-is-all-you-need-pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
AttentiveNAS
Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling"
Awesome-Monocular-3D-detection
Awesome Monocular 3D detection
awesome-NeRF
A curated list of awesome neural radiance fields papers
Beta-DARTS
Official implementation of β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search (CVPR 2022 oral).
BEVDet
Official code base of the BEVDet series.
BEVFormer
This is the official implementation of BEVFormer, a camera-only framework for autonomous driving perception, e.g., 3D object detection and semantic map segmentation.
CVPR2022-NAS-competition-Track-2-7th-solution
7th-place solution on Leaderboard B of the CVPR 2022 NAS competition, Track 2
nar
Code for Neural Architecture Ranker and detailed cell-information datasets based on the NAS-Bench series
Semantic-DARTS
AlbertiPot's Repositories
AlbertiPot/Awesome-Monocular-3D-detection
Awesome Monocular 3D detection
AlbertiPot/BEVDet
Official code base of the BEVDet series.
AlbertiPot/Semantic-DARTS
AlbertiPot/corenet
CoreNet: A library for training deep neural networks
AlbertiPot/darts
Differentiable architecture search for convolutional and recurrent networks
AlbertiPot/darts-pt
[ICLR 2021 Outstanding Paper] Rethinking Architecture Selection in Differentiable NAS
AlbertiPot/google-research
Google Research
AlbertiPot/Hardware-Aware-Automated-Machine-Learning
AlbertiPot/hiplot
HiPlot makes understanding high dimensional data easy
AlbertiPot/llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
AlbertiPot/M3ViT
[NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
AlbertiPot/mmdetection
OpenMMLab Detection Toolbox and Benchmark
AlbertiPot/mmdetection3d
OpenMMLab's next-generation platform for general 3D object detection.
AlbertiPot/MoE-Adapters4CL
Code for paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" CVPR2024
AlbertiPot/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
AlbertiPot/nerf-pytorch
A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
AlbertiPot/nerfstudio
A collaboration-friendly studio for NeRFs
AlbertiPot/OpenMoE
A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
AlbertiPot/PanopticNeRF
[3DV'22] Panoptic NeRF: 3D-to-2D Label Transfer for Panoptic Urban Scene Segmentation
AlbertiPot/Qwen
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
AlbertiPot/Qwen2.5-Coder
Qwen2.5-Coder is the code version of Qwen2.5, the large language model series developed by Qwen team, Alibaba Cloud.
AlbertiPot/SMOKE
SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation
AlbertiPot/stable-diffusion
A latent text-to-image diffusion model
AlbertiPot/TinyGPT-V
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones
AlbertiPot/TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
AlbertiPot/ttt-lm-pytorch
Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
AlbertiPot/unnas
Code for "Are Labels Necessary for Neural Architecture Search?"
AlbertiPot/VTs-Drloc
NeurIPS 2021. Official code for "Efficient Training of Visual Transformers with Small Datasets".
AlbertiPot/zjuthesis
Zhejiang University Graduation Thesis LaTeX Template
AlbertiPot/ZSCL
Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models