Pinned Repositories
AB_distillation
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)
AdamP
Slowing Down the Weight Norm Increase in Momentum-based Optimizers
attention-feature-distillation
Official implementation of Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching (AAAI 2021)
BSS_distillation
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
HGCAE
PyTorch implementation of HGCAE, accepted to CVPR 2021.
Knowledge_distillation_methods_wtih_Tensorflow
Knowledge distillation methods implemented with TensorFlow (currently 8 methods; more will be added)
mxfont
Official PyTorch implementation of MX-Font (Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts)
pit
relabel_imagenet
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
bhheo's Repositories
bhheo/AB_distillation
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons (AAAI 2019)
bhheo/BSS_distillation
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019)
bhheo/Knowledge_distillation_methods_wtih_Tensorflow
Knowledge distillation methods implemented with TensorFlow (currently 8 methods; more will be added)
bhheo/attention-feature-distillation
Official implementation of Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching (AAAI 2021)
bhheo/HGCAE
PyTorch implementation of HGCAE, accepted to CVPR 2021.
bhheo/mxfont
Official PyTorch implementation of MX-Font (Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts)
bhheo/pit
bhheo/relabel_imagenet
bhheo/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
bhheo/AdamP
Slowing Down the Weight Norm Increase in Momentum-based Optimizers
bhheo/awesome-knowledge-distillation
Awesome Knowledge Distillation
bhheo/bhheo.github.io
bhheo/ClovaCall
ClovaCall dataset and PyTorch LAS baseline code
bhheo/deit
Official DeiT repository
bhheo/dmfont
Official PyTorch implementation of DM-Font (ECCV 2020)
bhheo/gluon-cv
Gluon CV Toolkit
bhheo/light-weight-refinenet
Light-Weight RefineNet for Real-Time Semantic Segmentation
bhheo/mae
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
bhheo/MAE-pytorch
Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners
bhheo/overhaul
bhheo/panoptic-fpn-gluon
Panoptic Feature Pyramid Networks
bhheo/PfLayer
Learning Features with Parameter-Free Layers, ICLR 2022
bhheo/pytorch-deeplab-xception
DeepLab v3+ model in PyTorch. Supports different backbones.
bhheo/pytorch-image-models
PyTorch image models, scripts, pretrained weights -- (SE)ResNet/ResNeXT, DPN, EfficientNet, MixNet, MobileNet-V3/V2, MNASNet, Single-Path NAS, FBNet, and more
bhheo/rebias
Official PyTorch implementation of ReBias (Learning De-biased Representations with Biased Representations), ICML 2020
bhheo/rexnet
Official PyTorch implementation of ReXNet (Rank eXpansion Network) with pretrained models
bhheo/sam
SAM: Sharpness-Aware Minimization (PyTorch)
bhheo/TResNet
TResNet: High Performance GPU-Dedicated Architecture
bhheo/vidt
bhheo/wsolevaluation
Evaluating Weakly Supervised Object Localization Methods Right