Issues
Code: compatible with any number of channels for the patchify and unpatchify functions
#192 opened by zhongruiHuangDMRI - 0
Bug in `random_masking`?
#199 opened by schmidt-ai - 2
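For context on the `random_masking` question above: MAE-style random masking shuffles each sample's patch sequence by per-sample noise and keeps the first `len_keep` patches. A minimal numpy sketch of that logic (the repository's actual implementation uses torch; names here are illustrative):

```python
import numpy as np

def random_masking(x, mask_ratio, rng=None):
    """x: (N, L, D) patch sequence. Returns the kept patches and a binary
    mask (0 = keep, 1 = masked), following the MAE shuffling scheme."""
    rng = rng or np.random.default_rng(0)
    n, l, _ = x.shape
    len_keep = int(l * (1 - mask_ratio))
    noise = rng.random((n, l))                      # per-sample noise in [0, 1)
    ids_shuffle = np.argsort(noise, axis=1)         # ascending-noise ordering
    ids_keep = ids_shuffle[:, :len_keep]            # patches that survive masking
    x_masked = np.take_along_axis(x, ids_keep[:, :, None], axis=1)
    mask = np.ones((n, l))
    mask[np.arange(n)[:, None], ids_keep] = 0
    return x_masked, mask
```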
About the GAN loss
#181 opened by cannonli7 - 0
Is the training procedure result normal? Masked regions do not improve and appear to be random noise.
#190 opened by junzhin - 5
Two different checkpoints for each ViT type
#191 opened by hussein-jafarinia - 0
Reconstruction using normalized pixel values to get unnormalized pixel values?
#197 opened by Aakash3101 - 2
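On the un-normalization question above: when MAE is trained with per-patch normalized pixel targets (`norm_pix_loss`), predictions can be mapped back to pixel space using the mean and variance of the corresponding original patches. A hedged numpy sketch (function and argument names are illustrative, not from the repository):

```python
import numpy as np

def unnormalize_pred(pred, target_patches, eps=1e-6):
    """pred: (N, L, D) predicted normalized pixels; target_patches: the
    patchified original image, used only to recover per-patch statistics."""
    mean = target_patches.mean(axis=-1, keepdims=True)
    var = target_patches.var(axis=-1, keepdims=True)
    return pred * (var + eps) ** 0.5 + mean
```

Note this requires access to the original patches (or their statistics), which is one reason fully self-contained reconstruction from predictions alone is lossy under this loss.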
Loss is considerably worse on custom data set with different mean and standard deviation
#179 opened by bpmsilva - 0
The training code does not run with the latest timm
#196 opened by kevin-Abbring - 0
Can the interactive visualization demo run on a GPU?
#195 opened by HaoqianSong - 2
Colab notebook error
#193 opened by barbara42 - 0
How to obtain the complete reconstructed image?
#194 opened by cestbonsuliu - 2
Could you provide the pretrained checkpoints of both encoder and decoder in MAE?
#188 opened by tangky22 - 0
model.fc_norm is not trained in linear probing
#186 opened by EmreTaha - 4
Not able to import inf from torch._six
#172 opened by tauruswcc - 1
Is it possible to enable FP16 or TF32 in pretraining?
#167 opened by Wongboo - 0
Visualizing the attention map
#187 opened by kimsekeun - 1
Small naming error - masking generation
#173 opened by lilygeorgescu - 1
param_groups_lrd for layer decay
#177 opened by 1119736939 - 1
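On the `param_groups_lrd` question above: layer-wise learning-rate decay assigns each transformer layer a multiplier that shrinks geometrically with distance from the head, so earlier layers are updated more gently. A minimal sketch of the per-layer scale computation (illustrative helper, not the repository's function):

```python
def layer_scales(num_layers, layer_decay=0.75):
    """Return lr multipliers for layer ids 0..num_layers, where id 0 is the
    patch/position embedding and id num_layers is the head (scale 1.0)."""
    return [layer_decay ** (num_layers - i) for i in range(num_layers + 1)]
```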
A question about DropPath in pretraining
#166 opened by YangSun22 - 1
Both LLaMA and MAE use a smaller beta2 in the AdamW optimizer during pre-training. Is there any intuition behind this setting?
#184 opened by Novestars - 2
patchify and unpatchify
#182 opened by tingyushi - 2
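For the patchify/unpatchify questions in this list: the operations are pure reshapes between image tensors and patch sequences. A numpy sketch of the same index logic (MAE's actual code uses torch; this assumes H and W divisible by the patch size and a square patch grid for the inverse):

```python
import numpy as np

def patchify(imgs, p=16):
    """(N, C, H, W) -> (N, L, p*p*C), with L = (H/p) * (W/p)."""
    n, c, h, w = imgs.shape
    x = imgs.reshape(n, c, h // p, p, w // p, p)
    x = x.transpose(0, 2, 4, 3, 5, 1)               # (N, gh, gw, p, p, C)
    return x.reshape(n, (h // p) * (w // p), p * p * c)

def unpatchify(x, p=16, c=3):
    """Inverse of patchify, assuming a square grid of patches."""
    n, l, _ = x.shape
    g = int(l ** 0.5)
    x = x.reshape(n, g, g, p, p, c).transpose(0, 5, 1, 3, 2, 4)
    return x.reshape(n, c, g * p, g * p)
```

Writing the channel count as a parameter (rather than hardcoding 3) is what the channel-compatibility issue above asks for.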
Is the visualization result normal?
#174 opened by WangYZ1608 - 2
Pretraining error when importing timm
#149 opened by chos1npc - 1
Question about PatchEmbed's Initialization Trick
#152 opened by tae-mo - 0
Request for segmentation fine-tuning code
#175 opened by LZhangMorilab - 0
How to reconstruct some unlabeled images
#170 opened by young169 - 0
Monitor training of custom dataset
#165 opened by DanielShalam - 1
Training time
#164 opened by penguin1109 - 0
Question regarding Figure 5 in the paper
#163 opened by ajboloor - 1
License questions
#158 opened by CA4GitHub - 2
Single machine multi-GPU training
#159 opened by AlexNmSED - 2
Release of MAE decoder
#155 opened by ustcwhy - 1
Implementing ViT-Small in MAE
#150 opened by bryanwong17 - 0
Non-square number of patches
#151 opened by dvd42 - 5
Shouldn't the patch embeddings be trained only on the patches that survived masking? (Rather than the original image)
#145 opened by Eduard6421 - 0
How about fine-tuning with an MAE auxiliary task?
#148 opened by hellojialee - 0
[Question] Ablation of encoder with mask token
#147 opened by DianCh - 1
[Question] Why non-masked patches look worse in pixel reconstruction example image
#146 opened by austinmw