raoyongming/DenseCLIP
[CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting
Python
Issues
Question about little FPN in ViT-B
#14 opened by Unrealluver - 0
shape related question
#55 opened by SKevin673 - 0
A little question about dimensions
#54 opened by lxr-1204 - 1
some question about pixel-text matching loss
#53 opened by Yu-zhengbo - 1
CUDA out of memory
#52 opened by MrChenNX - 0
Training details
#51 opened by rezaqorbani - 0
super(SingleStageDetector, self).__init__(init_cfg) TypeError: __init__() takes 1 positional argument but 2 were given
#49 opened by YangJae96 - 1
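The traceback in #49 usually indicates an mmdet/mmcv version mismatch: the installed `SingleStageDetector.__init__` does not accept an `init_cfg` argument, while the DenseCLIP code calls it with one. A minimal sketch of the failure, using stand-in classes (not the real mmdet implementation):

```python
# Hypothetical stand-ins reproducing the TypeError from issue #49.
class SingleStageDetector:
    def __init__(self):  # older-style API: no init_cfg parameter
        pass

class DenseCLIPDetector(SingleStageDetector):
    def __init__(self, init_cfg=None):
        # Newer-style call passes init_cfg positionally; the parent
        # __init__ only accepts `self`, so this raises TypeError.
        super().__init__(init_cfg)

try:
    DenseCLIPDetector()
except TypeError as e:
    print(f"TypeError: {e}")
```

Aligning the installed mmdet/mmcv versions with those pinned in the repo's requirements avoids the mismatch.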
question on the device
#44 opened by duoxiangqinzuo - 1
downloading the pretrained weights
#46 opened by elhamAm - 6
Question about DenseCLIP for Any Visual Backbone
#47 opened by needsee - 2
Question about training process
#48 opened by 0xac9527 - 0
the results of DenseCLIP on Cityscapes
#22 opened by Richardych - 2
New MMCV and MMSegmentation version
#36 opened by Jtao0818 - 3
code about Pre-model prompting
#11 opened by Ahnsun - 2
Code
#42 opened by kexibudongshi - 1
vit-b-denseclip for semantic segmentation is lost
#41 opened by sunwhw - 1
about single-scale and multi-scale settings
#38 opened by chenhao-zju - 4
What does the different contexts_length setting based on? What is the meaning of separation?
#40 opened by GanPeixin - 2
Loading pretrained CLIP parameters but truncate context_length in positional_embedding?
#39 opened by laisimiao - 1
An error while saving the model in ONNX format
#32 opened by Virgilzzz - 2
Questions about text input
#35 opened by Virgilzzz - 3
What does "We fix the text encoder during training" mean? Does it mean that the parameters are not updated during training?
#33 opened by sunwhw - 2
Prompt learning via CoOp
#34 opened by anurag-198 - 3
About prompt text
#31 opened by good-demo - 2
Open set inference without training?
#25 opened by Colin97 - 2
Misaligned params
#27 opened by sfchen94 - 4
A stupid question about auxiliary loss for Object detection & instance segmentation.
#26 opened by waxnkw - 3
Single GPU error
#24 opened by Virgilzz - 2
Questions about the architecture
#21 opened by xiaoachen98 - 2
Question about ADE20K dataset
#23 opened by wenyuqing - 2
Question about CLIPVisionTransformer
#20 opened by aniki-ly - 8
Can not reproduce the result of DenseCLIP-R50.
#18 opened by Richardych - 3
Question about inference setting
#17 opened by dneirfi - 1
dimension error when load ViT-B weight
#19 opened by xuzhang1199 - 1
what is the value of gamma?
#16 opened by Richardych - 4
Where can I download this (RN102.pt)?
#12 opened by usr922 - 2
dim unsigned
#10 opened by Ahnsun - 3
multi-gpu error
#9 opened by eternaldolphin - 3
Query on Inference Setting
#8 opened by sauradip - 11
question about eos_indx in model.py
#4 opened by qiulesun - 5
Some questions of ViT-B-DenseCLIP
#6 opened by lixiangMindSpore - 6
ADE20K batchsize
#5 opened by RainHxj