Issues
How to train the ViT-Tiny HQ-SAM model (Light HQ-SAM, for real-time use)?
#130 opened by Andy718811 - 1
No such .pth file
#137 opened by hz-1 - 1
Does SAM_HQ support batch input?
#135 opened by yunniw2001 - 0
Train Light HQ-SAM
#144 opened by bobo59 - 6
Request for evaluation code
#113 opened by jameslahm - 0
Transform SAM-HQ into TensorRT!
#143 opened by zhujiajian98 - 1
About the console output during validation.
#142 opened by FolkScientistInDL - 0
Input mask prompt
#141 opened by theFilipko - 1
The val_iou and val_boundary_iou are very low
#107 opened by Ghy1209 - 2
Evaluation for instance segmentation
#139 opened by wliu20 - 0
RuntimeError
#138 opened by hz-1 - 0
Positive/negative point inputs for SAM-HQ
#136 opened by varadtechx - 3
Can we add support for transformers?
#122 opened by moneyhotspring - 0
Alternative implementation in Refiners
#127 opened by hugojarkoff - 1
About the issue of positive and negative sample pairs
#134 opened by leinusi - 0
Can't we get candidates as in the original SAM?
#133 opened by TikaToka - 0
Visualization of Figure 6
#132 opened by tqinger - 0
Why can't we reproduce the demo result? There are three tennis rackets in the input image, but only one mask is returned
#131 opened by hjj-lmx - 1
Runtime error in Colab
#129 opened by gttae - 0
About interm_embeddings
#128 opened by zzzyzh - 2
Can SAM-HQ resume training from a checkpoint: continue training an already trained SAM-HQ, or continue training from the official SAM-HQ weights?
#126 opened by YUANMU227 - 0
Segment Anything CPP Wrapper for macOS
#125 opened by ryouchinsa - 0
Training problem: during SAM-HQ training, the val-set IoU is very high (0.98), but in eval mode the val-set IoU is only 0.48
#124 opened by YUANMU227 - 3
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 13 for tensor number 1 in the list.
#123 opened by stevezkw1998 - 0
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
#121 opened by stevezkw1998 - 4
Question about the ablation study
#120 opened by mcshih - 5
About obtaining UVO dataset AP or boundary AP
#114 opened by yaohusama - 3
The purpose of self.embedding_maskfeature
#117 opened by thanhdh-3030 - 2
About the number of detected boxes
#118 opened by yaohusama - 1
How can I test the model?
#116 opened by Ryanye2000 - 6
When I tried to train the model, there was a bug
#100 opened by Ryanye2000 - 1
The LVIS AP measured with the public model is inconsistent with the AP in the paper.
#115 opened by yaohusama - 2
Question: Is the SAM-HQ model applicable for predicting segmentation masks for input images without box, point, or label prompts?
#106 opened by mzg0108 - 1
How to use multi-box prompt inputs
#108 opened by yliuosu - 0
DCM file input questions
#110 opened by Ryanye2000 - 0
DCM file input
#109 opened by Ryanye2000 - 2
Some questions about training SAM_HQ
#105 opened by Ghy1209 - 0
Tuning Segment for clothing
#104 opened by crapthings - 5
When I run train.py with one GPU and the dataset SAM-HQ used, the process stops and hangs
#103 opened by Ryanye2000 - 1
Grounded HQ-SAM
#101 opened by halqadasi - 0
Has anyone attempted to deploy SAM-HQ on CVAT (following the deployment of SAM)?
#102 opened by superNCS - 1
Is there a way to provide TIF images and bbox labels as my dataset?
#99 opened by Ryanye2000 - 0
Is there a way to use DCM images to train the model? If so, what should I do?
#97 opened by Ryanye2000 - 1
Text Prompt
#96 opened by sebastianopazo1