Issues
Why do results differ slightly across multiple training runs even though I fixed all possible random seeds?
#17 opened by JiuyangDong - 1
MedMamba model test script
#20 opened by blz822 - 0
Single WSI in training?
#19 opened by shubhaminnani - 0
Considering an improvement with mamba2?
#18 opened by zzzendurance - 2
Order of patch sequence in Mamba
#14 opened by anhtienng - 4
Regarding the package import
#15 opened by ZhengtingJiang - 2
Aggregate results of Mamba
#16 opened by anhtienng - 1
About Training GPU Memory
#13 opened by YuqiZhang-Buaa - 2
Running MambaMIL multiple times gives different results each time on Camelyon16
#11 opened by haiqinzhong - 4
Downloading TCGA data
#10 opened by MarioPaps - 4
Patch Size and Evaluation code
#9 opened by shubhaminnani - 12
CLAM Preprocessing
#8 opened by MarioPaps - 4
How do I pass parameters to obtain 512*512 patches (20x)?
#5 opened by blz822 - 6
Some questions about the paper (the number of blocks in the proposed method, overfitting issues)
#6 opened by poult-lab - 2
Training and Inference code
#4 opened by shubhaminnani - 2
What do the outputs represent?
#3 opened by haiqinzhong - 3
When will the code be open-sourced?
#1 opened by junjianli106