Issues
How can I find the attention map
#26 opened by bemjikim - 1
Is recognition of long text and spaces supported?
#25 opened by wzgwy - 3
inference error
#19 opened by hshc123 - 1
Inference only
#22 opened by Ly-Lynn - 3
Training other languages
#23 opened by SeungJaeHam - 1
How to fine-tune in Korean (or other languages)
#20 opened by simsimee - 2
Error during inference
#18 opened by srinarayanaTantry - 2
Recognize French accents
#17 opened by hung2003oke - 0
Could training Parseq directly on text-containing images sampled from the 400 million images used to train CLIP yield better results?
#16 opened by Apostatee - 9
Issue with inference!
#15 opened by hung2003oke - 1
Using the LaTeX dataset to train CLIP4STR
#14 opened by Sanster - 9
Issue with inference
#10 opened by Szransh - 2
Convert to ONNX
#9 opened by YenYunn - 2
Inference time on CPU/GPU
#7 opened by ajkdrag - 1
Is there a way to detect spaces?
#8 opened by morgankohler - 2
CLIP4STR attention graph
#6 opened by dle666 - 12
Bug when running the bash script
#2 opened by Teera21 - 9
During training, are images resized to 224×224 or to 128×32?
#5 opened by Echhoo - 2
Training with the Japanese language
#1 opened by lerndeep - 2
The provided lr scheduler `OneCycleLR` doesn't follow PyTorch's LRScheduler API
#4 opened by gioivuathoi - 1
Error locating target for VL4STR
#3 opened by gioivuathoi