rmokady/CLIP_prefix_caption

inference

Twilighter9527 opened this issue · 7 comments

When I run generate_beam to produce a caption, there are many spaces in the caption. Do you know why? Thank you.

Have you checked your training data? Your training captions may contain extra spaces, which would lead to this.
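If you want to double-check, here is a minimal sketch for scanning the captions (assuming a COCO-style `train_caption.json` that is a list of dicts with a `caption` key, as in this repo; the path is a placeholder, adjust it to your setup):

```python
import json
import re

# Path is an assumption; point it at your own annotation file.
CAPTIONS_PATH = "./data/coco/annotations/train_caption.json"

with open(CAPTIONS_PATH, "r") as f:
    data = json.load(f)  # assumed: a list of dicts with a "caption" key

# Flag captions with doubled spaces or leading/trailing whitespace.
suspicious = [
    d["caption"] for d in data
    if re.search(r"\s{2,}", d["caption"]) or d["caption"] != d["caption"].strip()
]
print(f"{len(suspicious)} of {len(data)} captions contain extra whitespace")
for cap in suspicious[:10]:
    print(repr(cap))
```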

There are no extra spaces in the training data. And at inference time, it only needs the clip_embedding tensor, not the caption, doesn't it?
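For reference, my inference step looks roughly like the demo notebook (a sketch; `model`, `tokenizer`, `prefix_length`, and `generate_beam` are assumed to be loaded/defined as in the notebook, and `example.jpg` is a placeholder):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    # Only the CLIP image embedding goes in; no ground-truth caption is used.
    prefix = clip_model.encode_image(image).to(device, dtype=torch.float32)
    prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1)

# generate_beam decodes from the projected embedding alone.
caption = generate_beam(model, tokenizer, embed=prefix_embed)[0]
print(caption)
```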

I find that not all of your training captions end with '.'. Since the end token for beam search is '.', the model may not know when to end inference and keeps predicting spaces until it reaches the max inference length. Yes, there is no need to input the caption at inference time; I meant that the training data influences the model's behavior at inference. Maybe you can process the training captions so they all end with '.' and retrain your model to try it?
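Something along these lines should work for that preprocessing step (a sketch assuming the same COCO-style `train_caption.json` layout with a `caption` key; paths are placeholders):

```python
import json

IN_PATH = "./data/coco/annotations/train_caption.json"
OUT_PATH = "./data/coco/annotations/train_caption_fixed.json"

with open(IN_PATH, "r") as f:
    data = json.load(f)  # assumed: a list of dicts with a "caption" key

for d in data:
    # Collapse stray whitespace and make sure every caption ends with
    # the '.' stop token that generate_beam looks for.
    cap = " ".join(d["caption"].split())
    if not cap.endswith("."):
        cap += "."
    d["caption"] = cap

with open(OUT_PATH, "w") as f:
    json.dump(data, f)
```

After that you would re-run parse_coco.py and train.py on the fixed file so the model is trained on the new endings.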

Thank you. I will try it.

I ran into the same situation. May I ask whether this problem was solved by processing the training captions to all end with '.'?