Comparison to GLIDE-small (as shown in Figure 9 in the paper)
benuri opened this issue · 0 comments
benuri commented
Figure 9 in the paper shows results for GLIDE-small, which was trained on the same data as GLIDE-full but has only 300M parameters (like GLIDE-filtered).
When testing captions from Figure 9, such as "a hedgehog using a calculator", the results from GLIDE-small are clearly better than the results generated by https://replicate.com/afiaka87/laionide-v3.
What's the difference between GLIDE-small and Laionide-v3, besides the dataset?
- Do both models use exactly the same model code and architecture?
- Were both models trained with the same training code for the same number of iterations?