flairNLP/fabricator

execution speed degrades with increasing sizes of datasets to be annotated

fhamborg opened this issue · 3 comments

During annotation of a dataset with 10k samples, the speed degrades over time. I'm unsure whether this is due to OpenAI's API (e.g., rate limits) or caused by our code (e.g., non-linear complexity, perhaps in the _inner_loop function?).
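One way to tell the two apart is to time each batch separately: if per-batch time grows with the batch index, the slowdown is likely in our code; if it stays flat but sporadic stalls appear, API rate limiting is the more likely cause. A minimal sketch (not fabricator's API; `annotate_batch` is a placeholder for whatever call labels one batch):

```python
import time

def timed_annotation(batches, annotate_batch):
    """Time each batch; annotate_batch is a hypothetical per-batch annotation call."""
    timings = []
    for i, batch in enumerate(batches):
        start = time.perf_counter()
        annotate_batch(batch)
        elapsed = time.perf_counter() - start
        timings.append(elapsed)
        # Per-sample time should stay roughly constant if the code scales linearly.
        print(f"batch {i:5d}: {elapsed:.2f}s for {len(batch)} samples "
              f"({elapsed / len(batch):.3f}s per sample)")
    return timings
```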

I can look into this

Looks rather constant to me over 25k samples (unlabelled text generation).

(Figure 1 and Figure 2: attached plots.)
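For reference, plots like the attached ones can be reproduced from per-batch timings with a few lines of matplotlib (illustrative only, not the script used for these figures):

```python
import matplotlib.pyplot as plt

def plot_timings(timings):
    """Plot seconds per batch; a flat curve indicates roughly constant cost per batch."""
    plt.plot(range(len(timings)), timings, marker=".")
    plt.xlabel("batch index")
    plt.ylabel("seconds per batch")
    plt.title("Annotation time per batch")
    plt.show()
```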