gmftbyGMFTBY/MultiTurnDialogZoo

Which one is the best one?

timsoraro opened this issue · 5 comments

Hi! Thank you for your work on this repo.

So after all your testing, what is the best architecture in terms of quality of generation?

Sorry for the late response, I have been busy recently.

In my experiments, I found that DSHRED-WA is the best one, but it also takes a lot of time to converge. I recommend following GPT-2, which looks more promising going forward. I will also release a package for transformer-based dialog models in about a month.

Thanks for the response! I find GPT-2 pretty good, but I wanted to know if an RNN model of the same size could potentially beat it.

Lol, this is also the motivation for this repo. But transformer-based models seem more powerful than RNN-based models. If you have some ideas, we can discuss how to improve the RNN-based models.

But was there a fair comparison (a model with the same number of parameters as GPT-2, trained on the same data)?

Hi, I compared against the GPT-2 model (from transformers), training it from scratch on the same data.

The GPT-2 model achieves a better distinct score (better diversity), but its BLEU and embedding-based scores are similar to those of the other models. I may leverage human annotations to measure performance in the future.
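For reference, the distinct-n diversity metric mentioned above is usually computed as the ratio of unique n-grams to total n-grams across the generated responses. A minimal sketch (the function name and whitespace tokenization are my own, not taken from this repo):

```python
from collections import Counter

def distinct_n(responses, n=1):
    """Ratio of unique n-grams to total n-grams across generated responses.

    Higher values indicate more diverse generation (distinct-1 counts
    unigrams, distinct-2 counts bigrams, and so on).
    """
    ngrams = Counter()
    for resp in responses:
        tokens = resp.split()  # naive whitespace tokenization for illustration
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# A response that repeats the same word scores low on distinct-1:
# distinct_n(["a a a a"], n=1) -> 0.25
```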

Sorry for the late response 😅