Training data (aggregate_paraphrase_corpus_0)
Opened this issue · 6 comments
Hello Victor,
First, I would like to thank you for your contribution.
I am trying to retrain your model, but the aggregate_paraphrase_corpus_0 file is missing.
Could you share the files with me, or explain their format?
Thanks
I need this training data as well.
Could you share the download link, or explain how to create a dataset in this format?
How do I get the training dataset?
@vsuthichai
I would like to have the training data as well, is it possible to share with me privately?
Hi, I know some time has passed since you asked for these files.
I'm not @vsuthichai, but I think I understand how to generate the training data.
First, download the data from the internet (just search for para-nmt-50m-demo).
Next, run "preprocess_data.py", passing the downloaded file "para-nmt-50m-small.txt" as a parameter.
This will create a set of files whose names start with "para-nmt-50m-small.txt" plus a suffix.
The last thing you need to do is create the sentence embeddings (I still need to work out how) and correct all the import paths where these files are referenced in the code.
Then you should be able to train your model. Make sure the dataset you use is formatted as "source sentence" + "\t" + "final sentence".
Now I need to translate the whole dataset to Italian and try training in Italian...
Wish me luck
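For what it's worth, the tab-separated format described above can be sanity-checked with a short Python sketch. The function names here are mine, not from the repository's code, and the expected layout (exactly one tab per line, source first) is an assumption based on this thread:

```python
# Sketch: validate a paraphrase dataset where each line is expected to be
# "source sentence" + "\t" + "final sentence" (assumption from this thread).

def parse_paraphrase_line(line):
    """Return (source, paraphrase) if the line is well formed, else None."""
    parts = line.rstrip("\n").split("\t")
    if len(parts) != 2 or not all(p.strip() for p in parts):
        return None  # missing tab, extra tab, or an empty column
    return tuple(parts)

def load_paraphrase_pairs(path):
    """Read a TSV file and keep only well-formed (source, paraphrase) pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            pair = parse_paraphrase_line(line)
            if pair is not None:
                pairs.append(pair)
    return pairs
```

Running something like this before training makes it easier to spot malformed lines (e.g. from a translation step) early instead of failing mid-training.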
Thanks for your comment. Have you succeeded? I'm doing a similar thing, translating the data to Chinese.
Hi, I didn't really succeed. I tried to take the training data and translate it into Italian. The problem is that the translations weren't good and the training dataset wasn't big enough (maybe because I only used para-nmt, whereas the author of the repository used several corpora combined). I tried to train anyway, but I didn't get good results.