Issues
who can share the model with me
#183 opened by icegomic - 2
pre-training BLEU always 0.0000
#158 opened by Nanamumuhan - 1
supNMT pre-train problem with multi gpus
#177 opened by Andrewlesson - 0
Where is the file "fairseq-preprocess"
#180 opened by xcssgzs - 1
Does MASS implement the translate method?
#179 opened by mqy9787 - 0
How can you get the data for MASS supNMT?
#178 opened by xiuzhilu - 0
Mass_unsup runs fine on a single GPU, but errors are reported on multiple GPUs
#175 opened by MayDomine - 3
Fail to Reproduce the Result of UnsupMT-EnDe
#141 opened by LibertFan - 0
invalid task choice
#174 opened by yaocheng95 - 0
Translation results on Zh-En pre-trained model
#173 opened by riddlehk - 0
How to create dictionary dict.lg.txt in MASS supNMT
#172 opened by Ashmari - 0
Question about data processing in Unsupervised NMT
#171 opened by ElliottYan - 0
Questions for SupNMT
#170 opened by MSWon - 0
Predictions on XSUM?
#169 opened by danyaljj - 0
Question about the pre-trained weights for Neural Machine Translation under supNMT
#168 opened by MichaelCaohn - 1
Incorrect dictionary format
#166 opened by abdullahkhilji - 0
How to create dictionary dict.lg.txt
#167 opened by abdullahkhilji - 1
Will BERT+transformer-decoder be better than tensor2tensor for text generation?
#155 opened by guotong1988 - 0
.
#151 opened by masonreznov - 2
Problem while running Supervised NMT
#162 opened by RachitBansal - 1
Quick question about "masked_block_start"
#163 opened by Derekkk - 1
Confusion regarding data
#164 opened by kr-sundaram - 3
Question about pre-training for a text-generation task: it seems pre-training does not work for a small model?
#161 opened by guotong1988 - 2
Questions on Table 3 of the MASS paper?
#149 opened by Epsilon-Lee - 5
How to reload checkpoint for UNMT?
#154 opened by him-mah10 - 2
In text summarization
#138 opened by KawhiZhao - 1
Fine Tuning with MBART pretrained model
#148 opened by masonreznov - 2
How to use MASS for Style Transfer?
#143 opened by him-mah10 - 0
Outputs of summarization task
#140 opened by shahbazsyed - 1
In sup-NMT en2zh: unknown tokens in the source sentence are predicted as other words; how can I make the model keep the unk value so I can use post_process_prediction to replace_unk?
#132 opened by rejae - 3
How to implement fine-tuned model by myself?
#137 opened by kaneyxx - 0
Hyperparameter for low-resource experiment
#136 opened by yukiyakiZ - 2
Unable to fine-tune pre-trained model
#133 opened by Valahaar - 1
sup-NMT zh-en data processing.
#131 opened by rejae - 2
Can I extend the zh-en dict to get rid of the <UNK> problem in fine-tuning zh-en sup-NMT?
#130 opened by rejae