Issues
- ner/train_crf_loss.py Error (#9, opened by 666XD, 1 comment)
- How to train on a self-collected corpus (#14, opened by ZNZHL, 0 comments)
- Can I still train if the corpus is under a million lines? (#31, opened by RaymondJSu, 1 comment)
- Training on other data: the number of steps per epoch drops too much (#29, opened by niliusha123, 0 comments)
- Is there an explanation of how the seq2seq bot is implemented? (#22, opened by ily666666, 1 comment)
- pretrained_embedding (#27, opened by ECNUHP, 10 comments)
- After training, test output is nothing but punctuation (#6, opened by shizhediao, 0 comments)
- Why is training so slow? (#4, opened by kingdeewang, 0 comments)
- antiLM should only be used at testing time, not during training (#23, opened by Leputa, 5 comments)
- The trained model only ever predicts two kinds of output (#12, opened by charles0-0, 0 comments)
- Threading problem (#24, opened by charlesXu86, 0 comments)
- Some of the corpus data is garbled (#18, opened by lvzhetx, 3 comments)
- How should the parameters be tuned for the best results? (#21, opened by ily666666, 5 comments)
- Garbled output at test time after training (#2, opened by fire717, 0 comments)
- Beam search generates the same sentence (#17, opened by weiwancheng, 2 comments)
- Question about seq2seq-ner performance (#16, opened by qichaotang, 0 comments)
- On the implementation of multi-layer bidirectional RNNs (#15, opened by shuaihuaiyi, 2 comments)
- Loss sometimes jumps to 200 (#11, opened by yzho0907, 1 comment)
- Does pre-trained word embedding help? (#10, opened by yzho0907, 2 comments)
- How about the training time? (#8, opened by yzho0907, 1 comment)
- Running python3 extract_tmx.py runs out of memory (#5, opened by Kiteflyingee, 1 comment)
- just_another_seq2seq/chatbot/train.py (#1, opened by chinalu)
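Two of the issues above touch the same decoding question: #23 argues antiLM belongs at testing time rather than training, and #17 reports beam search returning the same generic sentence. A minimal sketch of antiLM reranking applied only at decode time, where `antilm_rerank`, the lambda value, and the candidate scores are all illustrative assumptions, not code or numbers from this repository:

```python
def antilm_rerank(candidates, lam=0.35):
    """Rerank decoder candidates with an anti-language-model penalty.

    candidates: list of (text, log_p_t_given_s, log_p_t), where
    log_p_t_given_s is the seq2seq score of the reply given the source
    and log_p_t is the reply's score under a plain language model.
    Subtracting lam * log_p_t pushes down generic, high-frequency
    replies; the penalty is applied only at decoding, never in the
    training loss.
    """
    return sorted(candidates, key=lambda c: c[1] - lam * c[2], reverse=True)

# Hypothetical scores: the generic reply is likelier under the LM alone,
# so the penalty demotes it even though its seq2seq score is higher.
candidates = [
    ("i don't know", -1.0, -0.5),       # generic: high LM probability
    ("a specific answer", -1.5, -3.0),  # specific: low LM probability
]
best = antilm_rerank(candidates)[0][0]  # "a specific answer"
```

With lam=0.35 the generic reply scores -1.0 - 0.35 * (-0.5) = -0.825 while the specific one scores -1.5 - 0.35 * (-3.0) = -0.45, so the specific reply wins the rerank.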