Issues
EL-attention GPT-2
#120 opened by neverix - 2
Error when running with PyTorch 1.12.1
#121 opened by StevenTang1998 - 3
Does fastseq support CPU?
#105 opened by renmada - 3
Can fastseq be installed on Windows?
#115 opened by vivid-k - 4
In which file can I find the source code implementation of EL-Attention for self-attention?
#119 opened by ADaBenxiong - 6
Support for HF's transformers 3.1+
#95 opened by tingofurro - 13
Support for current fairseq 0.10.2
#99 opened by alaneckhardt - 2
NMT models' speedup varies abnormally with batch size
#106 opened by dearchill - 3
Does it support TensorFlow 2?
#89 opened by s4sarath - 7
RuntimeError: CUDA error: no kernel image is available for execution on the device
#70 opened by sshleifer - 6
Support for Hugging Face's PEGASUS Model
#98 opened by ryangawei - 2
Support for ONNX models & INT8 quantization
#94 opened by bkaruman - 2
Does it support seq2seq models with LSTM or bi-LSTM encoders and decoders?
#93 opened by trangtv57 - 2
fairseq eval_lm
#87 opened by sshleifer - 2
ModuleNotFoundError: No module named 'fastseq.models'
#78 opened by f-lng - 0
ACTION REQUIRED: Microsoft needs this private repository to complete compliance info
#60 opened by microsoft-github-operations - 3
Transformers unit test failures
#39 opened by NickNickGo - 3
T5 speed
#23 opened by JiushengChen - 1
Errors in test_fairseq_optimizer.py
#28 opened by JiushengChen - 8
Compatibility with torch-1.6.0
#4 opened by feihugis