tensorops/TransformerX
A flexible Python library providing building blocks (layers) for reproducible Transformers research (TensorFlow ✅, PyTorch 🔜, JAX 🔜).
Language: Python | License: MIT
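Several issues below (e.g. [BUG] SinePositionalEncoding, PositionalEncoding) concern the standard sinusoidal positional encoding. As a taste of the kind of building block the library provides, here is a minimal NumPy sketch of that encoding; it is an illustrative implementation only, not TransformerX's actual API, and the function name is hypothetical.

```python
import numpy as np

def sine_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding from "Attention Is All You Need".

    Hypothetical helper (not TransformerX's API). Returns an array of
    shape (seq_len, d_model): even feature indices hold sines, odd
    indices hold cosines, at position-dependent frequencies.
    """
    positions = np.arange(seq_len)[:, np.newaxis]     # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]          # (1, d_model)
    # Each sin/cos pair shares a frequency: 1 / 10000^(2i / d_model).
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                  # (seq_len, d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])
    encoding[:, 1::2] = np.cos(angles[:, 1::2])
    return encoding
```

The encoding is added to token embeddings before the first attention layer, giving the otherwise permutation-invariant model access to token order.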
Issues

- [BUG] SinePositionalEncoding (#115, opened by soran-ghaderi, 0 comments)
- New feedforward layers (#43, opened by soran-ghaderi, 0 comments)
- [issue] Create issue template forms (#109, opened by soran-ghaderi, 0 comments)
- New attention masking layers (#86, opened by soran-ghaderi, 0 comments)
- Dilated (#87, opened by soran-ghaderi, 0 comments)
- Docs for the decoder block (#90, opened by soran-ghaderi, 0 comments)
- New embedding layers (#41, opened by soran-ghaderi, 2 comments)
- Handle input arguments and raise exceptions (#31, opened by soran-ghaderi, 0 comments)
- New residual and residual gate layers (#42, opened by soran-ghaderi, 0 comments)
- New attention layers (#44, opened by soran-ghaderi, 0 comments)
- Test cases for the layers (#63, opened by soran-ghaderi, 0 comments)
- [Issue] Quantization support (#108, opened by soran-ghaderi, 1 comment)
- Documentation - Readme (#29, opened by soran-ghaderi, 0 comments)
- Encoder-only Classifier (#113, opened by soran-ghaderi, 0 comments)
- Masking system and general RC additions (#106, opened by soran-ghaderi, 1 comment)
- [Enhancement] KV Caching for inference speed (#110, opened by soran-ghaderi, 0 comments)
- [doc] TransformerEncoder (#102, opened by soran-ghaderi, 0 comments)
- [Tests] TransformerEncoder (#100, opened by soran-ghaderi, 0 comments)
- [tests] TransformerDecoderBlock (#96, opened by soran-ghaderi, 2 comments)
- Documentation (#30, opened by soran-ghaderi, 0 comments)
- TransformerEncoder (#57, opened by soran-ghaderi, 0 comments)
- [tests] TransformerEncoderBlock (#95, opened by soran-ghaderi, 0 comments)
- Readme updated information (#93, opened by soran-ghaderi, 0 comments)
- Refactor transformer decoder block (#89, opened by soran-ghaderi, 0 comments)
- Refactor TransformerEncoderBlock (#77, opened by soran-ghaderi, 0 comments)
- MultiHeadAttention #31 (#37, opened by soran-ghaderi, 0 comments)
- Refactor DotProductAttention layer (#80, opened by soran-ghaderi, 0 comments)
- PositionwiseFFN (#82, opened by soran-ghaderi, 0 comments)
- DotProductAttention (#67, opened by soran-ghaderi, 0 comments)
- Refactor softmax_attention (#78, opened by soran-ghaderi, 0 comments)
- PositionalEncoding (#70, opened by soran-ghaderi, 0 comments)
- PositionWiseFFN advanced features (#73, opened by soran-ghaderi, 0 comments)
- AddNorm advanced features (#75, opened by soran-ghaderi, 0 comments)
- AddNorm (#65, opened by soran-ghaderi, 1 comment)
- MultiHeadAttention (#64, opened by soran-ghaderi, 0 comments)
- TransformerDecoder (#59, opened by soran-ghaderi, 0 comments)
- TransformerDecoderBlock (#52, opened by soran-ghaderi, 0 comments)
- TransformerEncoderBlock (#51, opened by soran-ghaderi, 0 comments)
- PositionwiseFFN (#49, opened by soran-ghaderi, 0 comments)
- PositionalEncoding (#46, opened by soran-ghaderi, 0 comments)
- MultiHeadAttention #30 (#36, opened by soran-ghaderi, 0 comments)
- Readme typo (#28, opened by soran-ghaderi, 0 comments)
- DotProductAttention docs #30 (#33, opened by soran-ghaderi, 0 comments)
- AddNorm docs #30 (#32, opened by soran-ghaderi, 0 comments)
- AddNorm #31 (#34, opened by soran-ghaderi)
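Many of the issues above (DotProductAttention, Refactor softmax_attention, MultiHeadAttention, New attention masking layers) revolve around one core operation. For readers unfamiliar with it, here is a minimal NumPy sketch of scaled dot-product attention; it illustrates the concept only and is not TransformerX's actual API, and the function name and mask convention are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V, with an optional boolean mask.

    Hypothetical helper (not TransformerX's API). q: (..., seq_q, d_k),
    k: (..., seq_k, d_k), v: (..., seq_k, d_v). Where mask is False,
    the score is pushed to a large negative value so its softmax weight
    is effectively zero.
    """
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)     # (..., seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

A causal (decoder-side) variant of this, as requested by the masking issues, just passes `mask=np.tril(np.ones((seq, seq), dtype=bool))` so each position attends only to itself and earlier positions.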