Pinned Repositories
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
genji-jp-samples
gpt-neo_dungeon
Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B
gpt-neo_finetune_2.7B
tokenizer-gpt2-genji
Hugging Face Transformers-compatible GPT2Tokenizer files for the genji-v2 model
transformers
🤗Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
novelai-aspect-ratio-bucketing
Implementation of aspect ratio bucketing for training generative image models as described in: https://blog.novelai.net/novelai-improvements-on-stable-diffusion-e10d38db82ac
novelai-tokenizer
SentencePiece-based BPE tokenizer for English and Japanese text.
finetunej's Repositories
finetunej/gpt-neo_dungeon
Colab notebooks to run a basic AI Dungeon clone using gpt-neo-2.7B
finetunej/transformers
🤗Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
finetunej/gpt-neo_finetune_2.7B
finetunej/genji-jp-samples
finetunej/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
finetunej/tokenizer-gpt2-genji
Hugging Face Transformers-compatible GPT2Tokenizer files for the genji-v2 model
finetunej/gpt_bpe
GPT-2 Byte Pair Encoding implementation in Go
finetunej/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
finetunej/memory-efficient-attention
Memory Efficient Attention (O(sqrt(n)) memory) for JAX and PyTorch
finetunej/mesh-transformer-jax
Model parallel transformers in JAX and Haiku
finetunej/misc
finetunej/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
finetunej/sentencepiece
Unsupervised text tokenizer for Neural Network-based text generation.