Xirider/finetune-gpt2xl

Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed

Python · MIT license

Issues
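Fitting a 1.5B-parameter model on a single GPU relies on DeepSpeed's ZeRO optimizer-state sharding, CPU offload, and fp16 training. A minimal sketch of such a configuration follows; the field names come from DeepSpeed's documented JSON schema, but the specific values are illustrative assumptions, not this repository's shipped config file:

```json
{
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    },
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```

With the Hugging Face Trainer integration, a file like this is typically passed to the training script via `--deepspeed ds_config.json` and the script is launched with the `deepspeed` CLI; the `"auto"` values are filled in from the Trainer's own arguments.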
- fine tuning GPT-J 6B? (#22, opened by silvacarl2, 1 comment)
- IndexError: index out of bounds (#23, opened by GreenTeaBD, 1 comment)
- subprocess.CalledProcessError (#20, opened by Dhanachandra, 4 comments)
- Out of memory with RTX3090 (#19, opened by PyxAI, 1 comment)
- Suspected optimizer issue causing crashes (#3, opened by kinoc, 1 comment)
- New issue with Pandas (#14, opened by barakw2021, 1 comment)
- Crashes with new Transformers version (#13, opened by barakw2021, 3 comments)
- Can't change BOS token or EOS token for GPT Neo (#12, opened by mallorbc, 1 comment)
- Freezing at "Using /home/user/.cache/torch_extensions as PyTorch extensions root..." (#10, opened by mallorbc, 0 comments)
- TypeError: unsupported operand type(s) for -: 'float' and 'str' on AWS g4dn.12xlarge (#8, opened by sibeshkar, 2 comments)
- Resume from checkpoint (#9, opened by ArturTanona, 4 comments)
- Errors while trying to train with two GPUs (#7, opened by barakw2021, 3 comments)
- Unable to proceed, no GPU resources available (#6, opened by bpm246, 1 comment)
- Multiple entries csv (#5, opened by kikirizki, 3 comments)
- Exception: Installed CUDA version 11.0 does not match the version torch was compiled with 11.1 [SOLUTION] (#1, opened by CupOfGeo)