LLM-distributed-finetune

Efficiently fine-tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.

Primary language: Python. License: MIT.
