epfLLM/Megatron-LLM

Is 8x A100 80GB enough to fine-tune Llama 2 70B?

james2v opened this issue · 5 comments

I think the minimum is 32x A100 80GB: https://github.com/epfLLM/Megatron-LLM/blob/main/docs/guide/faq.md#what-are-the-basic-hardware-requirements

Thank you! I might tune Llama 2 70B with LoRA then.

AleHD commented

Correct, 32x 80GB is the minimum requirement we have been able to achieve when using a sequence length of 4k.
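
For a rough sense of why 8x 80 GB falls short, here is a back-of-envelope estimate (a sketch only, assuming bf16 weights and gradients plus fp32 Adam states and master weights; actual memory use also depends on activations, sequence length, and the parallelism layout):

```python
# Back-of-envelope memory estimate for full fine-tuning of a 70B model.
# Assumes mixed-precision training: bf16 weights/gradients, fp32 Adam
# moments, and fp32 master weights. Activation memory is ignored here.
params = 70e9

weights_gb = params * 2 / 1e9    # bf16 weights          ~140 GB
grads_gb   = params * 2 / 1e9    # bf16 gradients        ~140 GB
adam_gb    = params * 8 / 1e9    # fp32 Adam m and v     ~560 GB
master_gb  = params * 4 / 1e9    # fp32 master weights   ~280 GB

total_gb = weights_gb + grads_gb + adam_gb + master_gb
print(f"~{total_gb:.0f} GB before activations")   # ~1120 GB

print(f"8 x A100-80GB  = {8 * 80} GB")    # 640 GB  -> not enough
print(f"32 x A100-80GB = {32 * 80} GB")   # 2560 GB -> leaves room for activations
```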

Can you run Llama 2 70B with LoRA?

I load it in 8-bit to train it, but I use other hardware to run it.
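
For reference, this is roughly what the 8-bit-load + LoRA route looks like with Hugging Face transformers/peft/bitsandbytes rather than Megatron-LLM; the checkpoint id, target modules, and LoRA hyperparameters below are illustrative assumptions, not a recipe from this repo:

```python
# Minimal sketch: load Llama-2-70B in 8-bit and attach LoRA adapters.
# Assumes transformers, peft, bitsandbytes, and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-70b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",               # shard the quantized weights across the GPUs
    torch_dtype=torch.float16,
)

# Cast norms to fp32 and enable gradient checkpointing for stable k-bit training.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the small LoRA matrices are trained
```

The frozen base model stays in 8-bit, so only the LoRA adapter weights are trained and saved; inference can then happen on separate hardware by loading the base checkpoint and attaching (or merging) the adapters.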