The number of training steps in the SHP dataset
bonin147 opened this issue · 0 comments
bonin147 commented
In the complete example you provided, training on the HH dataset ran for 160,000 steps. However, when I trained on the SHP dataset, only 32,500 steps were completed, even though the SHP training set is roughly twice the size of the HH training set. What could be the reason for this difference?
My command:

```
python -u train.py model=pythia28 datasets=[shp] loss=sft exp_name=shp_sft gradient_accumulation_steps=4 batch_size=24 eval_batch_size=24 trainer=FSDPTrainer sample_during_eval=false model.fsdp_policy_mp=bfloat16
```
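For reference, here is a hedged sketch of the step-count arithmetic. All numbers below are hypothetical placeholders, not the real HH/SHP example counts, and the actual behavior depends on how the trainer interprets `batch_size` (effective batch vs. per-device microbatch) and on any epoch or example limits in the config:

```python
def expected_steps(num_examples: int, n_epochs: int, batch_size: int) -> int:
    """Rough optimizer-step estimate: each step consumes one batch.

    If the trainer treats `batch_size` as the *effective* batch (with
    gradient_accumulation_steps splitting it into microbatches), then
    changing gradient accumulation alone should not change this count;
    if `batch_size` is a per-microbatch size, it would.
    """
    steps_per_epoch = num_examples // batch_size
    return steps_per_epoch * n_epochs

# Hypothetical dataset size -- not the actual HH or SHP count.
print(expected_steps(num_examples=160_000, n_epochs=1, batch_size=24))  # 6666
```

If the observed step count does not scale with dataset size, it is worth checking whether the run is capped by an `n_examples`-style limit rather than by epochs.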