rxn4chemistry/rxnmapper

What is the value of per_gpu_train_batch_size used when training the model?

autodataming opened this issue · 1 comment

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./",
    overwrite_output_dir=True,
    num_train_epochs=5,
    per_gpu_train_batch_size=8,
    save_steps=10_000,
    save_total_limit=2,
)
```

Given the snippet above, what value of per_gpu_train_batch_size was actually used to train the model?

We used a per_gpu_train_batch_size of 16 and trained on a single GPU.
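
For reference, a minimal sketch of the reported setup, reusing the other values from the snippet above (note that in recent versions of transformers, per_gpu_train_batch_size has been deprecated in favor of per_device_train_batch_size; with a single GPU the two are equivalent):

```python
from transformers import TrainingArguments

# Sketch of the reported configuration: batch size 16, single GPU.
# per_device_train_batch_size replaces the deprecated
# per_gpu_train_batch_size in newer transformers releases; the old
# name may still work with a deprecation warning depending on version.
training_args = TrainingArguments(
    output_dir="./",
    overwrite_output_dir=True,
    num_train_epochs=5,
    per_device_train_batch_size=16,
    save_steps=10_000,
    save_total_limit=2,
)
```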