lizhou-cs/JointNLT

Training Details

Closed this issue · 2 comments

Hello, could you share the training details (GPUs, batch size, GPU memory consumption, training time)?

Hi, we ran the experiments on 4 RTX 3090 GPUs. As shown in the experiment configuration file, we set the batch size to 8; each sample consists of one grounding patch and two search patches. The whole training process takes about three and a half days. GPU memory consumption is 18437 MiB / 24576 MiB as reported by the NVIDIA System Management Interface (nvidia-smi).
I hope this helps.
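For anyone budgeting resources, here is a small back-of-the-envelope sketch of the numbers above. Note that whether batch size 8 is per GPU or global is not stated in this thread, so the per-GPU interpretation below is an assumption, as is the variable naming:

```python
# Hypothetical sketch of the training setup described above.
# Only the GPU count, batch size, patches per sample, and the
# nvidia-smi memory readout come from the thread; the per-GPU
# batch-size interpretation is an assumption.

NUM_GPUS = 4              # 4x RTX 3090
BATCH_SIZE = 8            # as stated; assumed to be per GPU
PATCHES_PER_SAMPLE = 3    # 1 grounding patch + 2 search patches
MEM_USED_MIB = 18437      # from nvidia-smi
MEM_TOTAL_MIB = 24576     # 24 GB card

# Total image patches processed per optimizer step across all GPUs.
patches_per_step = NUM_GPUS * BATCH_SIZE * PATCHES_PER_SAMPLE

# Fraction of GPU memory in use during training.
mem_utilization = MEM_USED_MIB / MEM_TOTAL_MIB

print(f"patches per step: {patches_per_step}")        # 96
print(f"memory utilization: {mem_utilization:.0%}")   # 75%
```

This suggests the 24 GB cards run at roughly 75% memory utilization, leaving little headroom for a larger batch size without gradient accumulation or mixed precision.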

Thank you!