Problem with setting "EpochBasedTrainLoop": training does not stop
Hello author, I configured the following, following the MMSegmentation format:
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=2, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=1, log_metric_by_epoch=True),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', by_epoch=True, interval=1, save_best='mIoU', rule='greater'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='CDVisualizationHook', interval=1, img_shape=(1024, 1024, 3)))
With this, training does not stop when max_epochs is reached, and no checkpoint files are saved. I also changed the learning-rate settings. Have you ever configured EpochBasedTrainLoop yourself?
Change InfiniteSampler to DefaultSampler in train_dataloader.
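For reference, a minimal sketch of the corrected train_dataloader, assuming a standard MMEngine-style config; the batch size, worker count, and dataset entry are placeholders, not values from this issue. The only change that matters here is the sampler type:

    train_dataloader = dict(
        batch_size=2,        # placeholder, keep your own value
        num_workers=4,       # placeholder, keep your own value
        persistent_workers=True,
        # DefaultSampler is finite, so EpochBasedTrainLoop can end an epoch and
        # stop at max_epochs; InfiniteSampler never exhausts, which keeps the
        # loop running and prevents epoch-based checkpointing from triggering.
        sampler=dict(type='DefaultSampler', shuffle=True),
        dataset=dict(
            # keep your existing dataset config here unchanged
        ))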
Thanks! Following your suggestion, it now works.