xuebinqin/DIS

Fine-tuning with isnet-general-use.pth

kabbas570 opened this issue · 11 comments

Hello, thanks for providing the code publicly.
I have a question about fine-tuning the network: can I start training from the isnet-general-use.pth weight file and further fine-tune those weights, i.e. transfer learning?

Cheers
Abbas

I found this code in train_valid_inference_main.py:

if(hypar["gt_encoder_model"]!=""):
        model_path = hypar["model_path"]+"/"+hypar["gt_encoder_model"]
        if torch.cuda.is_available():
            net.load_state_dict(torch.load(model_path))
            net.cuda()
        else:
            net.load_state_dict(torch.load(model_path,map_location="cpu"))
        print("gt encoder restored from the saved weights ...")
        return net ############

Does this mean it will load the weights for both the encoder and decoder from the .pth file under hypar["model_path"] if we set hypar["gt_encoder_model"] = 'isnet-general-use.pth'?

Maybe the restore_model parameter?

hypar["restore_model"] = "RMBG-1.4.pth" ## name of the segmentation model weights .pth for resume training process from last stop or for the inferencing

@kabbas570

name of the segmentation model weights .pth for resume training process from last stop or for the inferencing

Excuse me, has the problem been resolved?

gt_encoder_model
May I ask how to continue training from isnet-general-use.pth? Have you resolved this?

The restore_model parameter works.

[screenshot: training parameters]
These are my input parameters. I have a training set of 200 images, and only one model file is generated during training. It is an improvement over isnet-general-use.pth, but the results are still not very good. What could be the reason for this?
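
For reference, the checkpoint-related defaults in train_valid_inference_main.py look roughly like this (values quoted from memory, so treat them as approximate); with these defaults a new .pth is typically written only when the validation F-score improves at one of the periodic validation steps:

hypar["model_save_fre"] = 2000   # validate (and possibly save weights) every 2000 iterations
hypar["early_stop"] = 20         # stop after 20 validation rounds without improvement
hypar["max_ite"] = 10000000      # hard cap on training iterations if early stopping never triggers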

Has the loss converged?
Do you use the best f-score snapshot?

If yes, you need to expand your training set.
Or, if your case is similar to the original model, you can check the interm_sup parameter; it freezes the original parameters.
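
For reference, a minimal sketch of how that flag is set in train_valid_inference_main.py (the GT-encoder file name below is a hypothetical placeholder; if I recall the code correctly, it can also be left empty so the GT encoder is trained first):

hypar["interm_sup"] = True                    # activate intermediate feature supervision via the GT encoder
hypar["gt_encoder_model"] = "gt_encoder.pth"  # hypothetical name; a pre-trained GT encoder checkpoint under hypar["model_path"]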

interm_sup = False
We are using the default parameters provided in the repo, only changing the dataset that is passed in.

Add me on WeChat: eW91emlwaXBwaQ==

eW91emlwaXBwaQ==

What kind of account is this?

How do I add this?