Mq-Zhang1/HOIDiffusion

Pretrained models from GitHub don't work well


I used the models model_condition.pth and model_sd_finetuned.ckpt that you provided on GitHub, but the results were not very good. I'm not sure whether this is because I commented out the following code:

    # sd_model = torch.nn.parallel.DistributedDataParallel(
    #     sd_model,
    #     device_ids=[opt.local_rank],
    #     output_device=opt.local_rank)
    # adapter["model"] = torch.nn.parallel.DistributedDataParallel(
    #     adapter["model"],
    #     device_ids=[opt.local_rank],
    #     output_device=opt.local_rank)
    # cond_model = torch.nn.parallel.DistributedDataParallel(
    #     cond_model,
    #     device_ids=[opt.local_rank],
    #     output_device=opt.local_rank)

as well as:

    # train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # train_dataloader = torch.utils.data.DataLoader(
    #     train_dataset,
    #     batch_size=opt.bs,
    #     shuffle=(train_sampler is None),
    #     num_workers=1,
    #     pin_memory=True,
    #     sampler=train_sampler)
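For reference, instead of deleting these lines, one option is to keep the DDP wrapping and the DistributedSampler but apply them only when torch.distributed has actually been initialized, so the same script still runs in a single process. This is only a sketch of that idea (the function name `build_training_objects` and the toy model/dataset below are my own, not from the repo):

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset


def build_training_objects(model, dataset, batch_size, local_rank=0):
    """Hypothetical helper: wrap in DDP / use DistributedSampler only
    when a distributed process group is actually running."""
    distributed = dist.is_available() and dist.is_initialized()
    if distributed:
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[local_rank], output_device=local_rank)
        sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    else:
        sampler = None  # single process: no sharding needed
    loader = DataLoader(
        dataset,
        batch_size=batch_size,
        # DistributedSampler shuffles internally, so only shuffle here
        # when no sampler is set (same pattern as the commented code).
        shuffle=(sampler is None),
        num_workers=1,
        pin_memory=True,
        sampler=sampler)
    return model, loader


# Single-process usage: the model stays unwrapped and the loader
# shuffles normally.
model = torch.nn.Linear(4, 2)
data = TensorDataset(torch.randn(8, 4), torch.randn(8, 2))
model, loader = build_training_objects(model, data, batch_size=2)
```

With this pattern, simply commenting the DDP lines out should not by itself change the model weights that are loaded, only how training is parallelized.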

(Attached sample outputs: ToyCar_0_0_4, ToyCar_0_0_5, ToyCar_0_0_6)

Could you tell me what's going wrong? Thank you.