Qiulin-W/SAFA

Running with --cpu does not work for animation_demo.py

tikitong opened this issue · 5 comments

python animation_demo.py --config config/end2end.yaml --checkpoint ./ckpt/final_3DV.tar --source_image_pth ./assets/EM.jpeg --driving_video_pth ./assets/02.mp4 --relative --adapt_scale --find_best_frame --cpu gives me:

/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
animation_demo.py:32: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
blend_scale:  1
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
creating the FLAME Decoder
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:533: UserWarning: Mtl file does not exist: ./modules/data/template.mtl
  warnings.warn(f"Mtl file does not exist: {f}")
[W NNPACK.cpp:51] Could not initialize NNPACK! Reason: Unsupported hardware.
128it [03:03,  1.48s/it]
Best frame: 120
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/nn/functional.py:4216: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  "Default grid_sample and affine_grid behavior has changed "
Traceback (most recent call last):
  File "animation_demo.py", line 216, in <module>
    relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu)
  File "animation_demo.py", line 83, in make_animation
    driving_initial = driving[:, :, 0].cuda()
  File "/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

It seems to come from the animation_demo.py file: line 83 calls .cuda() on the driving tensor unconditionally, without an if not cpu guard.
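
For instance, I would expect something like the following guard (just a sketch; the traceback shows that cpu=opt.cpu is already passed into make_animation, so the flag is available there):

# Sketch of a possible guard around line 83 of animation_demo.py:
# only move the tensor to the GPU when --cpu is not set.
driving_initial = driving[:, :, 0]
if not cpu:
    driving_initial = driving_initial.cuda()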

How can I modify the file to solve this properly?

See PR #17

Thanks so much for the nice PRs

You are welcome. I am considering using SAFA to generate stimulus videos for one of my scientific studies, hence my interest in your software.

It was not easy to get it working with modern versions of the dependency modules, in particular torch and PyTorch3D. By the way, I could not get SAFA to work with the newest versions of PyTorch (1.12.1) and torchvision (0.13.1). The latest PyTorch3D (0.7.1, recently released) works fine with PyTorch 1.11.0 and torchvision 0.12.0.

At any rate, I have created a fork of SAFA. I submitted most of my changes as PRs to the upstream SAFA repository, but not everything. Please take a look at my forked repository and tell me what you think.
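
For reference, a quick sanity check for the version combination that worked on my machine (just a sketch; the expected values are the ones mentioned above):

# Verify that the installed versions match the combination reported above:
# torch 1.11.0, torchvision 0.12.0, pytorch3d 0.7.1
import torch
import torchvision
import pytorch3d

print(torch.__version__)
print(torchvision.__version__)
print(pytorch3d.__version__)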

Feel free to make any adaptations for the newest version of PyTorch3D. But please do not use SAFA for any commercial purpose.

Sure. As I wrote before, I am looking for face animation software that can produce controlled audiovisual stimuli for a perception study. No commercial purpose involved, only science. And your 3DV 2021 paper will be properly cited if I end up using SAFA in my study.