YvanYin/Metric3D

Running inference on CPU

AD-lite24 opened this issue · 4 comments

Hi, I was wondering if there is any support for CPU inference. The sample script from hubconf.py doesn't run even after removing all the code that moves tensors and models to CUDA, apparently because of some internal line which still expects CUDA:

torch.autocast(device_type='cuda', dtype=torch.bfloat16, enabled=False)

in mono/model/decode_heads/RAFTDepthNormalDPTDecoder5.py

I'm not sure how many more such instances there are, so I wanted to get this clarified. I'm sure it will be difficult to run on CPU, but I'd still like to know whether it's feasible.
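For reference, a device-agnostic version of that call might look something like the sketch below. This is only an illustration: feat and the body of the with-block are placeholders, not the decoder's actual code.

import torch

# Sketch only: choose the autocast device from the tensor instead of hard-coding 'cuda'.
# Keeping dtype=torch.bfloat16 is harmless here because enabled=False means autocast
# never actually casts anything.
feat = torch.rand(1, 64, 32, 32)  # dummy feature map, lives on CPU
device_type = 'cuda' if feat.is_cuda else 'cpu'
with torch.autocast(device_type=device_type, dtype=torch.bfloat16, enabled=False):
    out = feat * 2  # stand-in for the original body of the context manager
print(out.dtype)  # torch.float32, since autocast is disabled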

@AD-lite24 were you able to run it on CPU?

@elvistheyo Nope, as I said, it would take a lot of effort which might end up wasted anyway. Let me know if you choose to try it out, though; I could try to assist you with it if possible.

I think it will be difficult and not beneficial to run inference on CPU. It would take approximately 1.5~4 minutes per inference for the ViT-L model. Additionally, one important acceleration library, xformers, does not support CPU either.
The torch.bfloat16 type is only supported on GPU; on CPU devices, all tensors should use torch.float32.
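Concretely, a float32-only CPU run would look something like the sketch below, assuming every CUDA-specific call (autocast, .cuda(), xformers paths) has been patched out first. The hub entry point and input size follow the hubconf.py sample, but treat the whole snippet as illustrative rather than supported.

import torch

# Sketch of float32 CPU inference, assuming the CUDA-only code paths are removed.
model = torch.hub.load('yvanyin/metric3d', 'metric3d_vit_small', pretrain=True)
model = model.float().eval()  # keep all parameters in torch.float32 on CPU

rgb = torch.rand(1, 3, 616, 1064, dtype=torch.float32)  # dummy pre-processed image
with torch.no_grad():
    pred_depth, confidence, output_dict = model.inference({'input': rgb})
print(pred_depth.shape)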

It may work if you change bfloat16 to float16. See the following comment:

#81