Add Apple MPS support
Hey,

In the file `autodistill_grounding_dino/helpers.py`, I see that it only supports `cuda` and `cpu`. Is it possible to add another check for `mps`? It could look something like the following:
```python
import torch

# Prefer CUDA, fall back to Apple's MPS backend, then to CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")
```
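The resulting `device` could then presumably be passed wherever `helpers.py` currently hands the model its `cuda`/`cpu` device (e.g., the usual `model.to(device)` pattern), though I have not audited every call site.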
In this way, Apple ARM chips can be leveraged for inference. Let me know if this is desirable. If so, I am happy to make a PR.
Hello there! We'd love this change! Want to file a PR?
@capjamesg thanks for the quick reply! I'll draft a PR today 👍
Could you also include a benchmark to see if this improves speed? I have had mixed results with MPS (in some cases it even degrades performance, depending on how many of the model's ops it supports).
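One caveat worth controlling for in the benchmark: when a model hits an op the MPS backend doesn't implement, PyTorch errors out unless CPU fallback is enabled, and the resulting CPU round-trips are a common reason MPS ends up slower overall. A minimal sketch of opting in, using PyTorch's `PYTORCH_ENABLE_MPS_FALLBACK` environment variable (set before `torch` is imported):

```python
import os

# Opt into CPU fallback for ops the MPS backend does not implement;
# conventionally set before importing torch so it is read at startup.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402
```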
@yeldarby yes, I was just doing that.
I'm running the tests on an Apple M1 Max (14-inch, 2021) with 32 GB of RAM. It seems that CPU inference is faster than MPS.
When I train CV models with MPS, I usually see a significant speedup, but that is not the case here.
Yeah, GroundingDINO is probably a particularly perverse case given they wrote some of their own CUDA kernels that can't possibly be supported on MPS out of the box.
I agree. In that case, I think it's better to keep using CPU when running inference on ARM MacBooks.
If you are still interested in a PR, adding the results of your investigation to the README would be valuable for people working with this package!
@capjamesg sorry for the late reply, work caught up with me.
I can add some metrics to the README this weekend. I was thinking of using GroundingDINO to extract bounding boxes of digits from the MNIST dataset; the `mps` and `cpu` runtimes can then easily be compared.
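To make the comparison concrete, here is roughly the harness I have in mind. It is a minimal sketch: `run_inference` is a hypothetical stand-in for the GroundingDINO prediction call over the MNIST images, not the package's actual API.

```python
import time

import torch


def benchmark(run_inference, device: torch.device, n_runs: int = 10) -> float:
    """Return the mean seconds per call of run_inference(device) over n_runs."""
    run_inference(device)  # warm-up: exclude one-off costs such as kernel compilation
    start = time.perf_counter()
    for _ in range(n_runs):
        run_inference(device)
    if device.type == "mps":
        # MPS kernels execute asynchronously; wait for them before stopping the clock.
        torch.mps.synchronize()
    return (time.perf_counter() - start) / n_runs


devices = [torch.device("cpu")]
if torch.backends.mps.is_available():
    devices.append(torch.device("mps"))

# Plug in the real model call for run_inference, then:
# for device in devices:
#     print(device, f"{benchmark(run_inference, device):.3f} s/call")
```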
That would be excellent!