BUG: Failed in nopython mode pipeline (step: native lowering) float16
Closed this issue · 4 comments
amitsaha commented
Issue Description
When trying to use shap with a model that was trained using float16
(mixed precision), I get the following error:
```
File ~/anaconda3/envs/tensorflow-216-python-311/lib/python3.11/site-packages/numba/core/datamodel/models.py:370, in FloatModel.__init__(self, dmm, fe_type)
    368     be_type = ir.DoubleType()
    369 else:
--> 370     raise NotImplementedError(fe_type)
    371 super(FloatModel, self).__init__(dmm, fe_type, be_type)

NotImplementedError: Failed in nopython mode pipeline (step: native lowering)
float16
```
Minimal Reproducible Example
```python
# set the mixed precision policy
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy('mixed_float16')

# adapted from https://shap.readthedocs.io/en/latest/example_notebooks/image_examples/image_classification/Explain%20ResNet50%20using%20the%20Partition%20explainer.html
import json

from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

import shap

# load pre-trained model and data
model = ResNet50(weights="imagenet")
X, y = shap.datasets.imagenet50()

# getting ImageNet 1000 class names
url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
with open(shap.datasets.cache(url)) as file:
    class_names = [v[1] for v in json.load(file).values()]
# print("Number of ImageNet classes:", len(class_names))
# print("Class names:", class_names)

# python function to get model output; replace this function with your own model function.
def f(x):
    tmp = x.copy()
    preprocess_input(tmp)
    return model(tmp)

# define a masker that is used to mask out partitions of the input image.
masker = shap.maskers.Image("inpaint_telea", X[0].shape)

# create an explainer with the model and image masker
explainer = shap.Explainer(f, masker, output_names=class_names)

# here we explain two images using 100 evaluations of the underlying model to estimate the SHAP values
shap_values = explainer(
    X[1:3], max_evals=100, batch_size=50, outputs=shap.Explanation.argsort.flip[:4]
)

# plot the shap values
shap.image_plot(shap_values)
```
Traceback
```
NotImplementedError: Failed in nopython mode pipeline (step: native lowering)
float16
```
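Until a fix lands in shap itself, one user-side workaround is to cast the float16 data back to float32 in the wrapper function before shap's numba-compiled code ever sees it. The sketch below is an assumption, not tested against the exact ResNet50 setup above; `toy_model` is a hypothetical stand-in for a mixed-precision Keras model, which returns float16 outputs under the `mixed_float16` policy:

```python
import numpy as np

# Hypothetical stand-in for a mixed-precision Keras model: under the
# 'mixed_float16' policy, predictions come back as float16.
def toy_model(x):
    return (x * 2.0).astype(np.float16)

# Variant of the issue's `f` wrapper that casts both inputs and outputs
# to float32, so downstream numba-jitted code never receives float16.
def f(x):
    tmp = np.asarray(x, dtype=np.float32).copy()
    out = toy_model(tmp)
    return np.asarray(out, dtype=np.float32)

preds = f(np.array([[1.0, 2.0]], dtype=np.float16))
print(preds.dtype)  # float32
```

The cast is lossless: every float16 value is exactly representable in float32.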
Expected Behavior
shap computes and plots the SHAP values just as it does when the model is not trained with float16 mixed precision.
Bug report checklist
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest release of shap.
- I have confirmed this bug exists on the master branch of shap.
- I'd be interested in making a PR to fix this bug
Installed Versions
0.45.1
CloseChoice commented
Thanks, I can reproduce this issue. I'll come back with my analysis.
CloseChoice commented
Seems like numba's `njit` does not support float16, so the solution is to cast these to float32. I created a PR that fixes your issue. It would be great if you could test it and report your findings back.
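The cast described here can be sketched as a small helper (a sketch of the general idea, not the actual PR code; `ensure_float32` is a hypothetical name). Since every float16 value is exactly representable in float32, upcasting changes only the dtype, never the values:

```python
import numpy as np

def ensure_float32(arr):
    """Upcast float16 arrays to float32 before they reach numba-jitted code.

    float16 has a 10-bit mantissa and float32 a 23-bit one, so the cast
    is exact; other dtypes pass through untouched.
    """
    arr = np.asarray(arr)
    if arr.dtype == np.float16:
        return arr.astype(np.float32)
    return arr

x = np.array([0.5, 1.25, -3.0], dtype=np.float16)
y = ensure_float32(x)
print(y.dtype)  # float32
```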
Thanks
amitsaha commented
Hi @CloseChoice, I installed from your fork + branch and I don't see the error any more, thanks!
CloseChoice commented
thanks for testing and reporting back