Inference does not run
Closed this issue · 4 comments
On the test_requirements
branch, the inference code fails. It seems to fail while loading the model; the full error is below.
(hover) jevjev@jevjev:~/Dropbox/Tia/Hover-net-inference/src$ python infer.py --gpu=0 --mode="roi"
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-04-06 19:29:56.634572: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-04-06 19:29:56.811605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:17:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2020-04-06 19:29:56.811640: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2020-04-06 19:29:57.032537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-06 19:29:57.032577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2020-04-06 19:29:57.032583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2020-04-06 19:29:57.032762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 10405 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0, compute capability: 6.1)
[0406 19:29:57 @sessinit.py:294] Loading dictionary from /home/jevjev/hovernet.npz ...
Traceback (most recent call last):
File "infer.py", line 495, in <module>
infer.run()
File "infer.py", line 109, in run
output_names = self.eval_inf_output_tensor_names)
File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/predict/config.py", line 79, in __init__
self.input_signature = model.get_input_signature()
File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 192, in wrapper
value = func(*args, **kwargs)
File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/train/model_desc.py", line 37, in get_input_signature
inputs = self.inputs()
File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/train/model_desc.py", line 67, in inputs
raise NotImplementedError()
NotImplementedError
The error seems to be coming from tensorpack, although, as you will see in the requirements file, I've specified the latest version.
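For context, this kind of `NotImplementedError` typically appears when a library renames an abstract hook between versions: the newer base class calls the new method name, which the model subclass never overrides because it was written against the old name. A minimal, library-free sketch of that mechanism (the class and method names here are illustrative, not tensorpack's actual API):

```python
class ModelDescBase:
    """Stand-in for a framework base class (illustrative, not tensorpack itself)."""

    def inputs(self):
        # New-style hook; raises unless a subclass overrides it.
        raise NotImplementedError()

    def get_input_signature(self):
        # The framework calls the new hook name internally.
        return self.inputs()


class OldStyleModel(ModelDescBase):
    """A model written against an older API that used a different hook name."""

    def _get_inputs(self):  # never called by the newer base class
        return ["image"]


model = OldStyleModel()
try:
    model.get_input_signature()
except NotImplementedError:
    print("NotImplementedError: the base class called inputs(), "
          "which this model never overrides")
```

Pinning tensorpack to the version the model code was written against avoids the mismatch entirely, which is why the version bump below fixes it.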
Hi Jev, let me change the requirements now. I can tell you straight away that tensorpack needs to be 0.9.0.1, but I will cross-check the other libraries too.
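For reference, the corresponding pin in the requirements file would look like this (version taken from the comment above; other entries unchanged):

```
tensorpack==0.9.0.1
```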
Hey @simongraham, you are editing on master, not on test_requirements. That risks introducing mistakes into master; see this line for example. A pull request and code review would be likely to catch that.
Still fails at inference on roi
In df53734 I've added tensorpack as you specified. However, on a roi it still fails with the following error:
Traceback (most recent call last):
File "infer.py", line 495, in <module>
infer.run()
File "infer.py", line 131, in run
overlaid_output = visualize_instances(pred_inst, pred_type, img)
File "/home/jevjev/Dropbox/Tia/Hover-net-inference/src/misc/viz_utils.py", line 73, in visualize_instances
cv2.drawContours(inst_canvas_crop, contours[1], -1, class_colour(inst_type), 2)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/drawing.cpp:2509: error: (-215:Assertion failed) npoints > 0 in function 'drawContours'
Image to reproduce is here: https://drive.google.com/drive/folders/12H-M4KLMfOIma46Gkhf3PomNvU7Ghrwe?usp=sharing
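This `drawContours` assertion is a common symptom of the `cv2.findContours` return-signature change: OpenCV 3.x returns `(image, contours, hierarchy)`, while 4.x returns `(contours, hierarchy)`, so indexing `contours[1]` under 4.x passes the hierarchy array instead of a contour list. One version-agnostic fix is to unpack the last two elements of the result; a minimal sketch (the helper name is mine, not from the repo):

```python
def unpack_contours(find_contours_result):
    """Normalize cv2.findContours output across OpenCV versions.

    OpenCV 3.x returns (image, contours, hierarchy); 4.x returns
    (contours, hierarchy). The last two elements match either way.
    """
    contours, hierarchy = find_contours_result[-2:]
    return contours, hierarchy


# With OpenCV this would be used as:
#   contours, _ = unpack_contours(
#       cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
#   cv2.drawContours(canvas, contours, -1, colour, 2)

# Simulated return values, to show both shapes unpack identically:
cv3_style = ("image", ["c0", "c1"], "hierarchy")
cv4_style = (["c0", "c1"], "hierarchy")
print(unpack_contours(cv3_style)[0])  # ['c0', 'c1']
print(unpack_contours(cv4_style)[0])  # ['c0', 'c1']
```

The `contours[1]` indexing in viz_utils.py only makes sense with the 3.x return shape, which matches OpenCV 4.2.0 appearing in the error message.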
Great! It seems to run and produce outputs. I haven't checked the output array files, but the overlay looks good.
[0407 11:10:29 @sessinit.py:220] Restoring from dict ...
/home/jevjev/test_roi/ TCGA-OR-A5JX-01Z-00-DX1_Adrenalgland_x29104_y_50887_mag40 TCGA-OR-A5JX-01Z-00-DX1_Adrenalgland_x29104_y_50887_mag40.png
FINISH
Merging #11