[BUG]: File not found when running examples
Opened this issue · 26 comments
Description
I am trying to run the examples_detection_segmentation.ipynb
notebook.
First, for some reason, the example images were not downloaded, and the directory to which ROOT_RESOURCES_EXAMPLES
refers is empty. I manually downloaded the images and changed the ROOT_RESOURCES_EXAMPLES
variable to refer to the downloaded photos.
Now to my actual problem: an error occurs when I run the "detection" step:
```python
SWATCHES = []
for image in COLOUR_CHECKER_IMAGES:
    for colour_checker_data in detect_colour_checkers_inference(
            image, additional_data=True):
        swatch_colours, swatch_masks, colour_checker_image = (
            colour_checker_data.values)
        SWATCHES.append(swatch_colours)

        # Using the additional data to plot the colour checker and masks.
        masks_i = np.zeros(colour_checker_image.shape)
        for i, mask in enumerate(swatch_masks):
            masks_i[mask[0]:mask[1], mask[2]:mask[3], ...] = 1

        colour.plotting.plot_image(
            colour.cctf_encoding(
                np.clip(colour_checker_image + masks_i * 0.25, 0, 1)));
```
Apparently, the results file in a temporary directory is not found:
```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Cell In[10], line 3
      1 SWATCHES = []
      2 for image in COLOUR_CHECKER_IMAGES:
----> 3     for colour_checker_data in detect_colour_checkers_inference(
      4             image, additional_data=True):
      6         swatch_colours, swatch_masks, colour_checker_image = (
      7             colour_checker_data.values)
      8         SWATCHES.append(swatch_colours)

File ~\miniconda3\Lib\site-packages\colour_checker_detection\detection\inference.py:367, in detect_colour_checkers_inference(image, samples, cctf_decoding, apply_cctf_decoding, inferencer, inferencer_kwargs, show, additional_data, **kwargs)
    364 working_width = settings.working_width
    365 working_height = settings.working_height
--> 367 results = inferencer(image, **inferencer_kwargs)
    369 if is_string(image):
    370     image = read_image(cast(str, image))

File ~\miniconda3\Lib\site-packages\colour_checker_detection\detection\inference.py:218, in inferencer_default(image, cctf_encoding, apply_cctf_encoding, show)
    206 output_results = os.path.join(temp_directory, "output-results.npz")
    207 subprocess.call(
    208     [  # noqa: S603
    209         sys.executable,
    (...)
    216     + (["--show"] if show else [])
    217 )
--> 218 results = np.load(output_results, allow_pickle=True)["results"]
    219 finally:
    220     shutil.rmtree(temp_directory)

File ~\miniconda3\Lib\site-packages\numpy\lib\npyio.py:427, in load(file, mmap_mode, allow_pickle, fix_imports, encoding, max_header_size)
    425     own_fid = False
    426 else:
--> 427     fid = stack.enter_context(open(os_fspath(file), "rb"))
    428     own_fid = True
    430 # Code to distinguish from NumPy binary files and pickles.

FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\CYBERT~1\\AppData\\Local\\Temp\\tmpxan6xbhi\\output-results.npz'
```
The images were correctly plotted in the previous part (caption: "Images").
Do you have an idea what I might try?
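As an aside for anyone debugging this: `subprocess.call` discards the child's output, so whatever stopped `output-results.npz` from being written is lost. A minimal sketch of re-running a child Python process with its output captured — the arguments here are placeholders, not the ones `inference.py` actually builds:

```python
import subprocess
import sys

# Stand-in for the argument list that inference.py would build; replace
# with the real script path and options when debugging the actual failure.
child_args = ["-c", "print('inference stand-in')"]

# subprocess.run with capture_output=True keeps stdout/stderr, unlike
# subprocess.call, which lets them vanish in a notebook context.
result = subprocess.run(
    [sys.executable, *child_args],
    capture_output=True,
    text=True,
)

print("return code:", result.returncode)
print("stdout:", result.stdout.strip())
print("stderr:", result.stderr.strip())
```

A non-zero return code or a traceback in `stderr` would point at why the `.npz` file was never written.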
Code for Reproduction
No response
Exception Message
No response
Environment Information
No response
Hello @andieich,
I think we need to improve our docs: the README should emphasise initialising the submodules. They are mentioned, but we do not specifically call out the initialisation step: `git submodule update --init --recursive`.
As for the inference failure, it seems like there is an issue with our script somewhere; it is hard to say more without additional logs. What you could try is the "Faster Inference with Custom Inferencer" section at the end of https://github.com/colour-science/colour-checker-detection/blob/develop/colour_checker_detection/examples/examples_detection_inference.ipynb
This bypasses the need for a subprocessed script if you are not concerned about licensing issues. Worth trying, at least to confirm that YOLOv8 etc. work!
Thanks a lot for your explanations.

- For the git submodules: how would I put them in the correct location after installing `colour-checker-detection` with `pip`?
- I tried the faster inference. To do so, I downloaded the model and saved it in the user folder under `.colour-science\colour-checker-detection`. I adapted the Detection part as follows:
```python
SWATCHES = []
for image in COLOUR_CHECKER_IMAGES:
    for colour_checker_data in detect_colour_checkers_inference(
            image, inferencer=inferencer_agpl, additional_data=True):
        swatch_colours, swatch_masks, colour_checker_image = (
            colour_checker_data.values)
        SWATCHES.append(swatch_colours)

        # Using the additional data to plot the colour checker and masks.
        masks_i = np.zeros(colour_checker_image.shape)
        for i, mask in enumerate(swatch_masks):
            masks_i[mask[0]:mask[1], mask[2]:mask[3], ...] = 1

        colour.plotting.plot_image(
            colour.cctf_encoding(
                np.clip(colour_checker_image + masks_i * 0.25, 0, 1)));
```
- The code runs, but no chart is detected in the two example photos, and `SWATCHES` remains empty:
```
864x1280 (no detections), 772.8ms
Speed: 23.5ms preprocess, 772.8ms inference, 1.0ms postprocess per image at shape (1, 3, 864, 1280)

0: 864x1280 (no detections), 755.2ms
Speed: 5.6ms preprocess, 755.2ms inference, 0.0ms postprocess per image at shape (1, 3, 864, 1280)
```
Could you point me to what I am doing wrong?
There is no good way to get the images when installing with pip; I never really thought about it because I always expected that people would clone the repository or change the examples. The images are quite heavy, and it would be unreasonable to ship them in the PyPI package.
Out of curiosity, which model did you download?
Our tests were still passing as of yesterday: https://github.com/colour-science/colour-checker-detection/actions/runs/8595516611. I'm starting to think that you might not have pulled the right model: https://huggingface.co/colour-science/colour-checker-detection-models/resolve/main/models/colour-checker-detection-l-seg.pt
Thanks for your reply. I redid the examples and it still doesn't work, so I must be doing something wrong. Here is the example notebook I modified to use with the weights you referred to. Do you know what might be the issue?
I continued to play around with the package.
I found your very nice description of how to train a model to segment the colour chart, and did so for the chart I am using (a ColorChecker Classic, laminated for underwater use).
After training, the model works very well. However, it is the same problem: when I use the code for YOLO inference from your examples, I get the warning that nothing was detected, although the YOLO model itself works well.
I tested it, and it is the same for your photos/model: when I use your model on the examples with YOLO directly, the chart is detected; when I try to use it within the colour-checker-detection package, no charts are detected.
Do you think it might have something to do with resizing the images, i.e. that the package expects a different image/chart size?
The model should resize all the input to 1280px, so I don't think this is a resolution issue; I have the feeling it could be related to the way the images are read. Do you have OpenImageIO or Imageio installed?
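One quick check along these lines — a pure-NumPy sketch, independent of which reader is active: inspect the dtype and value range of the array that actually reaches the model, since a float image in [0, 1] and an 8-bit image in [0, 255] plot identically but are very different detector inputs:

```python
import numpy as np

def describe(image):
    """Report dtype and value range of a decoded image."""
    image = np.asarray(image)
    return image.dtype, float(image.min()), float(image.max())

# What a float reader (e.g. colour.read_image) typically yields...
float_image = np.full((2, 2, 3), 0.5, dtype=np.float32)
# ...versus what an 8-bit reader (e.g. OpenCV) yields for the same grey.
uint8_image = np.full((2, 2, 3), 128, dtype=np.uint8)

print(describe(float_image))  # (dtype('float32'), 0.5, 0.5)
print(describe(uint8_image))  # (dtype('uint8'), 128.0, 128.0)
```

Running `describe` on the exact array handed to the model would show immediately whether the two environments decode images into different domains.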
I finally had time to continue with the script. When I use `inferencer_agpl` as described in the examples (`detect_colour_checkers_inference(image, inferencer=inferencer_agpl)`), I get no detections. But when I directly use the same model used within the `inferencer_agpl` function, with the same example images:
```python
model = YOLO(path_to_model)

# Run batched inference on a list of images
results = model(image_paths)  # returns a list of Results objects

# Process results list
for result in results:
    boxes = result.boxes          # Boxes object for bounding box outputs
    masks = result.masks          # Masks object for segmentation mask outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    obb = result.obb              # Oriented boxes object for OBB outputs
    result.show()                 # display to screen
```
the detection works as desired. The model works, but not within `detect_colour_checkers_inference`. Any idea what I might be doing wrong?
The only thing I changed within the `inferencer_agpl` function is that the path to the model is kept in a variable I defined beforehand:
```python
def inferencer_agpl(image, **kwargs):
    model = YOLO(path_to_model)

    data = []

    # NOTE: YOLOv8 expects "BGR" arrays.
    if isinstance(image, np.ndarray):
        colour.plotting.plot_image(colour.cctf_encoding(image));
        image = image[..., ::-1]

    image = image.astype(np.float32)

    # `device=0` for CUDA GPU
    for result in model(image, device="mps"):
        if result.boxes is None:
            continue

        if result.masks is None:
            continue

        data_boxes = result.boxes.data
        data_masks = result.masks.data

        for i in range(data_boxes.shape[0]):
            data.append(
                (
                    data_boxes[i, 4].cpu().numpy(),
                    data_boxes[i, 5].cpu().numpy(),
                    data_masks[i].data.cpu().numpy(),
                )
            )

    return data
```
But this cannot be the issue, since the same path works in `model.predict`.
I'm using Imageio and get the same warning as you in the example script (`Warning: "OpenImageIO" related API features are not available, switching to "Imageio"!`).
And I found out one more thing. When I run just a part of the `inferencer_agpl` function:
```python
for result in model(image):
    if result.boxes is None:
        continue

    if result.masks is None:
        continue

    data_boxes = result.boxes.data
    data_masks = result.masks.data
```
it only works if `image` is a path to an image, not when it is an image read with `colour.cctf_decoding()`. But if I understood your example script correctly, the loaded images are passed to the `inferencer_agpl` function:
```python
print("Custom Inferencer")
for image in COLOUR_CHECKER_IMAGES:
    start = time.perf_counter()
    for colour_checker_data in detect_colour_checkers_inference(
        image, inferencer=inferencer_agpl
    ):
        pass
```
When I manually pass the image paths to `inferencer_agpl`, I get the error `AttributeError: 'str' object has no attribute 'astype'`, referring to `image = image.astype(np.float32)`.
When I comment out the line `image = image.astype(np.float32)`, the charts are detected (`inferencer_agpl(image_paths)`).
But when I use this adapted function here:
```python
for image in image_paths:
    for colour_checker_data in detect_colour_checkers_inference(
            image, inferencer=inferencer_agpl):
        swatch_colours, swatch_masks, colour_checker_image = (
            colour_checker_data.values)
```
`colour_checker_data` is empty. So it seems that I am doing something wrong when passing the results from the YOLO model to the `detect_colour_checkers_inference` function.
Finally, I made more progress. In `inferencer_agpl`, I removed the line `image = image.astype(np.float32)` and the `device` parameter in `for result in model(image):`. Now the examples run, and the model detects the chart in the example images without problems.
Oh wow, this is so weird! What hardware are you running on? Asking because in the example I set `device="mps"` for Metal/macOS, which could actually be the issue!
Yes, very weird. I have a MacBook Pro M1, so it should work. To illustrate what I mean, here is the result when I use your model on the images from your example script.
Since the confidence of the detection for the first image is the same, I think the YOLO part runs well and there is an issue further down...
But this is not the solution to my problem, since I had already deleted the `device` line in the code. I think it has to do with the specifications of the colour chart; I tried to adapt them for my chart, but it has been mainly trial and error, so maybe there is a better way.
I felt this problem belongs in the Discussions part of the repository, so I opened a new discussion there with an example of my chart specifications, my trained model, and my script.
I'm on an M1 also! Would it be possible to do a `pip list` from your virtual environment? I would like to try with the same packages as you.
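A lighter-weight alternative to diffing the full environment listings: dump just the versions of the packages that matter for this thread with `importlib.metadata` (the package list here is my guess at the relevant ones):

```python
from importlib import metadata

# Packages most likely to influence the detection behaviour discussed here.
packages = [
    "numpy",
    "torch",
    "ultralytics",
    "colour-science",
    "colour-checker-detection",
    "imageio",
]

versions = {}
for name in packages:
    try:
        versions[name] = metadata.version(name)
    except metadata.PackageNotFoundError:
        versions[name] = None  # tolerate partially-installed environments

for name, version in versions.items():
    print(f"{name}: {version or 'not installed'}")
```

Pasting the resulting few lines is often enough to spot a mismatch without comparing hundreds of pinned entries.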
Sure, here's the `conda list` output:
# Name Version Build Channel
anyio 4.3.0 pyhd8ed1ab_0 conda-forge
aom 3.8.2 h078ce10_0 conda-forge
appnope 0.1.4 pyhd8ed1ab_0 conda-forge
argon2-cffi 23.1.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py311heffc1b2_4 conda-forge
arrow 1.3.0 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
async-lru 2.0.4 pyhd8ed1ab_0 conda-forge
attrs 23.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.7.17 h382b9c6_2 conda-forge
aws-c-cal 0.6.11 hd34e5fa_0 conda-forge
aws-c-common 0.9.15 h93a5062_0 conda-forge
aws-c-compression 0.2.18 hd34e5fa_3 conda-forge
aws-c-event-stream 0.4.2 h247c08a_8 conda-forge
aws-c-http 0.8.1 hf9e830b_10 conda-forge
aws-c-io 0.14.7 h33d81b3_6 conda-forge
aws-c-mqtt 0.10.3 h5f4abda_4 conda-forge
aws-c-s3 0.5.7 h606a3d2_1 conda-forge
aws-c-sdkutils 0.1.15 hd34e5fa_3 conda-forge
aws-checksums 0.1.18 hd34e5fa_3 conda-forge
aws-crt-cpp 0.26.6 h13f0230_4 conda-forge
aws-sdk-cpp 1.11.267 h134aaec_6 conda-forge
babel 2.14.0 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.12.3 pyha770c72_0 conda-forge
bleach 6.1.0 pyhd8ed1ab_0 conda-forge
blosc 1.21.5 hc338f07_0 conda-forge
brotli 1.1.0 hb547adb_1 conda-forge
brotli-bin 1.1.0 hb547adb_1 conda-forge
brotli-python 1.1.0 py311ha891d26_1 conda-forge
brunsli 0.1 h9f76cd9_0 conda-forge
bzip2 1.0.8 h93a5062_5 conda-forge
c-ares 1.28.1 h93a5062_0 conda-forge
c-blosc2 2.12.0 ha57e6be_0 conda-forge
ca-certificates 2024.2.2 hf0a4a13_0 conda-forge
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
cairo 1.18.0 hd1e100b_0 conda-forge
certifi 2024.2.2 pyhd8ed1ab_0 conda-forge
cffi 1.16.0 py311h4a08483_0 conda-forge
charls 2.4.2 h13dd4ca_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
colour 0.1.5 pyhd8ed1ab_1 conda-forge
colour-checker-detection 0.2.0 pypi_0 pypi
colour-science 0.4.4 pypi_0 pypi
comm 0.2.2 pyhd8ed1ab_0 conda-forge
contourpy 1.2.1 py311hcc98501_0 conda-forge
cycler 0.12.1 pyhd8ed1ab_0 conda-forge
cython 3.0.10 py311h92babd0_0 conda-forge
dav1d 1.2.1 hb547adb_0 conda-forge
debugpy 1.8.1 py311h92babd0_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
exceptiongroup 1.2.0 pyhd8ed1ab_2 conda-forge
executing 2.0.1 pyhd8ed1ab_0 conda-forge
expat 2.6.2 hebf3989_0 conda-forge
ffmpeg 6.1.1 gpl_h4f1e072_108 conda-forge
filelock 3.13.4 pyhd8ed1ab_0 conda-forge
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 h77eed37_1 conda-forge
fontconfig 2.14.2 h82840c6_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.51.0 py311h05b510d_0 conda-forge
fqdn 1.5.1 pyhd8ed1ab_0 conda-forge
freetype 2.12.1 hadb7bae_2 conda-forge
fribidi 1.0.10 h27ca646_0 conda-forge
geos 3.12.1 h965bd2d_0 conda-forge
gettext 0.22.5 h8fbad5d_2 conda-forge
gettext-tools 0.22.5 h8fbad5d_2 conda-forge
gflags 2.2.2 hc88da5d_1004 conda-forge
giflib 5.2.2 h93a5062_0 conda-forge
glog 0.7.0 hc6770e3_0 conda-forge
gmp 6.3.0 hebf3989_1 conda-forge
gnutls 3.7.9 hd26332c_0 conda-forge
graphite2 1.3.13 hebf3989_1003 conda-forge
h11 0.14.0 pyhd8ed1ab_0 conda-forge
h2 4.1.0 pyhd8ed1ab_0 conda-forge
harfbuzz 8.3.0 h8f0ba13_0 conda-forge
hdf5 1.14.3 nompi_h5bb55e9_100 conda-forge
hpack 4.0.0 pyh9f0ad1d_0 conda-forge
httpcore 1.0.5 pyhd8ed1ab_0 conda-forge
httpx 0.27.0 pyhd8ed1ab_0 conda-forge
hyperframe 6.0.1 pyhd8ed1ab_0 conda-forge
icu 73.2 hc8870d7_0 conda-forge
idna 3.7 pyhd8ed1ab_0 conda-forge
imagecodecs 2023.9.18 py311h0b517cc_2 conda-forge
imageio 2.34.0 pyh4b66e23_0 conda-forge
imath 3.1.11 h1059232_0 conda-forge
importlib-metadata 7.1.0 pyha770c72_0 conda-forge
importlib_metadata 7.1.0 hd8ed1ab_0 conda-forge
importlib_resources 6.4.0 pyhd8ed1ab_0 conda-forge
ipykernel 6.29.3 pyh3cd1d5f_0 conda-forge
ipython 8.22.2 pyh707e725_0 conda-forge
isoduration 20.11.0 pyhd8ed1ab_0 conda-forge
jasper 4.2.3 h7c0e182_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.3 pyhd8ed1ab_0 conda-forge
json5 0.9.25 pyhd8ed1ab_0 conda-forge
jsonpointer 2.4 py311h267d04e_3 conda-forge
jsonschema 4.21.1 pyhd8ed1ab_0 conda-forge
jsonschema-specifications 2023.12.1 pyhd8ed1ab_0 conda-forge
jsonschema-with-format-nongpl 4.21.1 pyhd8ed1ab_0 conda-forge
jupyter-lsp 2.2.5 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.1 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.2 py311h267d04e_0 conda-forge
jupyter_events 0.10.0 pyhd8ed1ab_0 conda-forge
jupyter_server 2.14.0 pyhd8ed1ab_0 conda-forge
jupyter_server_terminals 0.5.3 pyhd8ed1ab_0 conda-forge
jupyterlab 4.1.6 pyhd8ed1ab_0 conda-forge
jupyterlab_pygments 0.3.0 pyhd8ed1ab_1 conda-forge
jupyterlab_server 2.26.0 pyhd8ed1ab_0 conda-forge
jxrlib 1.1 h93a5062_3 conda-forge
kiwisolver 1.4.5 py311he4fd1f5_1 conda-forge
krb5 1.21.2 h92f50d5_0 conda-forge
lame 3.100 h1a8c8d9_1003 conda-forge
lazy_loader 0.4 pyhd8ed1ab_0 conda-forge
lcms2 2.16 ha0e7c42_0 conda-forge
lerc 4.0.0 h9a09cb3_0 conda-forge
libabseil 20240116.2 cxx17_hebf3989_0 conda-forge
libaec 1.1.3 hebf3989_0 conda-forge
libarrow 15.0.2 h0fcf22f_2_cpu conda-forge
libarrow-acero 15.0.2 h3f3aa29_2_cpu conda-forge
libarrow-dataset 15.0.2 h3f3aa29_2_cpu conda-forge
libarrow-flight 15.0.2 h224147a_2_cpu conda-forge
libarrow-flight-sql 15.0.2 hb630850_2_cpu conda-forge
libarrow-gandiva 15.0.2 h5fa1bb3_2_cpu conda-forge
libarrow-substrait 15.0.2 hd92e347_2_cpu conda-forge
libasprintf 0.22.5 h8fbad5d_2 conda-forge
libasprintf-devel 0.22.5 h8fbad5d_2 conda-forge
libass 0.17.1 hf7da4fe_1 conda-forge
libavif16 1.0.4 hff135a0_2 conda-forge
libblas 3.9.0 19_osxarm64_openblas conda-forge
libbrotlicommon 1.1.0 hb547adb_1 conda-forge
libbrotlidec 1.1.0 hb547adb_1 conda-forge
libbrotlienc 1.1.0 hb547adb_1 conda-forge
libcblas 3.9.0 19_osxarm64_openblas conda-forge
libcrc32c 1.1.2 hbdafb3b_0 conda-forge
libcurl 8.7.1 h2d989ff_0 conda-forge
libcxx 16.0.6 h4653b0c_0 conda-forge
libdeflate 1.19 hb547adb_0 conda-forge
libedit 3.1.20191231 hc8eb9b7_2 conda-forge
libev 4.33 h93a5062_2 conda-forge
libevent 2.1.12 h2757513_1 conda-forge
libexpat 2.6.2 hebf3989_0 conda-forge
libffi 3.4.2 h3422bc3_5 conda-forge
libgettextpo 0.22.5 h8fbad5d_2 conda-forge
libgettextpo-devel 0.22.5 h8fbad5d_2 conda-forge
libgfortran 5.0.0 13_2_0_hd922786_3 conda-forge
libgfortran5 13.2.0 hf226fd6_3 conda-forge
libglib 2.80.0 hfc324ee_5 conda-forge
libgoogle-cloud 2.22.0 hbebe991_1 conda-forge
libgoogle-cloud-storage 2.22.0 h8a76758_1 conda-forge
libgrpc 1.62.2 h9c18a4f_0 conda-forge
libhwloc 2.10.0 default_h52d8fe8_1000 conda-forge
libiconv 1.17 h0d3ecfb_2 conda-forge
libidn2 2.3.7 h93a5062_0 conda-forge
libintl 0.22.5 h8fbad5d_2 conda-forge
libintl-devel 0.22.5 h8fbad5d_2 conda-forge
libjpeg-turbo 3.0.0 hb547adb_1 conda-forge
liblapack 3.9.0 19_osxarm64_openblas conda-forge
liblapacke 3.9.0 19_osxarm64_openblas conda-forge
libllvm16 16.0.6 haab561b_3 conda-forge
libnghttp2 1.58.0 ha4dd798_1 conda-forge
libopenblas 0.3.24 openmp_hd76b1f2_0 conda-forge
libopencv 4.9.0 headless_py311h18d748c_12 conda-forge
libopenvino 2024.0.0 he6dadac_4 conda-forge
libopenvino-arm-cpu-plugin 2024.0.0 he6dadac_4 conda-forge
libopenvino-auto-batch-plugin 2024.0.0 hc9f00d9_4 conda-forge
libopenvino-auto-plugin 2024.0.0 hc9f00d9_4 conda-forge
libopenvino-hetero-plugin 2024.0.0 hf483cef_4 conda-forge
libopenvino-ir-frontend 2024.0.0 hf483cef_4 conda-forge
libopenvino-onnx-frontend 2024.0.0 h298fcef_4 conda-forge
libopenvino-paddle-frontend 2024.0.0 h298fcef_4 conda-forge
libopenvino-pytorch-frontend 2024.0.0 hebf3989_4 conda-forge
libopenvino-tensorflow-frontend 2024.0.0 h356fca3_4 conda-forge
libopenvino-tensorflow-lite-frontend 2024.0.0 hebf3989_4 conda-forge
libopus 1.3.1 h27ca646_1 conda-forge
libparquet 15.0.2 h5304c63_2_cpu conda-forge
libpng 1.6.43 h091b4b1_0 conda-forge
libprotobuf 4.25.3 hbfab5d5_0 conda-forge
libre2-11 2023.09.01 h7b2c953_2 conda-forge
libsodium 1.0.18 h27ca646_1 conda-forge
libsqlite 3.45.3 h091b4b1_0 conda-forge
libssh2 1.11.0 h7a5bd25_0 conda-forge
libtasn1 4.19.0 h1a8c8d9_0 conda-forge
libthrift 0.19.0 h026a170_1 conda-forge
libtiff 4.6.0 ha8a6c65_2 conda-forge
libunistring 0.9.10 h3422bc3_0 conda-forge
libutf8proc 2.8.0 h1a8c8d9_0 conda-forge
libvpx 1.14.0 h078ce10_0 conda-forge
libwebp-base 1.4.0 h93a5062_0 conda-forge
libxcb 1.15 hf346824_0 conda-forge
libxml2 2.12.6 h0d0cfa8_1 conda-forge
libzlib 1.2.13 h53f4e23_5 conda-forge
libzopfli 1.0.3 h9f76cd9_0 conda-forge
llvm-openmp 15.0.7 h7cfbb63_0 conda-forge
lz4-c 1.9.4 hb7217d7_0 conda-forge
markupsafe 2.1.5 py311h05b510d_0 conda-forge
matplotlib 3.8.4 py311ha1ab1f8_0 conda-forge
matplotlib-base 3.8.4 py311hb58f1d1_0 conda-forge
matplotlib-inline 0.1.7 pyhd8ed1ab_0 conda-forge
mistune 3.0.2 pyhd8ed1ab_0 conda-forge
mpmath 1.3.0 pyhd8ed1ab_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
nbclient 0.10.0 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.16.3 pyhd8ed1ab_1 conda-forge
nbformat 5.10.4 pyhd8ed1ab_0 conda-forge
ncurses 6.4.20240210 h078ce10_0 conda-forge
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
nettle 3.9.1 h40ed0f5_0 conda-forge
networkx 3.3 pyhd8ed1ab_1 conda-forge
notebook-shim 0.2.4 pyhd8ed1ab_0 conda-forge
numpy 1.26.4 py311h7125741_0 conda-forge
opencv 4.9.0 headless_py311h5151cf2_12 conda-forge
openexr 3.2.2 h2c51e1d_1 conda-forge
openh264 2.4.1 hebf3989_0 conda-forge
openjpeg 2.5.2 h9f1df11_0 conda-forge
openssl 3.3.0 h0d3ecfb_0 conda-forge
orc 2.0.0 h3d3088e_0 conda-forge
overrides 7.7.0 pyhd8ed1ab_0 conda-forge
p11-kit 0.24.1 h29577a5_0 conda-forge
packaging 24.0 pyhd8ed1ab_0 conda-forge
pandas 2.2.2 py311hfbe21a1_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.4 pyhd8ed1ab_0 conda-forge
patsy 0.5.6 pyhd8ed1ab_0 conda-forge
pcre2 10.43 h26f9a81_0 conda-forge
pexpect 4.9.0 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.3.0 py311h0b5d0a1_0 conda-forge
pip 24.0 pyhd8ed1ab_0 conda-forge
pixman 0.43.4 hebf3989_0 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_1 conda-forge
platformdirs 4.2.0 pyhd8ed1ab_0 conda-forge
prometheus_client 0.20.0 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.42 pyha770c72_0 conda-forge
psutil 5.9.8 py311h05b510d_0 conda-forge
pthread-stubs 0.4 h27ca646_1001 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pugixml 1.14 h13dd4ca_0 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
py-cpuinfo 9.0.0 pyhd8ed1ab_0 conda-forge
py-opencv 4.9.0 headless_py311h7e6d3fa_12 conda-forge
pyarrow 15.0.2 py311h3003323_2_cpu conda-forge
pycocotools 2.0.6 py311h4add359_1 conda-forge
pycparser 2.22 pyhd8ed1ab_0 conda-forge
pygments 2.17.2 pyhd8ed1ab_0 conda-forge
pyobjc-core 10.2 py311h665608e_0 conda-forge
pyobjc-framework-cocoa 10.2 py311h665608e_0 conda-forge
pyparsing 3.1.2 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.9 h932a869_0_cpython conda-forge
python-dateutil 2.9.0 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.19.1 pyhd8ed1ab_0 conda-forge
python-json-logger 2.0.7 pyhd8ed1ab_0 conda-forge
python-tzdata 2024.1 pyhd8ed1ab_0 conda-forge
python_abi 3.11 4_cp311 conda-forge
pytorch 2.3.0 py3.11_0 pytorch
pytz 2024.1 pyhd8ed1ab_0 conda-forge
pywavelets 1.4.1 py311hb49d859_1 conda-forge
pyyaml 6.0.1 py311heffc1b2_1 conda-forge
pyzmq 26.0.2 py311h93cf3d9_0 conda-forge
rav1e 0.6.6 h69fbcac_2 conda-forge
re2 2023.09.01 h4cba328_2 conda-forge
readline 8.2 h92ec313_1 conda-forge
referencing 0.34.0 pyhd8ed1ab_0 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
rfc3339-validator 0.1.4 pyhd8ed1ab_0 conda-forge
rfc3986-validator 0.1.1 pyh9f0ad1d_0 conda-forge
rpds-py 0.18.0 py311ha958965_0 conda-forge
scikit-image 0.22.0 py311h6e08293_2 conda-forge
scipy 1.13.0 py311h4f9446f_0 conda-forge
seaborn 0.13.2 hd8ed1ab_0 conda-forge
seaborn-base 0.13.2 pyhd8ed1ab_0 conda-forge
send2trash 1.8.3 pyh31c8845_0 conda-forge
setuptools 69.5.1 pyhd8ed1ab_0 conda-forge
shapely 2.0.4 py311h0815064_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.10 hd04f947_1 conda-forge
sniffio 1.3.1 pyhd8ed1ab_0 conda-forge
soupsieve 2.5 pyhd8ed1ab_1 conda-forge
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
statsmodels 0.14.1 py311h9ea6feb_0 conda-forge
svt-av1 2.0.0 h078ce10_0 conda-forge
sympy 1.12 pyh04b8f61_3 conda-forge
tbb 2021.12.0 h2ffa867_0 conda-forge
terminado 0.18.1 pyh31c8845_0 conda-forge
tifffile 2024.5.3 pyhd8ed1ab_0 conda-forge
tinycss2 1.2.1 pyhd8ed1ab_0 conda-forge
tk 8.6.13 h5083fa2_1 conda-forge
tomli 2.0.1 pyhd8ed1ab_0 conda-forge
torchvision 0.18.0 py311_cpu pytorch
tornado 6.4 py311h05b510d_0 conda-forge
tqdm 4.66.2 pyhd8ed1ab_0 conda-forge
traitlets 5.14.2 pyhd8ed1ab_0 conda-forge
types-python-dateutil 2.9.0.20240316 pyhd8ed1ab_0 conda-forge
typing-extensions 4.11.0 hd8ed1ab_0 conda-forge
typing_extensions 4.11.0 pyha770c72_0 conda-forge
typing_utils 0.1.0 pyhd8ed1ab_0 conda-forge
tzdata 2024a h0c530f3_0 conda-forge
ultralytics 8.1.47 pyh2965483_0 conda-forge
uri-template 1.3.0 pyhd8ed1ab_0 conda-forge
urllib3 2.2.1 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
webcolors 1.13 pyhd8ed1ab_0 conda-forge
webencodings 0.5.1 pyhd8ed1ab_2 conda-forge
websocket-client 1.7.0 pyhd8ed1ab_0 conda-forge
wheel 0.43.0 pyhd8ed1ab_1 conda-forge
x264 1!164.3095 h57fd34a_2 conda-forge
x265 3.5 hbc6ce65_3 conda-forge
xorg-libxau 1.0.11 hb547adb_0 conda-forge
xorg-libxdmcp 1.1.3 h27ca646_0 conda-forge
xz 5.2.6 h57fd34a_0 conda-forge
yaml 0.2.5 h3422bc3_2 conda-forge
zeromq 4.3.5 hebf3989_1 conda-forge
zfp 1.0.0 h82938aa_4 conda-forge
zipp 3.17.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h53f4e23_5 conda-forge
zlib-ng 2.0.7 h1a8c8d9_0 conda-forge
zstd 1.5.5 h4f39d0f_0 conda-forge
One more point on that: I just tested the plain YOLO model:

```python
model_chart = YOLO(path_to_model)
results = model_chart(path_to_images, save=True)
```

Like that, the detection and segmentation of the chart work very well; here is an example image:
However, when I use

```python
results = model_chart(path_to_images, save=True)
```

the detection still works (same probability), but the segmentation is off. Here's the result for the same image:
The model was trained on another computer with CUDA.
But this is just a side note to my actual problem: sometimes the chart is not detected correctly by your package, although the YOLO model detects it quite well. I think it has to do with the specifications of my chart; could you help me with this? Thanks a lot! I put an example in the Discussions.
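A hedged guess about the off-looking segmentation: YOLOv8 returns instance masks at the network's working resolution, not at the source resolution, so they must be resized back before overlaying. A NumPy-only sketch of an integer-factor nearest-neighbour upscale (for arbitrary factors, `cv2.resize` with `INTER_NEAREST` is the usual tool):

```python
import numpy as np

# A toy low-resolution binary mask, as a stand-in for result.masks.data.
mask_lowres = np.array(
    [
        [0, 1],
        [1, 0],
    ],
    dtype=np.uint8,
)

# Nearest-neighbour upscale by an integer factor: each mask cell becomes
# a factor x factor block, preserving alignment with the source image.
factor = 3
mask_fullres = np.kron(mask_lowres, np.ones((factor, factor), dtype=np.uint8))

print(mask_fullres.shape)  # (6, 6)
```

If the mask is resized to the wrong shape, or width and height are swapped, the overlay looks exactly like a correct detection with a misplaced segmentation.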
Would it please be possible, if you don't mind, to install via `pip` into a temporary virtual environment as a test, using the following `requirements.txt` file?
accessible-pygments==0.0.4
alabaster==0.7.16
anyio==4.2.0
appnope==0.1.3
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.1
async-lru==2.0.4
attrs==23.2.0
Babel==2.14.0
beautifulsoup4==4.12.2
biblib-simple==0.1.2
bleach==6.1.0
certifi==2023.11.17
cffi==1.16.0
cfgv==3.4.0
charset-normalizer==3.3.2
click==8.1.7
colour-science==0.4.4
comm==0.2.1
contourpy==1.2.0
coverage==7.4.0
coveralls==1.8.0
cycler==0.12.1
debugpy==1.8.0
decorator==5.1.1
defusedxml==0.7.1
distlib==0.3.8
docopt==0.6.2
docutils==0.20.1
execnet==2.0.2
executing==2.0.1
fastjsonschema==2.19.1
filelock==3.13.1
fonttools==4.47.0
fqdn==1.5.1
fsspec==2023.12.2
hub-sdk==0.0.2
identify==2.5.33
idna==3.6
imageio==2.33.1
imagesize==1.4.1
importlib-metadata==7.0.1
iniconfig==2.0.0
invoke==2.2.0
ipykernel==6.28.0
ipython==8.18.1
ipywidgets==8.1.1
isoduration==20.11.0
jaraco.classes==3.3.0
jedi==0.19.1
Jinja2==3.1.3
json5==0.9.14
jsonpointer==2.4
jsonschema==4.20.0
jsonschema-specifications==2023.12.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.9.0
jupyter-lsp==2.2.1
jupyter_client==8.6.0
jupyter_core==5.7.1
jupyter_server==2.12.3
jupyter_server_terminals==0.5.1
jupyterlab==4.0.10
jupyterlab-widgets==3.0.9
jupyterlab_pygments==0.3.0
jupyterlab_server==2.25.2
keyring==24.3.0
kiwisolver==1.4.5
latexcodec==2.0.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.8.2
matplotlib-inline==0.1.6
mdurl==0.1.2
mistune==3.0.2
more-itertools==10.2.0
mpmath==1.3.0
nbclient==0.9.0
nbconvert==7.14.0
nbformat==5.9.2
nest-asyncio==1.5.8
networkx==3.2.1
nh3==0.2.15
nodeenv==1.8.0
notebook==7.0.6
notebook_shim==0.2.3
numpy==1.26.3
opencv-python==4.9.0.80
overrides==7.4.0
packaging==23.2
pandas==2.1.4
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.9.0
pillow==10.2.0
pkginfo==1.9.6
platformdirs==4.1.0
pluggy==1.3.0
pre-commit==3.6.0
prometheus-client==0.19.0
prompt-toolkit==3.0.43
psutil==5.9.7
ptyprocess==0.7.0
pure-eval==0.2.2
py-cpuinfo==9.0.0
pybtex==0.24.0
pybtex-docutils==1.0.3
pycparser==2.21
pydata-sphinx-theme==0.15.1
Pygments==2.17.2
pyparsing==3.1.1
pyright==1.1.345
pytest==7.4.4
pytest-cov==4.1.0
pytest-xdist==3.5.0
python-dateutil==2.8.2
python-json-logger==2.0.7
pytz==2023.3.post1
PyYAML==6.0.1
pyzmq==25.1.2
qtconsole==5.5.1
QtPy==2.4.1
readme-renderer==42.0
referencing==0.32.1
requests==2.31.0
requests-toolbelt==1.0.0
restructuredtext-lint==1.4.0
rfc3339-validator==0.1.4
rfc3986==2.0.0
rfc3986-validator==0.1.1
rich==13.7.0
rpds-py==0.16.2
scipy==1.11.4
seaborn==0.13.1
Send2Trash==1.8.2
six==1.16.0
sniffio==1.3.0
snowballstemmer==2.2.0
soupsieve==2.5
Sphinx==7.2.6
sphinxcontrib-applehelp==1.0.7
sphinxcontrib-bibtex==2.6.2
sphinxcontrib-devhelp==1.0.5
sphinxcontrib-htmlhelp==2.0.4
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.6
sphinxcontrib-serializinghtml==1.1.9
stack-data==0.6.3
sympy==1.12
terminado==0.18.0
thop==0.1.1.post2209072238
tinycss2==1.2.1
toml==0.10.2
torch==2.1.2
torchvision==0.16.2
tornado==6.4
tqdm==4.66.1
traitlets==5.14.1
twine==4.0.2
types-python-dateutil==2.8.19.20240106
typing_extensions==4.9.0
tzdata==2023.4
ultralytics==8.1.0
uri-template==1.3.0
urllib3==2.1.0
virtualenv==20.25.0
wcwidth==0.2.13
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
widgetsnbextension==4.0.9
zipp==3.17.0
Just so that we have the same stack.
OK, I will do that. Which Python version do you use? And do you run the terminal in Rosetta mode?
Nope, arm64 all the way!
@KelSolaar: OK, and which Python version?
Python 3.11!
I installed everything like you (and additionally `colour-checker-detection`).
If I just run the YOLO model using `device="mps"`, it works (detection and segmentation), but if I use your code to extract the swatches from your example photos, it doesn't; the position of the swatches is off. I think you can see it in this notebook. Using only the CPU, it works well (at least for your examples; with my images and my model there are still problems, I think mainly because `colour-checker-detection` can't get the orientation right).
Thank you! Would you happen to have the images too, so that I can test on exactly the same data?
Actually, scratch that, I managed to repro, hang on tight.
I have updated the example notebook. Two takeaways: the `mps` device does not work anymore (I'm not sure why), and the bit depth should be 8-bit in the inferencer.
The code is now as follows:
```python
def inferencer_agpl(image, **kwargs):
    model = YOLO(
        os.path.join(
            os.path.expanduser("~"),
            ".colour-science",
            "colour-checker-detection",
            "colour-checker-detection-l-seg.pt",
        ),
    )

    data = []

    # NOTE: YOLOv8 expects "BGR" arrays.
    if isinstance(image, np.ndarray):
        image = image[..., ::-1]
        image = colour.io.convert_bit_depth(image, np.uint8.__name__)

    # `device=0` for CUDA GPU
    for result in model(image):
        if result.boxes is None:
            continue

        if result.masks is None:
            continue

        data_boxes = result.boxes.data
        data_masks = result.masks.data

        for i in range(data_boxes.shape[0]):
            data.append(
                (
                    data_boxes[i, 4].cpu().numpy(),
                    data_boxes[i, 5].cpu().numpy(),
                    data_masks[i].data.cpu().numpy(),
                )
            )

    return data
```
I read the image directly beforehand using the `colour.read_image` definition.
Let me know how it goes!
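For reference, a NumPy-only approximation of what the `colour.io.convert_bit_depth(image, "uint8")` step above does for a float image assumed to lie in [0, 1] — this is a sketch, not the library's implementation, and colour's exact rounding may differ slightly:

```python
import numpy as np

def float_to_uint8(image):
    """Approximate a [0, 1] float -> uint8 bit-depth conversion."""
    return np.uint8(np.around(np.clip(image, 0, 1) * 255))

# 1.2 simulates a slightly out-of-range value that clipping handles.
image = np.array([0.0, 0.5, 1.0, 1.2])

print(float_to_uint8(image))  # [  0 128 255 255]
```

This rescaling is the crux of the fix: without it, a [0, 1] float array cast to an 8-bit interpretation carries almost no usable signal for the detector.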
Thanks! Yes, sorry, I didn't explain: I used your two example images.
I tried out the updated code. In addition to the changes you described, the for loop to detect the chart in each image was previously:
```python
swatch_colours, swatch_masks, colour_checker_image = (
    colour_checker_data.values)
```
Now it is:
```python
swatch_colours, swatch_masks, colour_checker_image, quadrilateral = (
    colour_checker_data.values)
```
The addition of `quadrilateral` leads to the error `not enough values to unpack (expected 4, got 3)`.
I think this might have to do with the version of `colour-checker-detection`: I use `0.2.0`, you used `v0.1.2-235-g0bc0fea`.
I also tried the updated code (without the `quadrilateral` part) with my model, but it didn't improve the detection, I think because `detect_colour_checkers_inference()` doesn't get the orientation right. Do you have a suggestion for adapting `SETTINGS_INFERENCE_COLORCHECKER_CLASSIC`? Thanks!
Let's try to get on the same baseline: would it be possible to use the latest `develop` branch, so that we reduce the number of variables?
I took your notebook and reduced it to something that works for me with the aforementioned fixes: test_mps_detection.ipynb.zip
Let's try to get this one working!