Crashing Deepstack Processing?
mynameisdaniel32 opened this issue · 14 comments
A bit of a weird one, I've broken my previously working setup, and haven't been able to get it working again.
I've gone from 200ms processing to pretty much timing out; the Docker console seems to indicate requests are taking between 5 and 15 minutes to process (but I'm not sure it's actually doing anything at that point). I'm also getting the same issue using AI Tool to talk to Deepstack separately (see below).
The biggest change I made was pointing the snapshot path at a mounted share from my NAS. That side of things seems to work for everything else (I can access it via the HA media browser and camera entities), but I suspect it's causing issues with this integration.
I did also update to the latest version of this integration and the 2021.1.0b0 HA version recently, but I 'think' it was working briefly after that, although I'm not 100% sure.
So I tried reverting to the old Home Assistant file locations under config, which didn't help, then deleted the Deepstack Docker container and recreated it, with no change. What I did notice is that if I delete the Docker container, recreate it, and use AI Tool before HA tries anything, I'm back to 200ms processing and it keeps working, UNTIL I run a request from HA, after which neither AI Tool nor HA can get anything processed until I restart the Docker container.
That's where I'm up to, any ideas?
Well, it does sound like something odd is going on, either with accessing the images or with the actual images themselves. Please put the logger in debug mode and try to find some relevant error messages.
I had a go at getting logs with debug, but all I'm getting is:
2021-01-01 22:47:16 WARNING (MainThread) [homeassistant.helpers.entity] Update of image_processing.deepstack_object_front_door is taking over 10 seconds
2021-01-01 22:47:17 ERROR (SyncWorker_5) [custom_components.deepstack_object.image_processing] Deepstack error : Timeout connecting to Deepstack, the current timeout is 10 seconds, try increasing this value
Should I be looking somewhere else? My HA config is:
logger:
  default: warn
  logs:
    custom_components.deepstack_object.image_processing: debug
And in Docker I get this, the first request being from AI Tool with a test image, the second an attempt via HA using a camera.
---------------------------------------
---------------------------------------
v1/backup
---------------------------------------
v1/restore
[GIN] 2021/01/01 - 19:16:27 | 200 | 264.6204ms | 172.17.0.1 | POST /v1/vision/detection
[GIN] 2021/01/01 - 22:51:10 | 200 | 4m5s | 172.17.0.1 | POST /v1/vision/detection
It is a timeout error; increase the timeout to 30 seconds, but if that doesn't solve it I'm out of ideas and we should take this issue to the Deepstack repo.
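For reference, bumping it looks roughly like this in the platform config (a sketch assuming the integration's timeout option that the error message refers to, with the value in seconds):
- platform: deepstack_object
  ip_address: 10.1.0.135
  port: 5000
  timeout: 30   # the error message suggests the default is 10 seconds
  source:
    - entity_id: camera.front_door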
Just tried 30 seconds, same thing. It looks like it might work if I set it to 10 minutes.
Thanks, I'll have a look at raising it over there tomorrow, do you know if there's any debug logging I can look for on the Deepstack side of things?
Playing around with this a little further, I have tried pointing different cameras at Deepstack and it's still taking 4-10 minutes to process them. I even tried setting up Deepstack on another device and am getting the same problem.
Are the images cached from the Home Assistant side of things? Just wondering if it's trying to send a corrupt image file or something like that which I can clear.
We need to determine if this is related to the HA integration or Deepstack. Can you pull some images from your camera and see if you can reproduce the issue by posting the images directly to Deepstack using curl?
Just tried that using some images downloaded using the camera.snapshot service (from the same camera). They process fine (250ms).
Setting that same image as a Local File camera in Home Assistant and running the image processing on it is back to 5+ minutes.
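For reference, the two tests looked roughly like this (the file path and the Local File camera name are just placeholders for what I used):
# Pulling a snapshot from the camera (Developer Tools > Services)
service: camera.snapshot
data:
  entity_id: camera.front_door
  filename: /config/snapshots/front_door_test.jpg

# Re-serving that same file as a Local File camera for the scan test
camera:
  - platform: local_file
    name: front_door_snapshot_test
    file_path: /config/snapshots/front_door_test.jpg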
My guess is it's a periodically corrupted image; saving via snapshot might be transforming it. Another guess is that there is something funny about the way you are triggering the scan. Can you share your config and the image?
Is there some way to save/see the image that is sent via this integration? I've attached the image I was using above, which was saved via the snapshot service.
As for my config, it looks like this; the current save_file_folder is the one that was working previously, and the commented-out one is my remote folder. I have also tried with save_timestamped_file: false, with no improvement.
- platform: deepstack_object
  ip_address: 10.1.0.135
  port: 5000
  save_file_folder: /home/homeassistant/.homeassistant/snapshots
  #save_file_folder: /media/server-nvr/snapshots
  save_timestamped_file: true
  source:
    - entity_id: camera.front_door
  targets:
    - person
    #- car
    #- truck
    #- motorcycle
    #- bicycle
    #- cat
  confidence: 0
I'm using the image_processing.scan service via developer tools, but was previously using this:
- alias: Image Process - Front Door
  trigger:
    - platform: state
      entity_id: binary_sensor.motion_front_door_camera
      to: 'on'
  mode: single
  action:
    - repeat:
        sequence:
          - service: image_processing.scan
            data:
              entity_id: image_processing.deepstack_object_front_door
          - delay:
              milliseconds: 500
        until:
          - condition: state
            entity_id: binary_sensor.motion_front_door_camera
            state: 'off'
We have an issue on the Deepstack repo to create a logger so we can see all images, responses, etc., but it has not been started yet. I might begin on this soon.
It looks like a good quality, high-resolution image. Another guess is that the large size of the image might be overloading Deepstack somehow. I think you can use the proxy camera to downsize it, so that is worth a try (see the sketch below). Also, can you try posting the full high-res image via curl?
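Something along these lines is what I have in mind for the proxy camera (a rough sketch; max_image_width and image_quality are the options I would reach for, and you would then point the deepstack_object source at the proxy entity instead of the original camera):
camera:
  - platform: proxy
    name: front_door_proxy
    entity_id: camera.front_door
    max_image_width: 1280
    image_quality: 80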
Yeah that would definitely be helpful!
I've just been having a play from a different angle, running Wireshark on the machine running DeepStack. I've been able to capture the packets sent from Home Assistant and from AI Tool (which works, and uses the same API?). Interestingly, the JPEG data coming from this integration is already smaller in size, but it doesn't appear to be corrupt; if I save the raw data to a JPEG and then run it through AI Tool it works fine (220ms). Below is the image file from the HA integration.
Comparing the HTTP packets below, left = working from AI Tool, right = times out from HA:
There are a few differences throughout; I notice the HA one (right) doesn't have a file extension in the file name?
I just noticed you set confidence: 0, and I remember there is a Deepstack bug for that value. Please increase it and try again.
Thank you! Just changed from 0 to 1 and it's working!
Weird... I hadn't changed that part of the config in ages and was using 0 in AI Tool too.
The deepstack-python dependency changed its behaviour: previously only confidences above 0.45 were allowed, although this was not clear.
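For anyone else hitting this, the change that resolved it was simply raising the confidence above zero in the deepstack_object platform config:
- platform: deepstack_object
  ip_address: 10.1.0.135
  port: 5000
  source:
    - entity_id: camera.front_door
  targets:
    - person
  confidence: 1   # was 0, which hits the Deepstack bug mentioned above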