VorlonCD/bi-aidetection

AI Tool crashed with a System Error warning about a buffer overrun

Opened this issue · 6 comments

I found AI Tool was force-closed today with the following error displayed:

"The system detected an overrun of a stack-based buffer in this application. This overrun could potentially allow a malicious user to gain control of this application."

[screenshot]

I have seen instances of the RAM usage constantly increasing until it locks up the system, though this is rare and can take months. I was waiting for it to happen again so I could post an example, but this is the first time I have seen this kind of error, and I do not know if the two are related. The program reopened and is functioning normally so far.

I've never seen the message above. Maybe unrelated, but I would force a Windows Update and reboot if you have not recently done so.

Also, I posted a new version 3 weeks ago with a bunch of changes, including updates to two third-party NuGet packages that fixed a few security vulnerabilities, so you might want to update:
https://github.com/VorlonCD/bi-aidetection/tree/master/src/UI/Installer

You might find more detail related to the crash if you go to Start > Event Viewer > Application and look for anything related near the time of the error.
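Once you have the events exported (Event Viewer lets you save the Application log as CSV), the useful trick is filtering to entries close to the crash time. A minimal sketch of that filtering step, assuming records have already been parsed into `(timestamp, source, message)` tuples (the record shape and sample contents here are illustrative, not the actual log format):

```python
from datetime import datetime, timedelta

def events_near(records, crash_time, window_minutes=10):
    """Return records logged within +/- window_minutes of crash_time.

    `records` is a list of (timestamp, source, message) tuples, e.g.
    parsed from an Event Viewer CSV export. Purely illustrative.
    """
    window = timedelta(minutes=window_minutes)
    return [r for r in records if abs(r[0] - crash_time) <= window]

records = [
    (datetime(2023, 5, 1, 14, 2), "Application Error", "AITOOL.exe crashed"),
    (datetime(2023, 5, 1, 9, 0), ".NET Runtime", "unrelated warning"),
]
crash = datetime(2023, 5, 1, 14, 5)
print(events_near(records, crash))  # only the 14:02 Application Error entry
```

Sources worth looking for near the timestamp are "Application Error", ".NET Runtime", and "Windows Error Reporting".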

But it is a good point about the RAM usage constantly increasing. Mine had been running for about a week and was at 500 MB RAM usage (private bytes). When I restarted it was 147 MB. Let me see if I can figure out why it creeps up.

Still nothing like blueiris @ 5000 mb or firefox at 15000 mb :)
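Distinguishing a genuine leak from normal churn mostly comes down to whether the readings climb steadily without ever coming back down. A rough heuristic sketch (the sample numbers echo the 147 MB → ~500 MB creep described above; how you collect the readings — Task Manager, perfmon, a script — is up to you):

```python
def looks_like_leak(samples_mb, min_growth_mb=100, drop_tolerance=0.1):
    """Flag a possible leak when memory grows steadily.

    `samples_mb` is a series of periodic private-bytes readings in MB.
    Growth counts as "steady" if the total gain exceeds min_growth_mb
    and almost no step decreases (a small fraction of downward steps is
    tolerated, since GC can reclaim a little between readings).
    """
    if len(samples_mb) < 2:
        return False
    total_growth = samples_mb[-1] - samples_mb[0]
    drops = sum(1 for a, b in zip(samples_mb, samples_mb[1:]) if b < a)
    return total_growth >= min_growth_mb and drops <= drop_tolerance * len(samples_mb)

# A week of daily readings creeping from 147 MB toward 500 MB:
print(looks_like_leak([147, 190, 250, 310, 360, 420, 500]))  # prints True
```

A process that bounces up and down around a plateau (e.g. 147 → 151 → 148) would not trip this check; only monotonic-ish growth does.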

I am sorry for the long delay in getting back! I noticed some slowdown in UI3 so I checked my system and saw this.

[screenshot]

I see a newer version was posted 2 weeks ago; I'll install that now and monitor it. It may take a few weeks to see if it happens again, as the last time I saw it was when I posted this issue initially.

Good Evening,

I've been running v2.6.54.8856 for a while now. The issue of memory usage slowly increasing until a crash seems to be mostly gone, but I am still concerned about the high memory usage.

[screenshot]

This screenshot is after about 23 days of uptime. I check on it every few weeks (as there are some instances of the queue hitting max and throwing errors due to a large quantity of triggers), and so far I haven't seen it rise much above what's in this image. But even with AI Tool idle, I never see it drop below this without closing and reopening it manually. Once I do, it's at ~200 MB and slowly rises as it processes images, but it never drops.
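On the queue-hitting-max errors: one common way to make a trigger flood degrade gracefully is a bounded queue that silently drops the oldest pending image instead of erroring when full. This is only an illustrative sketch of that pattern, not how AI Tool's queue actually behaves:

```python
from collections import deque

class TriggerQueue:
    """Bounded trigger queue that evicts the oldest image when full,
    so a trigger storm sheds load instead of throwing errors.
    Illustrative only -- not AI Tool's real queue implementation.
    """
    def __init__(self, maxsize):
        self._q = deque(maxlen=maxsize)  # deque(maxlen=...) evicts oldest on append
        self.dropped = 0

    def put(self, item):
        if len(self._q) == self._q.maxlen:
            self.dropped += 1  # oldest item is about to be evicted
        self._q.append(item)

    def get(self):
        return self._q.popleft()

q = TriggerQueue(maxsize=3)
for img in ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]:
    q.put(img)
print(q.dropped, q.get())  # prints: 1 b.jpg  (a.jpg was evicted)
```

The trade-off is losing the stalest frames during a storm, which for motion alerts is usually preferable to a hard error.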

I'm not saying there isn't still a memory leak somewhere, but I did hunt for it for quite a while.

I had it going 4 days; just looked and it was at 1 GB. As soon as I opened the UI and clicked around, it dropped to 300 MB.

Task Manager > Details tab > 'Peak working set' column is 1.3 GB for me.

Interestingly my BI is 5GB while yours is <3GB. Maybe because I do full time recording, not just on movement.

I have only 5 cameras, 2 of them 4K, and my _Settings\AITOOL.SETTINGS.JSON file is 3 MB, which can be a factor since it's kept in memory also. I also have 3 CodeProject_AI servers (NOT DeepStack) and never see my queue higher than 3.

A few thoughts:

  • You are making everything slower and artificially limiting the amount of RAM it can use by running in a VM. I just run BI and Aitool together on a regular machine.
  • Reduce how often images are created in BI. Make sure all camera masking in BI is set up appropriately so you are not getting movement from bushes or things you don't care about sent to Aitool. Make sure none of your cameras are set to create new jpegs for more than 3-4 seconds (if that) when movement happens. Perhaps reduce movement sensitivity.
  • Set up more AI servers. VMs, Windows Subsystem for Linux (WSL), Raspberry Pis, old hardware, Intel NUC mini PCs, or any other computers in the house, even if they are not on all the time (check Aitools > Server > "Ignore if offline" for machines that sleep or are only on part of the day). That way the queue will be processed faster. Order the servers according to how fast the hardware is.
  • Reset all Relevant objects in AITOOL - They are a huge part of what is stored in the settings file and you may have bloat from many older releases or a lot of cameras....
  1. In Relevant Objects manager, pick DEFAULT\DEFAULT in the drop down
  2. Reset button
  3. Adjust it how you want your default objects to be for all other cameras
  4. Select ALL other cameras from the drop-down, and reset each so that it imports your 'Default' settings. Make any additional tweaks per camera you need, but hopefully the default settings you configured will handle most cases.
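The "order servers by speed, skip offline ones" logic from the server suggestion above can be sketched roughly like this. The dict fields (`speed_rank`, `online`, `ignore_if_offline`) are hypothetical names for illustration, not AI Tool's actual settings schema:

```python
def pick_server(servers):
    """Pick the first online server in speed order.

    Servers flagged ignore_if_offline are skipped while offline
    (machines that sleep or are only on part of the day); an offline
    server without that flag is treated as an error. Illustrative only.
    """
    for s in sorted(servers, key=lambda s: s["speed_rank"]):
        if s["online"]:
            return s
        if not s["ignore_if_offline"]:
            raise RuntimeError(f"{s['name']} is offline")
    return None  # nothing usable right now

servers = [
    {"name": "gpu-box", "speed_rank": 1, "online": False, "ignore_if_offline": True},
    {"name": "wsl", "speed_rank": 2, "online": True, "ignore_if_offline": False},
]
print(pick_server(servers)["name"])  # prints: wsl
```

The fastest machine is tried first; a sleeping machine with "Ignore if offline" checked just falls through to the next one.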

I really appreciate your time and getting back with these details.

I'm not saying there isn't still a memory leak somewhere, but I did hunt for it for quite a while.

I understand, though it seems to be way better than before. For now I rarely see it using more than 3 or 4 GB. I will see if any of the other advice can help lower this.

I had it going 4 days; just looked and it was at 1 GB. As soon as I opened the UI and clicked around, it dropped to 300 MB.

This seems to be about the same for me. It takes a few weeks to get as high as you see in the previous screenshot.

Task Manager > Details tab > 'Peak working set' column is 1.3 GB for me.

I did not know of this option, thank you! I enabled it and will keep an eye out.

Interestingly my BI is 5GB while yours is <3GB. Maybe because I do full time recording, not just on movement.

I have only 5 cameras, 2 of them 4K, and my _Settings\AITOOL.SETTINGS.JSON file is 3 MB, which can be a factor since it's kept in memory also. I also have 3 CodeProject_AI servers (NOT DeepStack) and never see my queue higher than 3.

I only have 8 "cameras" active: 4 are SD feeds that are recorded at all times, and the other 4 are the HD streams of those same cameras, recorded only when AI Tool triggers. And 2 of those only actively send images to AI Tool when we are away from home. BI really only needs RAM for the pre-trigger video buffer.
My settings file is only 1.37 MB.

A few thoughts:

* You are making everything slower and artificially limiting the amount of RAM it can use by running in a VM. I just run BI and Aitool together on a regular machine.

Marginally slower, and isolating resources like RAM ensures it doesn't run out of control and take down my entire server. In this case a VM is the better choice, IMO.

* Reduce how often images are created in BI. Make sure all camera masking in BI is set up appropriately so you are not getting movement from bushes or things you don't care about sent to Aitool. Make sure none of your cameras are set to create new jpegs for more than 3-4 seconds (if that) when movement happens. Perhaps reduce movement sensitivity.

I do have my outside cameras masked off within BI so they don't trigger unless motion is in specific areas of interest.
I currently have BI create an image every 1 second when the camera detects motion, as I've found that a longer interval greatly increases the chance that an alert for a person walking past comes far too late, or is missed entirely, since the first trigger image doesn't always detect a person.
In almost all instances this is perfectly fine: my normal queue time is under 50 ms with 2 outside cameras (the ones constantly monitoring) and 2 inside (only active when away, and those rarely see motion anyway). The exception is the rare condition that causes a non-stop trigger state, such as rain. I would like to tune those rare cases, but even so, the system should have a way to recover from such situations on its own.
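One way a system could recover from a rain-induced non-stop trigger state is a per-camera cooldown: once a camera has sent a burst of images within a time window, further images are suppressed until the window expires. A minimal sketch of that idea (this is not an AI Tool or BI feature, just an illustration of the throttling pattern):

```python
class TriggerCooldown:
    """Per-camera cooldown: after `burst` sends within `window` seconds,
    suppress further sends until old ones age out of the window.
    Illustrative sketch of throttling a trigger storm, not a real feature.
    """
    def __init__(self, burst=10, window=60.0):
        self.burst = burst
        self.window = window
        self._history = {}  # camera name -> timestamps of recent sends

    def allow(self, camera, now):
        times = self._history.setdefault(camera, [])
        # Forget sends older than the window.
        times[:] = [t for t in times if now - t < self.window]
        if len(times) >= self.burst:
            return False  # storm in progress: suppress this image
        times.append(now)
        return True

cd = TriggerCooldown(burst=3, window=60.0)
results = [cd.allow("front", t) for t in (0, 1, 2, 3, 70)]
print(results)  # prints: [True, True, True, False, True]
```

The fourth image (at t=3) is suppressed because three were already sent within the last 60 seconds; by t=70 the window has cleared and sending resumes.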

* Set up more AI servers. VMs, Windows Subsystem for Linux (WSL), Raspberry Pis, old hardware, Intel NUC mini PCs, or any other computers in the house, even if they are not on all the time (check Aitools > Server > "Ignore if offline" for machines that sleep or are only on part of the day). That way the queue will be processed faster. Order the servers according to how fast the hardware is.

Unfortunately I won't have many options for creating more servers. Originally, when I used DeepStack, I would run 3 instances, but even 2 instances start to show diminishing returns on a single GPU. When I switched to CodeProject.AI I never bothered trying multiple instances, as I don't think it would help any.

* Reset all Relevant objects in AITOOL - They are a huge part of what is stored in the settings file and you may have bloat from many older releases or a lot of cameras....


1. In Relevant Objects manager, pick `DEFAULT\DEFAULT` in the drop down

2. Reset button

3. Adjust it how you want your default objects to be for all other cameras

4. Select ALL other cameras from the drop-down, and reset each so that it imports your 'Default' settings.   Make any additional tweaks per camera you need, but hopefully the default settings you configured will handle most cases.

I only have Person under each camera's Default option, but I went ahead and followed these steps: I set only Person for DEFAULT\DEFAULT, then went through all the other cameras to ensure they matched.

Originally when I used DeepStack I would run 3 instances but even 2 instances starts to show diminishing returns when using a single GPU. When I switched to CodeProject.AI I never bothered to attempt to run multiple instances as I don't think it'll help any.

I'm working on a version that allows the built-in queuing feature of CPAI to work correctly. This will be better than running multiple instances of DeepStack. I only just found out about it a few weeks ago. It's kind of like running multiple copies of DeepStack, but CPAI spreads the work out between the threads, GPUs, and MODULES you have installed. So in my case, having 3 modules installed (yolo.net, yolov5, and yolov8, for example) vs 1 makes the built-in benchmark much faster (CPAI web interface > CodeProject AI Explorer > Benchmark tab).
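The "spread work between modules" idea boils down to several workers pulling from one shared queue: whichever module finishes first grabs the next image, so faster modules naturally process more. A toy sketch of that dispatch pattern, with threads standing in for CPAI modules (module names here are just labels, not CPAI's API):

```python
import queue
import threading

def run_pool(images, workers):
    """Spread images across workers via one shared queue.

    Each worker pulls the next image as soon as it is free, so a fast
    worker ends up handling more images than a slow one. Illustrative
    of the queuing idea only, not CPAI's actual internals.
    """
    q = queue.Queue()
    for img in images:
        q.put(img)

    results = []
    lock = threading.Lock()

    def worker(name):
        while True:
            try:
                img = q.get_nowait()
            except queue.Empty:
                return  # queue drained, this worker is done
            with lock:
                results.append((name, img))

    threads = [threading.Thread(target=worker, args=(w,)) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_pool([f"img{i}.jpg" for i in range(6)], ["yolov5", "yolov8"])
print(len(out))  # prints: 6 -- every image processed exactly once
```

The same shape scales from threads on one box to modules across machines, which is presumably what the MESH feature builds on.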

When queuing is working correctly, the CPAI "MESH" feature should also work correctly (instances on multiple machines can talk to each other if you enable mesh).