Jetson Nano 2GB - DeepStack Face not responding after a few API requests
Opened this issue · 16 comments
Dear DeepStack Team,
Thanks for the great software. I have bought an Nvidia Jetson Nano 2GB to test it, but I have some problems with Face Detection.
At the moment I am using the following Docker image and environment:
docker run -d --runtime nvidia --gpus all -e VISION-FACE=True -e MODE=High -p 80:5000 deepquestai/deepstack:jetpack-2020.12
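For reference, the "face detection" calls I make are plain HTTP POSTs to the face endpoint, roughly like this (the image file name is just a placeholder from my test script; the port follows the -p 80:5000 mapping above):
# send a test image to the DeepStack face detection endpoint
curl -X POST -F image=@test-face.jpg http://localhost:80/v1/vision/face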
The problem comes after two face detection runs: the Jetson seems to go into swap and the system becomes completely unresponsive. I waited 30 minutes to see if anything was processed, but nothing.
As you suggest in most of the threads, I have looked at the log, but it is blank, no errors.
I have tested the latest Docker images (jetpack-2020.12, jetpack, jetpack-x3-beta) but without much success.
Do you have any idea? Is 2GB of RAM not enough? And what does "MODE=Low/Medium/High" mean? From the docs it is not very clear :slight_smile: Does "High" mean higher CPU/RAM usage (better inference)?
I have made some more tests using a USB3 SSD drive and removing the ZSWAP. The system behaves better and the DeepStack Docker container works much better, but after a few more API requests the system stalls with kswapd near 100% CPU. I think the problem is related to the 2GB of RAM.
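For what it's worth, I was watching the memory pressure with standard tools while sending the requests, roughly like this (nothing DeepStack-specific):
# overall RAM/swap usage, refreshed every 2 seconds
watch -n 2 free -h
# Jetson-specific RAM/swap/CPU/GPU stats, in a second terminal
sudo tegrastats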
Thanks so much for the answers.
I caught the error log before the crash of my Jetson Nano:
kernel_panic_jetston_nano_2gb.log
I also found this on the Nvidia forum; maybe the problem is related to the RAM used by the GPU...
https://forums.developer.nvidia.com/t/general-question-about-jetsons-gpu-cpu-shared-memory-usage/61091/6
Another good link: https://forums.developer.nvidia.com/t/script-killed/76716/6
I believe MODE=High is not supported on Jetson; remove this.
@robmarkcole the option is working, because with Low I get worse detection confidence compared to MODE=High.
I also have problems with the Jetson 2GB for face detection. When running the Docker container with the environment variable VISION-FACE=True, the service starts, but the device becomes unresponsive. Object detection doesn't have any problems.
Hello @dimitribellini @jodur, as you pointed out, this is a limitation of the memory. Generally, 4GB of RAM is needed at minimum to run DeepStack. For the Jetson Nano, object detection and scene recognition should work fine in Low mode. The FACE endpoint uses more memory.
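For example, on the 2GB Nano a container limited to object detection in Low mode would be started roughly like this (same Jetson image tag as in your command; the port mapping is just an example):
# object detection only, Low mode, on the Jetson image
docker run -d --runtime nvidia -e VISION-DETECTION=True -e MODE=Low -p 80:5000 deepquestai/deepstack:jetpack-2020.12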
Hi @johnolafenwa, thanks so much for your reply.
So the Nvidia Jetson Nano 2GB is not supported for face recognition? Is there no way to fix it (e.g. lower accuracy)?
Thanks
I just stumbled across this - perhaps it should be mentioned in the Jetson Nano documentation....
Object recognition works fine, but enabling face as well runs out of memory... This is even on the 4GB Nano...
I have a 4GB Nano and it had been working great with Double Take, Frigate, and Home Assistant, all on separate boxes. But lately I cannot reach the Jetson by IP address at all. If I try to connect by its NetBIOS name, it takes a minute or two but finally connects. It looks like the container is running and everything looks fine, but I cannot send any data to the API endpoint or do anything with it. That's even if I create a brand new container and run it with default settings.
Any update on this? My Jetson Nano is just sitting there for now. I went ahead and just installed the Home Assistant DeepStack add-on, but it can't make use of any of the decoding abilities. It seems like it still processes requests, but it comes up with a connection error in Double Take now, when the day before it didn't and nothing that I know of had changed. This is more than likely an issue with Double Take, but the combo of DeepStack, Frigate, and Double Take all integrated into Home Assistant is so very badass!
Might I mention that Jetpack on Jetson Nano is a dead end: https://developer.nvidia.com/embedded/develop/roadmap and https://forums.developer.nvidia.com/t/jetson-software-roadmap-for-2h-2021-and-2022/177724
And be it deepstack face detection or deepstack with custom models like the combined model from MikeLud/DeepStack-Security-Camera-Models, even the 4GB version quickly runs out of memory and becomes unresponsive.
Time for another HW accelerator. Coral would be ideal (just hoping...)
Sorry, I don't run Frigate: am using BlueIris for now, so cannot comment on that.
About the RAM disk: a RAM disk is only possible if you have extra RAM, which you don't have on the Nano.
If you want to make more memory available to your system, you will need a fast swap partition or swap file on a flash drive. Adding RAM is not possible.
For swap, you have 3 options:
- the default one: the SD card. Please don't do that, as that is a recipe for destroying your SD card in no time and it will be unusably slow.
- The M.2 Key E connector. It has 1 PCIe lane (plus slower stuff like USB 2.0, UART, I2S, and I2C). You might try it, but finding compatible storage hardware will be very hard. It is mostly for WiFi or BT cards. SATA controllers via E key are rare but possible, but I doubt they would fit in the available space, you would probably have a driver issue, and it would likely be slower than USB 3.0. So:
- a USB 3.0 SSD. You will need one with a good TBW rating. It is doable, but it will still be slow (a rough setup sketch follows below).
In essence: probably a waste of time.
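If you do go the USB 3.0 SSD route anyway, the setup is roughly the usual swap-file procedure plus disabling the zram swap that JetPack sets up (the service name below is what stock JetPack uses, as far as I recall; paths and sizes are just examples):
# disable the default zram swap configured by JetPack
sudo systemctl disable nvzramconfig
# create and enable a 4GB swap file on the SSD mounted at /mnt/ssd
sudo fallocate -l 4G /mnt/ssd/swapfile
sudo chmod 600 /mnt/ssd/swapfile
sudo mkswap /mnt/ssd/swapfile
sudo swapon /mnt/ssd/swapfile
Add a matching line to /etc/fstab if you want the swap file to survive reboots, and reboot for the zram change to take effect.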
Suggestions... Well, depends.
Can you add a GPU to your ESXi? With PCI Passthrough, you could make deepstack use it. Uses a lot more power though.
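With a GPU passed through to a Linux VM, you would then run the regular (non-Jetson) GPU image, roughly like this (tag and env var as in the standard DeepStack GPU instructions; the port mapping is just an example, and the host needs the NVIDIA Container Toolkit installed):
# DeepStack GPU image with the face endpoint enabled
docker run -d --gpus all -e VISION-FACE=True -p 80:5000 deepquestai/deepstack:gpu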
Is your internet link reliable enough to make your face detection depend on its availability? Then run your models in the cloud. Many options there, some for a low fee. Don't know about AWS, as that is strictly out of bounds in the field I work in, but GCP has some nice options.
About Windows: same considerations as you: I prefer Linux. The only Win machine at home is for BI, but I managed to find an old Win7 license that I upgraded to Win10, so I'm fine. You might also be able to find one.
Although adding RAM to the Jetson Nano module is not possible (it is on the module, not on the carrier board), you do have the option of choosing a better carrier board, like the Auvidia JN-30PD or JN30-PSE, which have full NVMe support and hence allow faster flash storage. But I cannot comment more on their capabilities, as with Jetson Nano software support nearing EOL, I have abandoned all research related to that device.
A warning about Coral: the most flexible is indeed still the USB version, which unfortunately is hard to get working under VMware (be it ESXi or Fusion), so it needs either Proxmox, a dedicated host (e.g. a Raspberry Pi with a >3A PSU), or a dedicated USB controller passed through via PCI on ESXi.
Deepstack + Coral is by far my preferred solution, but not possible (yet? ever?).