For working with GPU
grdsvt opened this issue · 8 comments
Hi, I would like to run this on a GPU, but I am having problems on a Jetson Nano.
Is it necessary to set anything up to make it work with the GPU and TensorFlow?
I am using the trained model.
Thanks
Regards
I am not familiar with the Jetson Nano and have only used the Raspberry Pi. You may need to verify your GPU environment on the Jetson Nano board first and install tensorflow-gpu... If you can run TensorFlow on the Jetson Nano without problems, I think it should work there.
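Before debugging the model itself, a quick sanity check like the one below can confirm that TensorFlow actually sees the GPU. This is just a minimal sketch assuming a TF 2.x install; on Jetson the GPU-enabled build usually comes from NVIDIA's wheel rather than the `tensorflow-gpu` package on PyPI.

```python
# Sanity check: does TensorFlow see the Jetson Nano's GPU?
# Assumes TensorFlow 2.x is installed (on Jetson, typically NVIDIA's wheel).
import tensorflow as tf

print("TF version:", tf.__version__)
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)

if gpus:
    # Run a trivial op pinned to the GPU to confirm it is actually usable.
    with tf.device("/GPU:0"):
        x = tf.random.uniform((1000, 1000))
        y = tf.matmul(x, x)
    print("Matmul on GPU OK, shape:", y.shape)
```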
So you used a Raspberry Pi? I suppose a Raspberry Pi 4.
I am using Python 3.6.9 with TensorFlow 2.1.0. Is that correct?
Yes. Raspberry Pi 4 (CPU) with the TensorFlow Object Detection API.
Hi, I am testing your code on a Jetson Nano but I get very low FPS... about 0.5 FPS.
What is your average FPS on the Raspberry Pi 4?
Thanks.
Hi,
I am not getting good results on the Jetson Nano, so I would like to try a Raspberry Pi.
Please, can you help me?
What FPS do you get on the Raspberry Pi 4?
Did you change anything?
Thanks
You should optimize the model with TensorRT and run inference on the Jetson Nano DevKit. I did not run this repo's model on the Raspberry Pi 4, but I tested mobilenetv2-ssdlite, which can reach 3 FPS at about 50% CPU usage. This repo's model is smaller than that, so I would guess the frame rate is better. Still, I think the Jetson Nano with TensorRT is the better option. If you want to improve the frame rate further, consider model quantization.
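In case it helps, here is a rough TF-TRT conversion sketch. It is only an outline, assuming the trained model has been exported as a TF 2.x SavedModel; the directory names are placeholders, and the exact conversion API differs slightly between TensorFlow versions.

```python
# Rough TF-TRT optimization sketch for a TF 2.x SavedModel on the Jetson Nano.
# "saved_model" and "saved_model_trt" are placeholder paths.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# FP16 is usually a good fit for the Nano's GPU.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model",
    conversion_params=params)
converter.convert()
converter.save("saved_model_trt")
```

Loading the saved output with `tf.saved_model.load("saved_model_trt")` and running inference on the Nano should then use the TensorRT-optimized graph.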
Hi,
can we work on this together?
I am looking for a partner to optimize the code; can you help me?
Can you send me your email?
Thanks
Sorry, I don't have much spare time because my job keeps me busy, but I will try my best.
My email: xautzxc@gmail.com
Since you already have a Jetson Nano, there is no need to use a Raspberry Pi in my opinion; that is why I said "I think the Jetson Nano with TensorRT is better."
To be honest, I would prefer to quantize the model and then choose a Cortex-A series or RISC-V chip to implement my application.
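For the quantization route, post-training quantization with TFLite is the simplest starting point. This is a minimal sketch assuming the model is available as a TF 2.x SavedModel; the paths are placeholders.

```python
# Minimal post-training (weight) quantization sketch with TFLite.
# "saved_model" and "model_quant.tflite" are placeholder paths.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default weight quantization
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can then be run with the TFLite interpreter on a Cortex-A class board.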