Extended support for Raspberry Pi 3
baqwas opened this issue · 4 comments
Hello!
My Raspberry Pi has the following characteristics:
processor : 0
processor : 1
processor : 2
processor : 3
model name : **ARMv7** Processor rev 4 (v7l)
BogoMIPS : 38.40
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
Hardware : BCM2835
Revision : a020d3
Serial : 0000000000000000
Model : Raspberry Pi 3 Model B Plus Rev 1.3
$ uname -a
Linux raspbari14 4.19.97-v7+ #1294 SMP Thu Jan 30 13:15:58 GMT 2020 **armv7l** GNU/Linux
I'm not sure whether your image is intended to support this older model. I do have TensorFlow installed on this computer. Could I run some scripts to build the image myself? :)
Also, I noticed that the version of Python being used is supposedly v2. Is there any support for v3?
These are not issues per se, but they are clearly roadblocks to practicing your tutorials. Any guidance (other than purchasing an RPi4) would be sincerely appreciated.
Kind regards.
Hi @baqwas, thanks for trying it out. At this moment we have not tested on Raspberry Pi 3, but in theory the current image should also work there. Please let us know whether it works :)
Also, we do use Python 3, not Python 2. Where did you get the hint that we use Python 2? Just to confirm that we are not misusing the Python version somewhere.
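If it helps with checking, here is a minimal sketch (my own suggestion, not an official test script) that sends an image to the running container from the Pi itself and prints the JSON response. It assumes the container is published on port 5000, and the image path is only a placeholder:

```python
# Minimal sketch: send a sample image to the prediction endpoint and print
# the JSON response. Assumes the container is running locally and published
# on port 5000; the image path is only a placeholder.
import requests

URL = "http://127.0.0.1:5000/model/predict"

with open("sample.jpg", "rb") as f:
    response = requests.post(URL, files={"image": f})

response.raise_for_status()
print(response.json())
```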
Hello @xuhdev, please attribute my comments to operator inexperience. All the steps leading to deploying the Docker image (on the RPi3) worked. There was a minor note about a deprecated NumPy function, but it was not a roadblock. Also, I read too much into the Python version; please disregard that. I use only Python 3, and your solution worked flawlessly.
The Swagger page was displayed in due course. Of course, I'm invoking the headless RPi3 (for tutorial purposes for now) from an Ubuntu desktop with Firefox. I believe it would be better for me to work locally on the RPi3 to view the annotated output images (unless you have an alternate suggestion); in other words, more self-study is needed on my part. Your article was easy to follow and helped me (in parallel) to deploy Docker too, something I have resisted for far too long.
Xie xie (thank you),
Kind regards.
Confirming that the solution worked on Raspberry Pi 3B+ using the test case provided in the article by @xuhdev:
pi@raspbari14:~/projects/MAX/MAX-Object-Detector $ curl -F "image=@samples/dog-human.jpg" -XPOST http://127.0.0.1:5000/model/predict
{"status": "ok", "predictions": [{"label_id": "1", "label": "person", "probability": 0.9466203451156616, "detection_box": [0.12352362275123596, 0.12477394938468933, 0.8426759243011475, 0.5944521427154541]}, {"label_id": "18", "label": "dog", "probability": 0.8503445386886597, "detection_box": [0.1041632890701294, 0.18178102374076843, 0.843441903591156, 0.7260116338729858]}]}
pi@raspbari14:~/projects/MAX/MAX-Object-Detector $
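To view the annotated output locally, I also put together a rough sketch that reuses this endpoint and draws the returned detection boxes with Pillow. I'm assuming detection_box holds normalized [ymin, xmin, ymax, xmax] coordinates, so please correct me if the order differs:

```python
# Rough sketch (my own, not from the article): request predictions and draw
# the detection boxes onto the original image with Pillow.
# Assumption: detection_box is normalized [ymin, xmin, ymax, xmax].
import requests
from PIL import Image, ImageDraw

URL = "http://127.0.0.1:5000/model/predict"
image_path = "samples/dog-human.jpg"

with open(image_path, "rb") as f:
    predictions = requests.post(URL, files={"image": f}).json()["predictions"]

image = Image.open(image_path).convert("RGB")
draw = ImageDraw.Draw(image)
width, height = image.size

for pred in predictions:
    ymin, xmin, ymax, xmax = pred["detection_box"]
    box = (xmin * width, ymin * height, xmax * width, ymax * height)
    draw.rectangle(box, outline="red", width=3)
    label = "{} {:.2f}".format(pred["label"], pred["probability"])
    draw.text((box[0], box[1]), label, fill="red")

image.save("annotated.jpg")
```

The result is saved as annotated.jpg, which I can then copy off the headless RPi3 for viewing.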
My next foray will be to use the solution with a graphical user interface. Once again, many, many thanks for providing a solution that is truly portable. It seems simple, but in reality it requires a lot of subtle design and development decisions, made explicitly as well as implicitly through experience.
Kind regards.
Glad it worked -- my pleasure!