Multi-Camera Live Object Detection with Tensorflow Serving

How to fetch IP camera video streams with asynchronous processes

ImageZMQ lets multiple senders (Raspberry Pis) stream video to a single receiver (PC).

For example, run the following code on each Raspberry Pi to send its video stream:

from imutils.video import VideoStream
import imagezmq
import cv2
import time

cap = VideoStream(0).start()

sender = imagezmq.ImageSender(connect_to='tcp://192.168.11.216:5555')  # change to IP address and port of server thread
cam_id = 'Camera 1'  # this name will be displayed on the corresponding camera stream
time.sleep(2)

frame_id = 0  # avoid shadowing the built-in id()
while True:
    frame = cap.read()
    sender.send_image(cam_id, frame)  # blocks until the hub replies

    frame_id += 1
    print("frame id: %d" % frame_id)

Run the following code on the PC to receive the streams:

import cv2
import imagezmq

image_hub = imagezmq.ImageHub()

while True:  # show streamed images until Ctrl-C
    rpi_name, image = image_hub.recv_image()
    cv2.imshow(rpi_name, image)  # 1 window for each RPi

    cv2.waitKey(1)
    image_hub.send_reply(b'OK')

Code highlights

  • Reads the TensorFlow Serving URL configuration from a config.ini file
  • The Flask video streaming follows flask-video-streaming, whose code is an excellent example of Python static methods and class methods
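Reading the serving URL from config.ini can be done with the stdlib configparser; a minimal sketch (the section and key names here are assumptions, not necessarily those used in the project):

```python
import configparser
from pathlib import Path

# Hypothetical config.ini layout; section/key names are illustrative only.
Path("config.ini").write_text(
    "[tfserving]\n"
    "url = http://localhost:8501/v1/models/detector:predict\n"
)

config = configparser.ConfigParser()
config.read("config.ini")
tfserving_url = config["tfserving"]["url"]
print(tfserving_url)
```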

Notes from walking through TensorFlow Serving:

  • Tensorflow Serving shows you how to use TensorFlow Serving with Docker (CPU/GPU).
  • serving_basic shows you how to use TensorFlow Serving components to export a trained TensorFlow model (in SavedModel format) and use a Docker serving image to easily load and serve it.
  • convert_model_to_TFserving

Deploy a TensorFlow model with TF Serving

Deploy a PyTorch model with TF Serving

Two ways to use the TensorFlow Serving image

  • Use the tensorflow/serving base image and mount the model files into the corresponding directory
  • Build a custom image on top of tensorflow/serving with the model files baked into the image
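The two options above might look like this (model name and paths are placeholders, not the project's actual layout):

```shell
# Option 1: mount a locally saved SavedModel into the stock image
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/saved_model_dir,target=/models/detector \
  -e MODEL_NAME=detector \
  tensorflow/serving

# Option 2: bake the model into a custom image
# Dockerfile:
#   FROM tensorflow/serving
#   COPY saved_model_dir /models/detector
#   ENV MODEL_NAME=detector
docker build -t my-detector-serving .
docker run -p 8501:8501 my-detector-serving
```

Option 1 keeps the image generic and swaps models via the mount; option 2 yields a self-contained image that is easier to ship but must be rebuilt per model.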

Reference

GitHub projects:

Client

Tips

TODO

Read/write the COCO labels from a pkl file and save them
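A sketch of that TODO using the stdlib pickle module (the label entries below are illustrative, not the full 80-class COCO map):

```python
import pickle

# Illustrative subset of the COCO label map (class id -> name).
coco_labels = {1: "person", 2: "bicycle", 3: "car"}

# Save the label map to a pkl file.
with open("coco_labels.pkl", "wb") as f:
    pickle.dump(coco_labels, f)

# Load it back for use at prediction time.
with open("coco_labels.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded[1])  # person
```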

The client-side code logic is well designed:

  • A daemon thread extracts video frames
  • A daemon thread saves the detected frames
  • The main program runs the prediction
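That three-part structure can be sketched with stdlib threads and queues; the strings below are stand-ins for real frames, and the ":detected" suffix stands in for the actual prediction step:

```python
import queue
import threading

frame_q = queue.Queue(maxsize=8)  # frames waiting for prediction
save_q = queue.Queue(maxsize=8)   # predicted frames waiting to be saved
saved = []

def grab_frames(n):
    # Daemon worker: stand-in for frame extraction from the camera.
    for i in range(n):
        frame_q.put(f"frame-{i}")
    frame_q.put(None)  # sentinel: no more frames

def save_frames():
    # Daemon worker: stand-in for writing detected frames to disk.
    while True:
        item = save_q.get()
        if item is None:
            break
        saved.append(item)

threading.Thread(target=grab_frames, args=(3,), daemon=True).start()
saver = threading.Thread(target=save_frames, daemon=True)
saver.start()

# Main loop: stand-in for running prediction on each frame.
while True:
    frame = frame_q.get()
    if frame is None:
        save_q.put(None)  # forward the sentinel to the saver
        break
    save_q.put(frame + ":detected")

saver.join()
print(saved)  # ['frame-0:detected', 'frame-1:detected', 'frame-2:detected']
```

The queues decouple the three stages, so slow disk writes or camera stalls do not block prediction until a queue fills.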

Optimizations to explore:

  • Does TF Serving need a warm-up?
  • Should we prefer mounting the locally saved model?

Takeaways:

  • feed.py demonstrates the use of locks and threads
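A minimal sketch of the lock-plus-threads pattern (an assumed illustration, not the actual feed.py): a lock protects a shared latest-frame slot between a writer thread and readers.

```python
import threading

class FrameBuffer:
    # A lock guards the shared slot so reads never see a half-written frame.
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def write(self, frame):
        with self._lock:
            self._frame = frame

    def read(self):
        with self._lock:
            return self._frame

buf = FrameBuffer()
writer = threading.Thread(target=buf.write, args=("frame-0",))
writer.start()
writer.join()
print(buf.read())  # frame-0
```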