Working on implementing Vehicle Detection
pageauc opened this issue · 30 comments
OK
Speed Camera calculates object speed by tracking the largest moving object whose contour area exceeds the minimum pixel area set in config.py. The tracking logic tries to filter the data to verify a good track.
A lot of users use speed camera for vehicle tracking and want better, more reliable speed accuracy. Accuracy can be affected by contours shifting relative to the largest moving object as it is being tracked. On vehicles, the tracking contour may be only part of the vehicle, e.g. a wheel well, fender, window, etc., depending on lighting and other factors. This problem is more pronounced when the moving object fills more of the camera view, e.g. the camera is too close to the roadway or the object is a large truck, bus, etc. The problem is less severe when the moving object takes up a smaller area of the camera image. If the object is a vehicle, object detection can be used to get a better fix on the object contour, e.g. its x position.
To implement vehicle-detection speed correction, all that is needed is to save grayscale images of the object tracking start and end positions. If a vehicle is detected in both grayscales, get the contour of each detection and calculate the pixel distance moved, abs(x_start - x_end). This can be done after a successful object track is completed, to verify or correct a tracked vehicle's speed. It won't affect the real-time object tracking loop logic, since it runs after the object tracking logic is complete but before results are saved.
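To make the idea concrete, here is a minimal sketch of that post-track correction step (this is not code from the repo; detect_vehicle_x() is a stand-in for whichever vehicle detector is eventually chosen, and the variable names are illustrative):

# Hedged sketch of the proposed post-track speed correction.
# start_gray and end_gray are the grayscale frames assumed to have been
# saved at the start and end of a successful object track.
def corrected_px_travel(start_gray, end_gray, detect_vehicle_x):
    """Return the corrected pixel distance moved, or None if a vehicle
    was not detected in both saved frames."""
    x_start = detect_vehicle_x(start_gray)   # x centre of detection, or None
    x_end = detect_vehicle_x(end_gray)
    if x_start is None or x_end is None:
        return None   # keep the original tracked speed and log that no correction was made
    return abs(x_start - x_end)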
Some problems to resolve.
What about parked vehicles? They won't move but will trigger vehicle detection and may cause errors when calculating speed. A parked vehicle in the foreground might be a problem. One in the background behind a moving vehicle is less of a problem but could still affect the contour position relative to the moving vehicle. To resolve this issue I plan to use the object tracking loop start and end positions and match them to the vehicle detection start and end positions. The respective object track contour should be within the bounds of the detected vehicle contour. The positions of each contour pair would have to be within bounds, or close enough; otherwise the corrected speed calculation would be aborted with relevant logging. Otherwise the original object speed would be updated based on the vehicle detection, with appropriate logging.
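To illustrate the containment check described above, a rough sketch (variable and function names are mine, not from the project) could be:

def track_matches_detection(track_rect, detect_rect, tolerance=10):
    """Return True if the tracked contour rectangle lies within, or close
    enough to, the detected vehicle rectangle.  Rectangles are (x, y, w, h)
    tuples and tolerance is in pixels; both are illustrative assumptions."""
    tx, ty, tw, th = track_rect
    dx, dy, dw, dh = detect_rect
    return (tx >= dx - tolerance and
            ty >= dy - tolerance and
            tx + tw <= dx + dw + tolerance and
            ty + th <= dy + dh + tolerance)

# Only if the start and end track rectangles both fall inside their matching
# vehicle detections would the speed be updated; otherwise the correction
# would be aborted with relevant logging.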
At night or in low light, vehicle detection would most likely fail, but object tracking from vehicle lights would still occur.
Note:
I have avoided the issue of multiple object tracking due to RPI processing power. There is some vehicle detection code that does that, but it is very slow for real-time tracking on an RPI. The code below also uses dlib, which can be a pain to install on older RPIs with less than 1 GB of RAM. I used an RPI4 with 4 GB of memory and the pip3 install still took a while.
https://github.com/noorkhokhar99/vehicle-speed-detection-using-opencv-python
Still in the early stages of implementing this.
Comments, suggestions, etc. are welcome
Claude ....
I am certainly interested in the vehicle detection side of this. Is this something you have a guide on implementing yet?
Tried tests using various vehicle Haar cascades. This was a total disaster: lots of false positives and not very accurate. I have not had time to investigate AI solutions that are suitable for lower-CPU-powered Raspberry Pis.
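For reference, the kind of Haar cascade test that was tried looks roughly like the snippet below (the cars.xml cascade file is an assumption, one of the community-trained vehicle cascades, and is exactly where the false positives came from):

import cv2

car_cascade = cv2.CascadeClassifier('cars.xml')      # assumed cascade file
frame = cv2.imread('frame.jpg')                       # a saved camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
boxes = car_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
for (x, y, w, h) in boxes:                            # draw boxes to inspect results
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('frame_detections.jpg', frame)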
Why do you want real-time processing? It is not required for this use case. I have just started with speed-cam and made a small adaptation to read from one file (I have no RTSP cam yet; hopefully I will get it today). I would like to run speed-cam on my NAS/Docker, where I also want to store the video files, do offline calculation of the speed with speed-cam, and then delete the files if no major speed violation was found. What are your thoughts? NAS boxes often have a lot of memory and processing power. Why do you want to stick to low-powered Pis? It seems like the wrong platform for this use case (and maybe for AI).
Dear Claude, thanks a lot for your feedback :) I love your project and the dedication and skill you put into it. I understand the history, but maybe with AI and multiple object tracking it is not possible anymore to support the RPI. I was reacting to your note:
"I have avoided the issue of multiple object tracking due to RPI processing power. There is some Vehicle detection code that does that, but is very slow for real time tracking on RPI."
I will keep you updated. Yesterday I started to look into the code and made the changes. I'm working on a Windows 10 computer. Some things I haven't figured out yet: the GUI doesn't start, and the web server has some code that is not compatible with Windows. You are much more experienced than me; I only posted my comment to get your thoughts on offline processing. In my opinion, it is also beneficial for testing. I was using a USB cam yesterday, but it always took a long time (30 s? I have to check it again) to connect. Maybe with Linux it is faster? Another option might be Google Colab. They offer a lot of processing power for free, the video files can be stored on Google Drive, and it seems possible to use the OpenCV GUI. So people would only have to buy a camera and transfer the video files to Google Drive. That might be the best solution for most people. Best regards from Germany, Jörg
Here are the few changes I made.
config.py: I wanted to follow your naming convention, so I introduced a new CAMERA value "filecam" and FILECAM_SRC = "video.mp4":
# Camera Settings
# ---------------
CAMERA = "filecam"        # valid values usbcam, rtspcam, pilibcam, pilegcam, filecam
CAM_LOCATION = "Front Window"
FILECAM_SRC = "video.mp4"
USBCAM_SRC = 0            # Device number of USB connection usually 0, 1, 2, etc.
I created a new file "strmfilecam.py" based on "strmusbcam.py", with just one change in "name":
class CamStream:
    def __init__(self,
                 src=0,
                 size=(320, 240),
                 name="FileVideoStream"):
That's it ... I think. For me it is also just a hobby; I haven't used Python for a while. I'm just annoyed by the traffic and reckless drivers and am looking for a solution to measure their speed.
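For anyone wanting to do the same, here is a sketch of what a minimal strmfilecam.py could look like, assuming the same threaded-read pattern the other strm*cam.py modules use (the method names mirror the USB version but are not copied from it):

import cv2
from threading import Thread

class CamStream:
    def __init__(self, src="video.mp4", size=(320, 240), name="FileVideoStream"):
        self.size = size
        self.name = name
        self.stream = cv2.VideoCapture(src)          # open the video file
        self.grabbed, self.frame = self.stream.read()
        self.stopped = False

    def start(self):
        Thread(target=self.update, name=self.name, daemon=True).start()
        return self

    def update(self):
        while not self.stopped:
            self.grabbed, self.frame = self.stream.read()
            if not self.grabbed:                     # end of file reached
                self.stopped = True

    def read(self):
        return cv2.resize(self.frame, self.size)

    def stop(self):
        self.stopped = True
        self.stream.release()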
I forgot the changes in strmcam.py
line 14:
CAMLIST = ('usbcam', 'rtspcam', 'pilibcam', 'pilegcam','filecam')
line 23:
try:
    from config import (PLUGIN_ENABLE_ON,
                        PLUGIN_NAME,
                        CAMERA,
                        IM_SIZE,
                        FILECAM_SRC,
                        RTSPCAM_SRC,
                        USBCAM_SRC,
                        IM_FRAMERATE,
                        IM_ROTATION,
                        IM_HFLIP,
                        IM_VFLIP
                        )
line 170:
elif cam_name == 'usbcam' or cam_name == 'rtspcam' or cam_name == 'filecam':
    if cam_name == 'rtspcam':
        cam_src = RTSPCAM_SRC
        cam_title = cam_name.upper() + ' src=' + cam_src
    elif cam_name == 'usbcam':
        cam_src = USBCAM_SRC
        cam_title = cam_name.upper() + ' src=' + str(cam_src)
    elif cam_name == 'filecam':
        cam_src = FILECAM_SRC
        cam_title = cam_name.upper() + ' src=' + str(cam_src)
UPDATE: I have installed the camera (Reolink E1 Zoom), and the motion detection and automatic FTP video file transfer to the NAS are working. Now I have a directory structure with a number of small video files for each day. The idea is that the Python script works on the video files from the day before. All files with no speed violation should be deleted.
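A rough sketch of that intended offline workflow (the paths, the speed limit and process_video() are placeholders for whatever the adapted speed-cam code ends up providing):

import os
from datetime import date, timedelta

UPLOAD_ROOT = "/volume1/ftp/reolink"     # assumed NAS upload directory
SPEED_LIMIT = 50                         # km/h, illustrative value

def process_video(path):
    """Placeholder: run the adapted speed-cam logic on one clip and
    return the maximum measured speed, or None if nothing was tracked."""
    raise NotImplementedError

yesterday = (date.today() - timedelta(days=1)).strftime("%Y-%m-%d")
day_dir = os.path.join(UPLOAD_ROOT, yesterday)
for fname in sorted(os.listdir(day_dir)):
    if not fname.lower().endswith(".mp4"):
        continue
    path = os.path.join(day_dir, fname)
    max_speed = process_video(path)
    if max_speed is None or max_speed <= SPEED_LIMIT:
        os.remove(path)                  # no violation found, free the space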
Would be interested in seeing your script. Is it on GitHub?
Is it doing vehicle detection?
So far, this is just what the camera offers (automatic file transfer to an FTP server on the NAS). I will continue to extend your script to do the rest. I guess next week I can upload something to GitHub.
Most surveillance cameras are capable of vehicle detection and FTP file transfer, I guess. You have to look at the supported protocols in the camera description.
I struggle to get the RTSP motion detection working for my situation: the camera is about 50 m away from the street, so only a small portion of the window is relevant for motion detection. To reduce the size of the stream/file, I have blacked out the rest of the window (see examples at the attached link). I have uploaded the config file and screenshots: https://github.com/jon-gith/test
Will the motion detection work under these circumstances? Thanks, Jörg
Thanks :) I didn't notice the "# Motion Tracking Window Crop Area Settings" section in the config file.
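As an illustration of that section, a crop for a narrow band of road might look something like this (the values are made up, and the exact variable names should be double-checked against your copy of config.py):

# Motion Tracking Window Crop Area Settings (illustrative values;
# confirm the exact variable names in your own config.py)
x_left = 100       # left edge of the crop rectangle in pixels
x_right = 540      # right edge of the crop rectangle
y_upper = 180      # top edge of the crop rectangle
y_lower = 300      # bottom edge of the crop rectangle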
I find it quite painful to understand and fine-tune the motion detection parameters, and it is still not working well enough. I will try the example you provided with your initial post:
https://github.com/noorkhokhar99/vehicle-speed-detection-using-opencv-python
How do you like the approach? Is it working well? Do you have a new version of speed-cam.py using this approach?
I tried another Haar cascade example a few days ago, and it worked OK but was slow (for faces). There is another example with cars that uses YOLO (it didn't work for me on Google Colab), but it seems to be fast and the best AI algorithm:
https://github.com/theAIGuysCode/colab-webcam
I was looking for "yolo pytorch vehicle detection" and found this example of a counting application (GUI included). I will try this one first:
https://github.com/wsh122333/Multi-type_vehicles_flow_statistics
I got the above program working; just a few things had to be changed. But it is not working well for my situation, because it tries to track all the parked cars in the background. Maybe you have an idea how to avoid that. The configuration/GUI/installation is good, but it takes a lot of time and disk space, which is not so good for Google Colab because the setup has to be repeated every time a new Colab session is started.
I will try this lightweight cascade example next:
https://github.com/ckyrkou/Car_Sideview_Detection/tree/master
I guess you do want to use an AI-based approach, like a Haar cascade? Did you ever use opencv_traincascade on your data?
Wow, you are doing advanced stuff. I don't have a PC with an NVIDIA GPU, and without one, more advanced AI, e.g. YOLO like in this example https://github.com/wsh122333/Multi-type_vehicles_flow_statistics, is down to 1 fps and not really usable.
I will continue with your solution. It seems best suited for normal PCs/NAS boxes.
I'm new to motion detection and object tracking. I hope you don't mind if I ask a few questions about the subject: it seems that you didn't use any of the OpenCV built-in object tracking algorithms (e.g. MOSSE). Why?
Another question: wouldn't it be better to restrict motion detection to two smaller areas, left and right? That might increase performance and avoid unnecessary tracking of vehicles leaving the parking area or coming from a side road.
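As a sketch of what that restriction could look like (the coordinates are made up and would need to match the actual camera view), each frame could be cropped to two zones before motion detection is run:

LEFT_ZONE = (slice(180, 300), slice(0, 160))       # (y range, x range) in pixels
RIGHT_ZONE = (slice(180, 300), slice(480, 640))

def detection_zones(frame):
    """Return the two sub-images (numpy slices of the frame) that motion
    detection should run on instead of the full image."""
    return frame[LEFT_ZONE], frame[RIGHT_ZONE]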
Noting a number of people who are attempting detection on the RPI and are having issues with performance. One solution is to use the Google Coral TPU accelerator; I've found it works well and speeds up detection. Also, a possible solution for inaccuracy is to implement and understand the use of "centroid" tracking. Adrian Rosebrock has a good example on the pyimagesearch website. His solution, though, uses the Intel NCS 2, which is an expensive option and isn't going to be supported for long. Converting Adrian's solution to use the Google Coral TPU accelerator isn't difficult and is a cheaper solution; besides, you don't have to install OpenVINO, which can be a pain.
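For anyone unfamiliar with the idea, centroid tracking just matches each new detection to the nearest previously seen object centre. A stripped-down sketch of the concept (a simplification, not Adrian's actual code) looks like:

import math

class CentroidTracker:
    def __init__(self, max_distance=50):
        self.next_id = 0
        self.objects = {}                 # object_id -> (x, y) centroid
        self.max_distance = max_distance  # max pixels a centroid may jump per frame

    def update(self, rects):
        """rects is a list of (x, y, w, h) detections for the current frame."""
        centroids = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in rects]
        updated = {}
        unmatched = dict(self.objects)    # previously tracked, not yet matched
        for cx, cy in centroids:
            best_id, best_dist = None, self.max_distance
            for obj_id, (ox, oy) in unmatched.items():
                dist = math.hypot(cx - ox, cy - oy)
                if dist < best_dist:
                    best_id, best_dist = obj_id, dist
            if best_id is None:           # no close match: register a new object
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]
            updated[best_id] = (cx, cy)
        # Objects with no match this frame are simply dropped here; the full
        # pyimagesearch version keeps them around for a few frames instead.
        self.objects = updated
        return self.objects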
"I'm new to motion detection and object tracking. I hope you don't mind if I ask a few questions about the subject: it seems that you didn't use any of the OpenCV built-in object tracking algorithms (e.g. MOSSE). Why?"
I found the reason. It is not usable:
opencv/opencv_contrib#2377