pageauc/speed-camera

How to improve the bounding box around the vehicle?

sebsoft opened this issue · 9 comments

Hi Claude,
Love your work. I'm trying to use speed-camera to classify the passing vehicles by size into classes like car, van, and truck.
The picture below shows a vehicle passing on the road in front of my house; I have a perfect 90-degree view. The height of the bounding box is correct most of the time, but the length is usually incorrect. I could estimate the vehicle class from the height alone, but are there any settings that will get the box around the complete vehicle?

[image]

[image]

Hi Claude,

Thank you for the elaborate answer. Post-processing the captured images would indeed be an elegant solution. Let's see how far I get with that.

Sebastiaan

Hi Claude

I did a post-processing test on the desktop with YOLO, which has a pretrained model with several object classes including cars, trucks, bikes, etc. The first results look pretty good. The red box with id=2 in the picture is the result of the object detection.
I still have to check whether it runs on the RPi as well.

snippet

    net = cv2.dnn.readNet("models/yolov4.weights", "models/yolov4.cfg")
    self.model = cv2.dnn_DetectionModel(net)
    self.model.setInputParams(size=(832, 832), scale=1 / 255)
    # Allow classes containing vehicles only:
    # person, bicycle, car, motorbike, aeroplane, bus, train, truck, boat
    self.classes_allowed = [0, 1, 2, 3, 5, 6, 7, 8, 9]
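Roughly, the detection step then looks like this as a self-contained sketch; the file name, thresholds, and drawing step are placeholders for illustration, not part of the snippet above:

    import cv2

    net = cv2.dnn.readNet("models/yolov4.weights", "models/yolov4.cfg")
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(832, 832), scale=1 / 255)
    classes_allowed = [0, 1, 2, 3, 5, 6, 7, 8, 9]

    frame = cv2.imread("capture.jpg")
    # detect() returns class ids, confidences, and boxes as (x, y, w, h)
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    for class_id, score, box in zip(class_ids, scores, boxes):
        if int(class_id) in classes_allowed:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("capture_boxed.jpg", frame)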

[image]

If I read the code correctly, it saves the last image with its found rectangle. @sebsoft, to improve the rectangle you could keep track of all images for a given vehicle detection event and save the one with the biggest rectangle.
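A rough sketch of that idea, assuming the motion loop hands over one (image, contour) pair per frame of a tracking event (the function and its input structure are hypothetical, not speed-camera's actual code):

    import cv2

    def best_capture(event_frames):
        # event_frames: iterable of (image, contour) pairs collected while
        # one vehicle was being tracked (hypothetical structure).
        # Returns the frame whose motion rectangle has the largest area,
        # together with that rectangle.
        best_img, best_area, best_box = None, 0, None
        for image, contour in event_frames:
            x, y, w, h = cv2.boundingRect(contour)
            if w * h > best_area:
                best_img, best_area, best_box = image, w * h, (x, y, w, h)
        return best_img, best_box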

See below the result of another approach. The first and last image are saved to disk and, together with the timespan between them, passed to the code that does the object detection. Below is the last image, with the object detection box from the first image added. The blue line connects the centers of the two boxes. Using the pixel/m value, the length of the object is calculated, and from the length of the blue line, the speed of the object.

[image: Stop_20240322-1546098_car_4 9_41_]
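In outline, that calculation looks like this sketch (the names and the km/h conversion are mine; px_per_m stands for the pixels-per-metre calibration value):

    import math

    def length_and_speed(box_first, box_last, timespan_s, px_per_m):
        # box_*: (x, y, w, h) detection boxes from the first and last image.
        # With a 90-degree side view, the box width approximates the
        # object length.
        length_m = box_last[2] / px_per_m

        # Distance travelled = length of the line between the box centers.
        cx1, cy1 = box_first[0] + box_first[2] / 2, box_first[1] + box_first[3] / 2
        cx2, cy2 = box_last[0] + box_last[2] / 2, box_last[1] + box_last[3] / 2
        dist_m = math.hypot(cx2 - cx1, cy2 - cy1) / px_per_m

        speed_kmh = dist_m / timespan_s * 3.6
        return length_m, speed_kmh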

@Mobilitysensing That's very nice! Can you share the code changes you made for this? It will help me tune the parameters a bit better; I still see cars passing by at unrealistic speeds in my setup.


Wow, that's clever - I like that! Well done.


You can find the complete code here (it needs some polishing, as it is largely based on trial and error).
It's based on motrack by @pageauc, which contains only the core of speed-camera. There are a few hard-coded numbers that need to be changed to get results with a different camera setup. I was only interested in cars on the road passing from right to left, so these are filtered out in motrack.py. Only cars passing right to left are passed on to user_code_speed.py, where the object detection happens. Since the object detection will detect all objects in the image, there are some constants to determine whether a detected object is on the road.
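As an illustration, such a road filter can be as simple as a pixel band check (the constants below are made-up placeholders, not the values in motrack):

    # Hypothetical pixel band covering the road in the frame; the real
    # values depend entirely on the camera setup.
    ROAD_Y_MIN = 300
    ROAD_Y_MAX = 450

    def on_road(box):
        # Keep only detections whose box center lies inside the road band.
        x, y, w, h = box
        return ROAD_Y_MIN <= y + h / 2 <= ROAD_Y_MAX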

It works pretty well but depends on lighting conditions. When cars cast shadows on the road, they can sometimes be misclassified as another object (boat, plane, etc.) and the bounding box then includes the shadow.

https://github.com/sebsoft/motrack