This project tracks people using the YOLO and ByteTrack algorithms, counting individuals passing a marker and categorizing their direction as right-to-left or left-to-right.
The crossing line runs between the pole and the bottom of the noticeboard.
Demo video: video_2_demo.mp4
This project utilizes the YOLO (You Only Look Once) object detection algorithm combined with the ByteTrack multi-object tracking algorithm to monitor and count people passing a specified marker. The direction of movement (right-to-left or left-to-right) is recorded, and counters are incremented accordingly.
- Read Video: The code starts by loading a video or connecting to a live camera feed. This is like pressing play on a video player.
- Finding People (YOLO): The code uses a deep learning model called YOLO (You Only Look Once) to detect people in each frame of the video. Think of YOLO as a really smart pair of glasses that can spot people instantly in any picture.
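The detection step above can be sketched as follows. The Ultralytics calls in the comments are illustrative usage, not the project's exact code, and assume standard COCO-trained YOLO weights, where "person" is class id 0.

```python
def keep_people(detections, person_class_id=0):
    """Filter raw detections down to the 'person' class.

    detections: iterable of (class_id, confidence) pairs (boxes omitted
    for brevity). With COCO-trained YOLO weights, 'person' is class id 0.
    """
    return [d for d in detections if d[0] == person_class_id]

# Illustrative usage with the Ultralytics API (requires `pip install ultralytics`):
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")            # any YOLO weights file
#   results = model(frame)                # detect objects in one frame
#   dets = [(int(b.cls), float(b.conf)) for b in results[0].boxes]
#   people = keep_people(dets)
```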
- Tracking People (ByteTrack): Once YOLO spots a person, ByteTrack takes over to follow that person from one frame to the next, assigning each person a persistent ID so the code always knows where they go.
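One way to keep each tracked person's position across frames is shown below. This is a simplified sketch, not the repository's exact code; with Ultralytics, `model.track(source, tracker="bytetrack.yaml")` yields the persistent IDs assumed here.

```python
from collections import defaultdict

def update_histories(histories, frame_tracks):
    """Append each person's current x-position to their per-ID history.

    frame_tracks: iterable of (track_id, x_center) for the current frame.
    Comparing consecutive entries later tells us which way a person moved.
    """
    for track_id, x_center in frame_tracks:
        histories[track_id].append(x_center)
    return histories

histories = defaultdict(list)
update_histories(histories, [(1, 100.0), (2, 310.0)])  # frame 1: two people seen
update_histories(histories, [(1, 112.0)])              # frame 2: ID 1 moved right
```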
- Counting People: The code defines a "marker", an imaginary line in the video (derived from the mask.png file). Whenever a person crosses this line, the code records which direction they are moving:
  - If they cross from right to left, one counter is incremented.
  - If they cross from left to right, the other counter is incremented.
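The direction check above can be sketched as a comparison of a person's x-position in the previous and current frames against the line's x-coordinate. This is a minimal illustration, not the repository's exact logic, and `LINE_X` is a hypothetical value.

```python
LINE_X = 400  # hypothetical x-coordinate of the counting line

def crossing_direction(prev_x, curr_x, line_x=LINE_X):
    """Return the crossing direction for one person between two frames."""
    if prev_x < line_x <= curr_x:
        return "left_to_right"
    if curr_x <= line_x < prev_x:
        return "right_to_left"
    return None  # did not cross the line this frame

counts = {"left_to_right": 0, "right_to_left": 0}
direction = crossing_direction(390, 405)  # person moved past the line
if direction:
    counts[direction] += 1
```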
- Displaying Results: The code continuously updates and shows the counts of how many people have crossed the line in each direction, like a scoreboard that tracks the movement of people in real time.
Clone the repository:
git clone https://github.com/rushidarge/People-Tracking-and-Counting.git
cd People-Tracking-and-Counting
Create and activate a virtual environment (optional but recommended):
python3 -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
Install the required dependencies:
pip install -r requirements.txt
- Prepare your input video or camera feed.
- Update the paths for the video, mask, and output in config.json.
- Run the main tracking script:
python get_count_dynamic.py
- View the results and counters in the console output or as specified in the configuration.
- The output video is saved in videos/predicition_output/
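A config.json along the lines below would match the steps above. The exact keys are an assumption based on the description (video, mask, and output paths); check the repository's config.json for the real field names.

```json
{
  "video": "videos/input.mp4",
  "mask": "mask.png",
  "out": "videos/predicition_output/output.mp4"
}
```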
- Person detection using YOLO
- Multi-object tracking with ByteTrack
- Direction-based counting
- Easy configuration and customization
- The counting logic needs tuning; occasionally a person is missed in the count.
- When people overlap, the tracker can lose them.
- When two people walk side by side, one of them may be missed in the count, so the camera must be placed strategically.
- Real-time performance requires a GPU.
YOLO model: https://github.com/WongKinYiu/yolov9
ByteTrack algorithm: https://medium.com/tech-blogs-by-nest-digital/object-tracking-object-detection-tracking-using-bytetrack-0aafe924d292
Ultralytics: https://docs.ultralytics.com/modes/track/