This project combines YOLOv2 and Seq-NMS to achieve real-time video detection.
- Clone the GitHub repository into the folder where you want the project:

  ```
  git clone https://github.com/Ivanioel/seq_nms_yolo_v2
  ```

- Go inside the project:

  ```
  cd seq_nms_yolo_v2
  ```
- Build the project with:

  ```
  make
  ```

  Note that there are some flags you may need to change depending on your environment:
  - `GPU=1` # 0 if your PC doesn't support CUDA. If you are going to run on the GPU, make sure the variables `COMMON` and `LDFLAGS` point to your CUDA installation folder; in this case they point to `/usr/local/cuda-10.1/`.
  - `CUDNN=0` # 1 if your PC supports cuDNN
  - `OPENCV=0` # 1 if your PC supports OpenCV
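  For reference, the relevant lines near the top of the Makefile look roughly like this (a sketch based on the standard darknet Makefile; the CUDA 10.1 paths are the ones used here and should be adjusted to your installation):

  ```makefile
  GPU=1        # 0 if your PC doesn't support CUDA
  CUDNN=0      # 1 if your PC supports cuDNN
  OPENCV=0     # 1 if your PC supports OpenCV

  # For a GPU build, COMMON and LDFLAGS must point at your CUDA installation
  COMMON+= -DGPU -I/usr/local/cuda-10.1/include/
  LDFLAGS+= -L/usr/local/cuda-10.1/lib64 -lcuda -lcudart -lcublas -lcurand
  ```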
- Download `yolo.weights` and `yolov2-tiny.weights` by running:

  ```
  wget https://pjreddie.com/media/files/yolo.weights
  wget https://pjreddie.com/media/files/yolov2-tiny.weights
  ```
- Create your conda environment and install all the required packages (you have to initialize conda first with `conda init`; for more info see https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html):

  ```
  conda create -y -p ./env python=2.7
  conda activate ./env
  pip install --upgrade tensorflow tf_object_detection
  conda install -y opencv matplotlib pillow scipy
  ```
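  As an optional sanity check, you can confirm the environment is usable:

  ```
  python --version                     # should report Python 2.7.x
  python -c "import cv2, tensorflow"   # should exit without errors
  ```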
- Extend `PKG_CONFIG_PATH` (an environment variable that specifies additional paths in which `pkg-config` will search for its .pc files) with the environment's pkgconfig directory:

  ```
  PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$(pwd)/env/lib/pkgconfig
  export PKG_CONFIG_PATH
  ```
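  To check that `pkg-config` now sees the environment's libraries, you can query one of its .pc files; this assumes the conda `opencv` package installed an `opencv.pc` file there:

  ```
  pkg-config --cflags --libs opencv
  ```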
- Copy the darknet shared object and static library into your conda environment:

  ```
  cp libdarknet.so ./env/lib
  cp libdarknet.a ./env/lib
  ```
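  As another optional check, you can verify that the copied shared object loads from Python; if the library or one of its dependencies is missing, this raises an `OSError`:

  ```
  python -c "import ctypes; ctypes.CDLL('./env/lib/libdarknet.so')"
  ```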
- Move to the video folder with `cd video`.
- Copy a video file into the video folder, for example `input.mp4`.
- In the video folder run:

  ```
  python video2img.py -i input.mp4
  python get_pkllist.py
  ```
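  Here `video2img.py` presumably splits the video into individual frames and `get_pkllist.py` builds the list of frame paths the detector will read. Below is a minimal sketch of what the frame-extraction step amounts to with OpenCV (illustrative only; the function, output naming, and the `images` folder are assumptions, not taken from the repository's script):

  ```python
  import cv2  # OpenCV, installed into the conda environment above

  def video2img(video_path, out_dir):
      """Split a video into numbered JPEG frames (illustrative sketch)."""
      cap = cv2.VideoCapture(video_path)
      idx = 0
      while True:
          ok, frame = cap.read()  # ok is False once the video is exhausted
          if not ok:
              break
          cv2.imwrite('%s/%06d.jpg' % (out_dir, idx), frame)
          idx += 1
      cap.release()

  video2img('input.mp4', 'images')  # hypothetical output folder name
  ```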
- Return to the project root folder (`cd ..`) and run `python yolo_seqnms.py` to generate the output images in the `video/output` folder.
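  `yolo_seqnms.py` runs YOLOv2 on each frame and then applies Seq-NMS across frames: detections of the same object are linked through time by overlap and rescored jointly, so a weak detection in one frame is boosted by strong detections of the same object in neighbouring frames. Below is a simplified sketch of that linking-and-rescoring idea (illustrative only, not this repository's code; `best_sequence` and the box format are made up for the example, and the real algorithm also restricts links to the same class and iterates with suppression):

  ```python
  def iou(a, b):
      """Intersection over union of two boxes in [x1, y1, x2, y2] format."""
      ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
      iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
      inter = ix * iy
      union = ((a[2] - a[0]) * (a[3] - a[1]) +
               (b[2] - b[0]) * (b[3] - b[1]) - inter)
      return inter / union if union > 0 else 0.0

  def best_sequence(frames, link_thresh=0.5):
      """frames: one (boxes, scores) pair per frame, boxes as
      [x1, y1, x2, y2] lists. Assumes every frame has at least one box.
      Returns the highest-scoring chain of boxes linked across
      consecutive frames, plus the average score assigned to it."""
      # cum[t][i]: best cumulative score of a chain ending at box i of frame t
      cum = [list(scores) for _, scores in frames]
      prev = [[-1] * len(scores) for _, scores in frames]
      for t in range(1, len(frames)):
          boxes_t, scores_t = frames[t]
          boxes_p = frames[t - 1][0]
          for i in range(len(boxes_t)):
              for j in range(len(boxes_p)):
                  # link boxes in consecutive frames only if they overlap enough
                  if (iou(boxes_t[i], boxes_p[j]) >= link_thresh and
                          cum[t - 1][j] + scores_t[i] > cum[t][i]):
                      cum[t][i] = cum[t - 1][j] + scores_t[i]
                      prev[t][i] = j
      # start backtracking from the best cumulative score in the whole video
      t = max(range(len(frames)), key=lambda k: max(cum[k]))
      i = max(range(len(cum[t])), key=lambda k: cum[t][k])
      chain = []
      while i != -1:
          chain.append((t, i))
          i, t = prev[t][i], t - 1
      chain.reverse()
      # rescoring step: every box on the chain gets the chain's mean score
      avg = sum(frames[ft][1][fi] for ft, fi in chain) / len(chain)
      return chain, avg

  # Toy example: a weak detection in frame 1 is boosted by frame 0
  frames = [([[0, 0, 10, 10]], [0.9]),
            ([[1, 1, 11, 11]], [0.4])]
  print(best_sequence(frames))  # -> ([(0, 0), (1, 0)], 0.65)
  ```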
- If you want to reconstruct a video from these output images, go to the video folder and run:

  ```
  python img2video.py -i output
  ```

  You will then find the detection results in `video/output`.
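  Alternatively, if you have `ffmpeg` installed, you can stitch the frames into a video yourself (this assumes the output frames are numbered like `000001.jpg`; adjust the pattern and frame rate to your data):

  ```
  ffmpeg -framerate 25 -i output/%06d.jpg -c:v libx264 -pix_fmt yuv420p result.mp4
  ```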
- (Extra) If you want to try different videos, with both Seq-NMS and plain NMS, execute `execute_all.sh` in the video folder (after giving it execution permission with `chmod +x execute_all.sh`); all the sample videos in the video folder will be used.
This project reuses a lot of code from darknet, Seq-NMS and models.