Digital Borders - Animal Re-Identification

Final Year Project

Setup

Required Python version: 3.6+. Install the required system packages with: sudo apt install python-opencv

Setting up YOLO

Follow all instructions to install YOLO from here.
Compiling the source code is optional. However, the network configuration and weights are needed.

Running with MobileNetV2-SSD

Download the model file from here (source) and save it under the ssd folder. You can then run demo/object_detection.py and notebooks/ObjectDetection.ipynb using the saved model.
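As a quick smoke test outside the notebook, the saved model can be loaded directly. The following is a minimal sketch, assuming the download is a TensorFlow SavedModel with the standard object-detection serving signature; the example image path is illustrative, and demo/object_detection.py remains the reference implementation:

import cv2
import numpy as np
import tensorflow as tf

# Load the SavedModel stored under the ssd folder (path as used later in this README).
model = tf.saved_model.load("models_bin/ssd/saved_model")
detect_fn = model.signatures["serving_default"]  # assumption: standard detection signature

# Read a test image, convert BGR -> RGB and add a batch dimension.
image = cv2.imread("sample_images/amur_small/tiger_001.jpg")  # illustrative file name
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
input_tensor = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)

# Run detection and print the highest confidence score.
detections = detect_fn(input_tensor)
print("Top detection score:", float(detections["detection_scores"][0][0]))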

Training Object Detection Model

See the datasets folder for details on Object Detection and Re-Id training with reid-strong-baseline.

Training Feature Extractors using Truncated DCNNs

In order to test the feature extraction and clustering using SVM, the following are required:

  • MongoDB - Install using sudo apt install mongodb
  • pymongo - Install using python -m pip install pymongo
  • Model Weights (Pre-Trained over ImageNet)
    • AlexNet - Download from here
    • GoogLeNet - Download from here
    • ResNet50 - Download the .caffemodel for ResNet50 from here
  • Save all the models with the correct names under the models folder as follows:
    • AlexNet - alexnet.caffemodel
    • GoogLeNet - googlenet.caffemodel
    • ResNet50 - resnet50.caffemodel

Ensure that MongoDB is up and listening on localhost:27017.
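Before starting a long run, the database connection can be verified with a quick check (a minimal sketch using pymongo against the default localhost:27017):

from pymongo import MongoClient

# Connect to the local MongoDB instance on the default port.
client = MongoClient("localhost", 27017, serverSelectionTimeoutMS=2000)

# "ping" raises ServerSelectionTimeoutError if the server is not reachable.
client.admin.command("ping")
print("MongoDB is up and reachable on localhost:27017")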

Setting up the data

Details on how to acquire various datasets are provided under the datasets folder.
Store all the images in an appropriate folder. The folder must contain a file called class_mapping.txt, which lists each image file name together with its class label in the form:
Image File Prefix    Individual Id
The file for ELPephants contains this information as required, but the AMUR dataset does not.
Generate the file for AMUR as needed, and put it under the data folder.
A sample image dataset is provided under sample_images/amur_small
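For reference, reading the mapping file can look like the following (a minimal sketch, assuming one whitespace-separated image-file-prefix / individual-id pair per line):

def load_class_mapping(path):
    # Returns a dict mapping image file prefix -> individual id.
    mapping = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Assumption: the last column is the id, everything before it is the prefix.
            prefix, individual_id = line.rsplit(maxsplit=1)
            mapping[prefix] = individual_id
    return mapping

print(load_class_mapping("sample_images/amur_small/class_mapping.txt"))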

Running the code

The Jupyter Notebook file is provided for quick tests. Otherwise, the Python program can be run as:
python train/truncated_dcnns/find_best_svm_model.py models_bin/ssd/saved_model/ sample_images/amur_small/
Replace amur_small with the path to the correct image dataset(s) as needed.
This could take several hours to complete.
By default, the program evaluates all the datasets provided, under all three specified models, and saves the output of each layer of the network into the DB.
The best models are saved to the svm_models_trained folder.
A demo of one of the completed models can be run using the svm_identifier.py script. For example:
python demo/svm_identifier.py svm_models_trained/<model> models_bin/ssd/saved_model <image folder>

Partitioning images for Train and Test during Re-Identification training

Since open-set identification is being tested, some of the known ids must be removed and the SVM trained using the remaining ids.
(For the AMUR database only) First, using the reid_list_train.csv file, only those images whose id is known are retained inside the train folder.
Of the images retained in the train folder, randomly select 10 ids and move every image labelled with one of these ids into a new folder called test. The list of the ids that have been moved is stored under test/reid_list_test.csv.
Since a subset of the database is again needed to test the effectiveness of the SVM classifier, for every id that has not been removed (i.e. ids that are in reid_list_train but not in reid_list_test), remove 25% of the images and move them under the test folder.

The SVM can then be trained using the train subset, and its generalization accuracy measured over the test data.
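The partitioning described above can be sketched roughly as follows. This is an illustrative outline only; utils/create_open_reid_splits.py is the actual implementation, and the id_of lookup is assumed to come from class_mapping.txt:

import random
import shutil
from pathlib import Path

def split_open_set(train_dir, test_dir, id_of, n_held_out=10, holdout_frac=0.25):
    # id_of maps an image path to its individual id (e.g. derived from class_mapping.txt).
    train_dir, test_dir = Path(train_dir), Path(test_dir)
    test_dir.mkdir(exist_ok=True)
    images = list(train_dir.glob("*.jpg"))
    ids = sorted({id_of(p) for p in images})

    # 1. Hold out 10 ids entirely: every image of these ids is moved to test/.
    held_out = set(random.sample(ids, n_held_out))
    for p in images:
        if id_of(p) in held_out:
            shutil.move(str(p), str(test_dir / p.name))

    # 2. For every remaining id, move ~25% of its images to test/ as well.
    for ident in ids:
        if ident in held_out:
            continue
        imgs = [p for p in train_dir.glob("*.jpg") if id_of(p) == ident]
        for p in random.sample(imgs, max(1, int(len(imgs) * holdout_frac))):
            shutil.move(str(p), str(test_dir / p.name))

    # Record the ids that were fully held out.
    (test_dir / "reid_list_test.csv").write_text("\n".join(sorted(held_out)))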


For the ELP dataset, the train and validation splits are already provided under train.txt and val.txt. These are directly used to partition the dataset.
To perform this split, run the utils/create_open_reid_splits.py script on the desired folder.
More details on how to format the data suitable for training with reid-strong-baseline can be found under the datasets folder.

Identification Pipeline with Triplet loss

Only minimal (and slightly out-of-date) information on training and evaluation is provided here.
For a more thorough description of the datasets, setup, and training procedure, see here.

Training Using Re-id Strong Baseline

Check out the training code from here and follow the instructions to complete training.

Prepare the dataset

Several scripts need to be run before the dataset can be used with reid-strong-baseline. All required scripts are listed below; run them in the given order. The sample assumes images are stored in a folder named ELPephants/reid_faces. The metadata file class_mapping.txt, which maps each file to a class id, must be present in the same folder.

python3 create_open_reid_splits.py ELPephants/reid_faces ELPephants/faces_open_reid # Creates two folders, train and test inside `faces_open_reid`
python3 rename_images_to_int_names.py ELPephants/faces_open_reid/train # Renames all files to integer names as needed by Open Re-id
python3 remap_labels_contiguous.py ELPephants/faces_open_reid/train # Remaps identity labels to contiguous ids if any are missing
# Switch to the Open Re-id code base, start a run, and stop it once the datasets are created. This produces the images, splits.json and meta.json files
# ...
python3 partition_ds_for_open_reid.py ../open_reid/amur_data/elp ../reid-strong-baseline/data/elp # Optional split number between [0,10] can also be specified
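To confirm that the Open Re-id run produced the expected artifacts before the final partitioning step, a small hypothetical check such as the following can be used (the elp path matches the example above):

from pathlib import Path

open_reid_out = Path("../open_reid/amur_data/elp")
for name in ("images", "splits.json", "meta.json"):
    status = "ok" if (open_reid_out / name).exists() else "MISSING"
    print(f"{name}: {status}")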

Training

From the reid-strong-baseline folder, run the appropriate training file, specifying the required config file.
For example: train.bat softmax_triplet_with_center_elp.yml
Configurations are included in the datasets/reid/configs folder.

Testing

Once training is completed, test the performance over the test data.
Set the PYTHONPATH to include the reid-strong-baseline folder: export PYTHONPATH=$PYTHONPATH:<path to reid strong baseline folder>
python3 test/test_triplet_loss.py ../reid-strong-baseline/elp_test/resnet50_model_100.pth ELPephants/faces_open_reid/test
Additionally, closed-set and open-set accuracy can be measured using the test/calc_closed_set_acc.py and test/calc_open_set_acc.py scripts.

Object Tracking Demo

Once the model has been trained using reid-strong-baseline, it first needs to be converted to Keras. This can be done using python utils/convert_torch_to_keras.py <torch model> <output keras model name>. The converted model can then be provided to the Object Tracker for re-identification.

A demo can be run over a video too, by running python demo/video_demo.py <Object Detection Model> <Re-ID model (keras)> <video path>.

Live Demo

On both the laptop and the Raspberry Pi, run source setenv.sh from the base folder of the repo to set up the required PYTHONPATHs.

Tracking

Run the subscriber program on the laptop using python3 central_server/start_server.py <Re-Id model> to start the listening service for tracking.
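The listening service follows a standard MQTT subscribe pattern. A minimal stand-alone sketch (paho-mqtt 1.x style callbacks; the broker address and topic name are purely illustrative, and start_server.py remains the actual subscriber) looks like this:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Each message would carry a detection published by the Raspberry Pi.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)  # MQTT broker IP (illustrative)
client.subscribe("detections")  # topic name is an assumption
client.loop_forever()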

Frontend server

Once the MQTT broker service and the central server are up and running, the frontend can be started using the Angular CLI. For installation instructions, refer to the README file under frontend/monitoring_server.
Once Node.js and Angular (version 9) are installed, set the IP address of the MQTT broker in the app.module.ts file. Start the frontend with:

cd frontend/monitoring_server
ng serve --host 0.0.0.0

Raspberry Pi

The official TFLite runtime provided by Google is rather slow. Install the alternative compiled version provided by PINTO, as described in the corresponding README under the raspberry_pi folder.
Install the Paho MQTT client using python3 -m pip install paho-mqtt
On a laptop / desktop, set up the MQTT broker service using sudo apt install mosquitto
Provide values for the MQTT server in the Raspberry Pi code files, and run the program:
python raspberry_pi/run_object_detector.py <Object detection model (tflite)> <MQTT broker IP> [display - optional]
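Loading the detection model follows the usual TFLite interpreter pattern. A minimal sketch is shown below, assuming the tflite_runtime package and a uint8 input tensor; the model file name is illustrative, and raspberry_pi/run_object_detector.py is the actual entry point:

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load the TFLite object detection model and allocate its tensors.
interpreter = Interpreter(model_path="detect.tflite")  # illustrative file name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy frame of the expected shape to check that inference runs.
_, height, width, _ = input_details[0]["shape"]
dummy = np.zeros((1, height, width, 3), dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print("Output tensor shapes:", [d["shape"] for d in output_details])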