- OpenCV
- TensorFlow (version below 1.9)
- Module cdc-acm.ko
- Cloned TensorFlow models repository
git clone https://github.com/tensorflow/models.git
For the commit version used (Jan 2019):
cd models
git checkout 21a4ad75c845ffaf9602318ab9c837977d5a9852
sudo apt install python3-pip python3-dev
pip3 install --user Cython
pip3 install --user contextlib2
pip3 install --user matplotlib
pip3 install --user pillow
pip3 install --user lxml  # if this command throws an error, run: opkg install libxml2-dev libxslt-dev
pip3 install imutils
sudo apt-get install protobuf-compiler
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
cd models/research/object_detection (i.e. cd object_detection if you are still inside models/research)
Copy the contents of this repository here, replacing the existing files.
python3 combined.py  (check that the USB button is on and the camera is connected)
Click on "Register".
Customer 1: enter the customer's name
Item: Dove,Pears,Medimix (No spaces, and complete names without spelling errors)
Click Done
Click Exit
Now click "Start"
Wait for a few seconds.
The objects will be sorted irrespective of their sequence.
After one complete sorting cycle, press EXIT and start again for the next cycle; otherwise it will throw an error (to be fixed).
Make sure the gap between consecutive objects is more than 2-3 seconds after the arm operation completes.
python3 Object_detection_webcam.py
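For reference, a webcam detection script of this kind generally follows the standard frozen-graph inference pattern. The sketch below is a minimal, assumption-laden illustration (TensorFlow 1.x, a frozen graph at a placeholder path, webcam index 0, a 0.6 score threshold); it is not the exact code of Object_detection_webcam.py in this repository.

import cv2
import numpy as np
import tensorflow as tf

PATH_TO_GRAPH = 'allfour_inference_graph/frozen_inference_graph.pb'  # assumed path

# Load the frozen detection graph once.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with detection_graph.as_default(), tf.Session(graph=detection_graph) as sess:
    # Standard tensor names in graphs exported by export_inference_graph.py.
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')

    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # OpenCV gives BGR frames; the model expects RGB with a batch dimension.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        (b, s, c) = sess.run([boxes, scores, classes],
                             feed_dict={image_tensor: np.expand_dims(rgb, axis=0)})
        # Report detections above a confidence threshold.
        for box, score, cls in zip(b[0], s[0], c[0]):
            if score > 0.6:
                print('class', int(cls), 'score', round(float(score), 2), 'box', box)
        cv2.imshow('detection', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()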
TensorFlow Object Detection API workflow:
Credits: https://github.com/datitran/raccoon_dataset
Overall Summary:
- Create a dataset; images of roughly 800x600 resolution are preferred
- Label the images using LabelImg (on GitHub), which outputs an XML annotation file per image
- Convert the XML files to a CSV using the provided script
- Convert the CSV to a TFRecord file using the script generateTFrecord.py
- Create a .pbtxt label map file inside the training folder, listing every class in the dataset in the required format (see the example after this list)
- Download the chosen model's configuration file (ssd-mobilenet.conf) and edit parameters such as the various paths, number of classes, augmentations, learning rate, etc. (see the config snippet after this list)
- Copy all these files into the object_detection folder of the API
- Run train.py with the input model, training directory, dataset, and pipeline configuration file passed as proper arguments
- Normally train for about 10,000 steps, or until the loss is below 1 (or between 1 and 2); monitor training progress on TensorBoard using the "events.." file inside the training folder
- Convert the obtained ckpt file into a frozen graph using export_inference_graph.py with the required arguments
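For reference, a minimal label map (.pbtxt) for the three items registered earlier (Dove, Pears, Medimix) could look like the sketch below; the ids are only an example ordering and must match the ids used when generating the TFRecords.

item {
  id: 1
  name: 'Dove'
}
item {
  id: 2
  name: 'Pears'
}
item {
  id: 3
  name: 'Medimix'
}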
For details and lessons learned, refer to this PPT.
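As an illustration of the configuration edits mentioned in the summary, these are the fields typically changed in an SSD pipeline config; only the edited fields are shown, the rest of the downloaded config stays as-is, and the paths and batch size here are placeholders rather than values from this project.

model {
  ssd {
    num_classes: 3  # must match the number of classes in the label map
  }
}
train_config {
  batch_size: 24  # lower this if the GPU runs out of memory
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco/model.ckpt"  # downloaded pretrained model
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"
  }
  label_map_path: "training/labelmap.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "data/test.record"
  }
  label_map_path: "training/labelmap.pbtxt"
}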
Important commands used:
To install the COCO API needed for training:
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools ~/object_models/models-master/research/
To start training:
python object_detection/legacy/train.py --logtostderr --train_dir=object_detection/training/ --pipeline_config_path=object_detection/training/ssdlite.config
After finishing training:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssdlite.config --trained_checkpoint_prefix training/model.ckpt-29540 --output_directory allfour_inference_graph
To visualise progress during training:
tensorboard --logdir='training'