curl https://raw.githubusercontent.com/c17hawke/FSDS-DVC-NLP-Project-with-docs/main/.gitignore > .gitignore
curl https://raw.githubusercontent.com/c17hawke/general_template/main/init_setup.sh > init_setup.sh
bash init_setup.sh
python -c "import tensorflow as tf;print(tf.config.list_physical_devices('GPU'))"
mkdir TensorFlow && cd TensorFlow
git clone https://github.com/tensorflow/models.git
- remove the .git directory of the models repository to avoid nested-git conflicts
rm -rf models/.git
- add the models folder to .gitignore
echo "TensorFlow/models" >> .gitignore
- Visit the link - https://github.com/protocolbuffers/protobuf/releases
- for Windows users - search for protoc-3.20.1-win64.zip
- for Mac users - search for protoc-3.20.1-osx-x86_64.zip
- for Linux users -
sudo apt install -y protobuf-compiler
- unzip it into the root folder and add
<PATH TO protoc folder>/bin
to the system environment variables (PATH)
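On Linux/macOS the PATH update can be done directly in the shell. A minimal sketch, assuming protoc was unzipped to `$HOME/protoc-3.20.1` (a hypothetical location - adjust it to wherever you extracted the archive):

```shell
# Hypothetical extraction location - adjust to where you unzipped the protoc release
PROTOC_DIR="$HOME/protoc-3.20.1"

# Prepend its bin/ directory so this shell finds protoc first
export PATH="$PROTOC_DIR/bin:$PATH"

# Show the first PATH entry to confirm the prepend took effect
echo "$PATH" | tr ':' '\n' | head -n 1
```

For a persistent change, append the `export` line to `~/.bashrc` (or, on Windows, add the `bin` path via System Environment Variables).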
- run the following commands to verify the installation -
cd TensorFlow/models/research
protoc --version
- run the following commands to compile the protobuf files -
cd TensorFlow/models/research
protoc object_detection/protos/*.proto --python_out=.
pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
- From within TensorFlow/models/research/
cp object_detection/packages/tf2/setup.py .
python -m pip install .
- From within TensorFlow/models/research/
python object_detection/builders/model_builder_tf2_test.py
- create the workspace/example_1 directory in the project root
mkdir -p workspace/example_1
- cd to workspace/example_1
cd workspace/example_1
- download the example notebook
curl https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/_downloads/55b1ed8e083cbc9ca3bfc1c18eb6b860/plot_object_detection_saved_model.ipynb > plot_object_detection_saved_model.ipynb
mkdir workspace/training_demo
cd workspace/training_demo
mkdir -p annotations exported-models models pre-trained-models images/test images/train
- create annotations/label_map.pbtxt and write the content as -
item {
  id: 1
  name: 'helmet'
}
item {
  id: 2
  name: 'head'
}
item {
  id: 3
  name: 'person'
}
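The label map above can be written from the shell with a here-doc. A small sketch, assuming your working directory is training_demo:

```shell
# Create the annotations directory if it does not exist yet
mkdir -p annotations

# Write the three-class label map consumed by generate_tfrecord.py below
cat > annotations/label_map.pbtxt <<'EOF'
item {
  id: 1
  name: 'helmet'
}
item {
  id: 2
  name: 'head'
}
item {
  id: 3
  name: 'person'
}
EOF

# Quick sanity check: three item blocks should be present
grep -c "^item {" annotations/label_map.pbtxt   # prints 3
```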
- It is time to convert our annotations into the so-called TFRecord format -
curl https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/_downloads/da4babe668a8afb093cc7776d7e630f3/generate_tfrecord.py > generate_tfrecord.py
python generate_tfrecord.py -x images/train -l annotations/label_map.pbtxt -o annotations/train.record
python generate_tfrecord.py -x images/test -l annotations/label_map.pbtxt -o annotations/test.record
- Go to the TF2 Model Zoo and download SSD ResNet50 V1 FPN 640x640 (RetinaNet50)
- extract the downloaded model into the training_demo/pre-trained-models directory
- create a folder my_ssd_resnet50_v1_fpn in the training_demo/models folder
- copy pipeline.config from the pre-trained-models directory into my_ssd_resnet50_v1_fpn
- update it as per the documentation - link
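The usual edits to pipeline.config are num_classes (3 here), fine_tune_checkpoint, fine_tune_checkpoint_type ("detection"), label_map_path, the train/eval input_path fields, and a batch_size your GPU can handle. A sketch of the idea using GNU sed (BSD/macOS sed needs `-i ''`) on a tiny sample fragment - a real pipeline.config has many more fields, and the checkpoint folder name below is an assumption based on the SSD ResNet50 download, so match it to yours:

```shell
# Demo fragment - a real pipeline.config is much larger
cat > /tmp/pipeline_fragment.config <<'EOF'
num_classes: 90
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
label_map_path: "PATH_TO_BE_CONFIGURED"
EOF

# 3 classes: helmet, head, person
sed -i 's/num_classes: 90/num_classes: 3/' /tmp/pipeline_fragment.config

# Point at the downloaded checkpoint (folder name is an assumption - match yours)
sed -i 's|fine_tune_checkpoint: ".*"|fine_tune_checkpoint: "pre-trained-models/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"|' /tmp/pipeline_fragment.config

# Point at the label map created earlier
sed -i 's|label_map_path: ".*"|label_map_path: "annotations/label_map.pbtxt"|' /tmp/pipeline_fragment.config

cat /tmp/pipeline_fragment.config
```

In practice most people edit the real pipeline.config by hand in an editor; the sed form is only useful when scripting the setup.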
- copy the training script from TensorFlow/models/research/object_detection/ to the root of the training_demo folder
cp ../../TensorFlow/models/research/object_detection/model_main_tf2.py .
- start training -
python model_main_tf2.py --model_dir=models/my_ssd_resnet50_v1_fpn --pipeline_config_path=models/my_ssd_resnet50_v1_fpn/pipeline.config
- copy the exporter script and export the trained model -
cp ../../TensorFlow/models/research/object_detection/exporter_main_v2.py .
python exporter_main_v2.py --input_type image_tensor --pipeline_config_path ./models/my_ssd_resnet50_v1_fpn/pipeline.config --trained_checkpoint_dir ./models/my_ssd_resnet50_v1_fpn/ --output_directory ./exported-models/my_model