# Retrainer
- A clone of the TensorFlow repository
- ImageMagick available on your PATH
- Bazel available on your PATH
- Go through this tutorial once to get the necessary builds
- From your TensorFlow repo's root folder, build the unused-tags stripper:

```shell
bazel build tensorflow/python/tools:strip_unused
```
- In each script, edit the `$TENSORFLOW` variable to match your TensorFlow repository path
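For example, if your clone lives in your home directory, the edit at the top of each script would look like this (the path is illustrative; use wherever you actually cloned TensorFlow):

```shell
# Path to your TensorFlow clone; adjust to your own checkout location.
TENSORFLOW="$HOME/tensorflow"
```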
- Record a video of each device you want to train the model on
- Put these scripts in the root of your project folder
- Put the videos in a new folder, `videos/`
- In your project root (not the TensorFlow root), create the folders `training-images-large` and `training-images-small`
- Rename the videos from `yourvideo.mov` to `your-label-name.mov`
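The renaming itself is just one `mv` per clip; the file name (minus extension) becomes the class label. `IMG_0001.mov` and the `dandelion` label below are made-up placeholders:

```shell
# One clip per label: the file name (minus extension) is the label name.
mkdir -p videos
: > videos/IMG_0001.mov   # stand-in for a real recording, so this sketch runs
mv videos/IMG_0001.mov videos/dandelion.mov
```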
- `cd your/project/folder`
- Run `video-to-model.sh`
- When the script ends you will have these files in `/tmp`:
  - `output_graph_stripped.pb` is the TensorFlow model that is ready to use with, e.g., TensorFlow's iOS sample.
  - `output_labels.txt` is all the labels, one per video.
  - `output_graph.pb` is the model before stripping unused tags. Using this will give you an error in the iOS sample.
  - `bottleneck` is a cache that speeds up subsequent retraining sessions. Delete these files if you swap out your already-trained videos.
  - `retrain-logs/` contains your logs. You can use TensorBoard to analyze your model during and after training.
- Copy the `output_graph_stripped.pb` and `output_labels.txt` files to your project
- In your code, use something like:

```objectivec
static NSString* model_file_name = @"output_graph_stripped";
static NSString* labels_file_name = @"output_labels";
```

- Names for the input and output layers are `Mul` and `final_result:0`. In the Objective-C iOS sample that would be something like:

```objectivec
const std::string input_layer_name = "Mul";
const std::string output_layer_name = "final_result:0";
```
If you need new source data for individual labels, you can save time by running the scripts individually.

Examples:

- `extract-from-video.sh /path/to/video.MOV` extracts frames from the video. You can change the FPS rate in the script.
- `extract-all-videos.sh` takes all files in `videos/` and runs `extract-from-video.sh` on them.
- `resize.sh dandelion` resizes all images in `training-images-large/dandelion` and puts the smaller images in `training-images-small/dandelion`. The default size is max 200 pixels on either side, and you can change that value in this script.
- `retrain.sh` retrains the perception model with the images in the `training-images-small` folder. It also strips unused tags from the model.
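As a rough sketch of what the extraction step amounts to, the following uses ffmpeg (the actual script may use a different tool; the 2 fps rate, frame name pattern, and `dandelion` label are all illustrative):

```shell
# Extract still frames from one labeled clip into training-images-large/.
video=videos/dandelion.mov              # label comes from the file name
label=$(basename "$video" .mov)
mkdir -p "training-images-large/$label"
if [ -f "$video" ] && command -v ffmpeg >/dev/null; then
  # -r 2: sample two frames per second; tune to taste
  ffmpeg -i "$video" -r 2 "training-images-large/$label/frame-%04d.jpg"
else
  echo "video or ffmpeg missing; nothing extracted"
fi
```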
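The resize step can be approximated with ImageMagick's `convert`: the `200x200>` geometry shrinks only images larger than 200 px on a side, matching the default described above. The `dandelion` label folder is illustrative:

```shell
# Downscale every frame for one label into training-images-small/.
label=dandelion                         # hypothetical label folder
mkdir -p "training-images-small/$label"
for img in "training-images-large/$label"/*.jpg; do
  [ -e "$img" ] || continue             # folder may be empty in this sketch
  convert "$img" -resize '200x200>' "training-images-small/$label/$(basename "$img")"
done
```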
See the TODO file.