An applications library using GTN. Current examples include:
- Offline handwriting recognition
- Automatic speech recognition
To install:

1. Build the Python bindings for the GTN library.
2. `conda activate gtn_env` (using the same environment from Step 1)
3. `conda install pytorch torchvision -c pytorch`
4. `pip install -r requirements.txt`
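As a quick post-install sanity check, the sketch below builds a tiny graph and scores it. It assumes the upstream GTN Python API (`gtn.Graph`, `add_node`, `add_arc`, `gtn.forward_score`) and the PyTorch install from above; it is illustrative only.

```python
# Illustrative post-install sanity check. Assumes the upstream GTN Python
# bindings (Graph, add_node, add_arc, forward_score) and PyTorch.
import gtn
import torch

g = gtn.Graph()
g.add_node(True)         # start node
g.add_node(False, True)  # accepting node
g.add_arc(0, 1, 0)       # single arc with label 0 and default weight 0

# forward_score returns a scalar graph; item() extracts its value (0.0 here).
print("forward score:", gtn.forward_score(g).item())
print("CUDA available:", torch.cuda.is_available())
```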
We give an example of how to train on the IAM offline handwriting recognition benchmark.

First, register for access to the IAM Handwriting Database, then download the dataset with the registration email and password:

`./datasets/download/iamdb.sh <path_to_data> <email> <password>`
Then update the configuration JSON `configs/iamdb/tds2d.json` to point to the data path used above:

```json
"data" : {
    "dataset" : "iamdb",
    "data_path" : "<path_to_data>",
    "num_features" : 64
},
```
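Since the configuration is plain JSON, a quick way to confirm the edit before launching a long run is to load it and check that the data path exists. This is only an illustrative snippet, not the repository's own config loader:

```python
# Illustrative check (not the repository's config loader): verify that the
# edited config points at an existing IAM data directory.
import json
import os

with open("configs/iamdb/tds2d.json") as f:
    config = json.load(f)

data = config["data"]
print(data["dataset"], data["num_features"])
assert os.path.isdir(data["data_path"]), f"no data found at {data['data_path']}"
```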
Single GPU training can be run with:

`python train.py --config configs/iamdb/tds2d.json`
To run distributed training with multiple GPUs:

`python train.py --config configs/iamdb/tds2d.json --world_size <NUM_GPUS>`
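For example, `--world_size 4` launches training across four GPUs.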
For a list of options, run:

`python train.py -h`
Use Black to format Python code. First install it:

`pip install black`

Then run it with:

`black <file>.py`
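Black also accepts directories, so running `black .` from the repository root formats every Python file at once.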
GTN is licensed under an MIT license. See LICENSE.