python build_config.py --csv_path datasets/english_tamil.csv --input_id question --output_id answer \
    --data_count 300 --ckpt_dir eng_tam --save_frequency 25 --epochs 400
Short explanation of the arguments:
--input_id : column name of the source values
--output_id : column name of the target values
--data_count : number of rows used for training (stored as "select" in the config)

Running the command writes a JSON configuration like the following:
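As a minimal sketch of how these flags could be parsed (the flag names and values come from the command above; the parsing code itself and the defaults are assumptions, not the repository's actual implementation):

```python
import argparse

# Hypothetical parser mirroring the CLI flags shown above
parser = argparse.ArgumentParser(description="Build a training config")
parser.add_argument("--csv_path", required=True)
parser.add_argument("--input_id", required=True)    # column with source values
parser.add_argument("--output_id", required=True)   # column with target values
parser.add_argument("--data_count", type=int, default=300)  # rows used for training
parser.add_argument("--ckpt_dir", default="ckpt")           # assumed default
parser.add_argument("--save_frequency", type=int, default=25)
parser.add_argument("--epochs", type=int, default=400)

# Parse the same values as the example command
args = parser.parse_args([
    "--csv_path", "datasets/english_tamil.csv",
    "--input_id", "question",
    "--output_id", "answer",
    "--data_count", "300",
    "--ckpt_dir", "eng_tam",
])
print(args.data_count)
```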
{
"csv_path": "datasets/english_tamil.csv",
"select": 300,
"input_node": "question",
"output_node": "answer",
"input_vectorizer": "question.json",
"output_vectorizer": "answer.json",
"batch_size": 128,
"epochs": 400,
"units": 64,
"embed_size": 32,
"ckpt_dir": "eng_tam",
"save_frequency": 25
}
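The generated config can then be read back like any JSON file. A minimal sketch, using the config content shown above inlined as a string (the actual file path the script writes to may differ):

```python
import json

# Config content as shown above, written by build_config.py
config_text = """
{
    "csv_path": "datasets/english_tamil.csv",
    "select": 300,
    "input_node": "question",
    "output_node": "answer",
    "input_vectorizer": "question.json",
    "output_vectorizer": "answer.json",
    "batch_size": 128,
    "epochs": 400,
    "units": 64,
    "embed_size": 32,
    "ckpt_dir": "eng_tam",
    "save_frequency": 25
}
"""
config = json.loads(config_text)

# Note: the --data_count CLI flag is stored under the "select" key
print(config["select"])
print(config["ckpt_dir"])
```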
The inference.ipynb notebook can be used to test the trained model.