Prediction or inference using the BAT model
YoushaaMurhij opened this issue · 3 comments
Nice work!
I wonder how I can run a simple inference step with your model, since you use PyTorch Lightning and your current configuration is aimed at training a model and validating it.
I think the sampling process [here] is required in all modes, even in inference mode, right?
Thanks
You can test a model by using the `--test` flag. That is exactly what you need.
The name “sampler” may be a little confusing: the sampler used in test mode actually returns all of the test sequences.
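For reference, in a PyTorch Lightning project a `--test` flag typically routes to `Trainer.test`, which runs the model over the full test dataloader. A minimal sketch of that flow, assuming hypothetical `BAT` and `TrackingDataModule` classes and checkpoint path (the real names live in this repo's code and configs):

```python
import pytorch_lightning as pl

# Hypothetical imports: the actual model class, datamodule, and
# checkpoint path depend on this repository.
from models import BAT                     # placeholder import
from datasets import TrackingDataModule    # placeholder import

model = BAT.load_from_checkpoint("checkpoints/bat.ckpt")
datamodule = TrackingDataModule(split="test")

trainer = pl.Trainer(gpus=1)
# trainer.test iterates over every sequence returned by the
# test-mode sampler and reports the configured metrics.
trainer.test(model, datamodule=datamodule)
```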
Thanks for your response. By inference I mean feeding a single sample input to the model and getting the prediction result, without any further validation or metric computation. Is this possible without rewriting separate code for all the pre-processing steps you used during training?
Thanks
Currently, we do not support evaluating a single input. You will have to rewrite some code to achieve this. The function `evaluate_one_sequence` can give you some hints.
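As a rough sketch of what such a single-input path could look like, assuming a Lightning module whose `forward` takes a preprocessed tensor; the class name, checkpoint path, and the `preprocess` helper below are all hypothetical stand-ins for what `evaluate_one_sequence` actually does per frame:

```python
import torch

# Hypothetical sketch of single-input inference; adapt the class name,
# checkpoint path, and pre-processing to this repository.
from models import BAT  # placeholder import

model = BAT.load_from_checkpoint("checkpoints/bat.ckpt")
model.eval()

# `preprocess` stands in for the same per-frame steps that
# evaluate_one_sequence applies before the forward pass
# (e.g. cropping, point sampling, normalization).
sample = preprocess(raw_points, previous_box)  # placeholder names

with torch.no_grad():
    prediction = model(sample)  # predicted box for this frame
print(prediction)
```

The key point is that the input must go through exactly the same pre-processing as in `evaluate_one_sequence` before it is handed to the model.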