some problems with evaluation results
sarapapi opened this issue · 2 comments
sarapapi commented
Dear @xutaima,
I noticed that there are two problems in the files generated after the remote evaluation:
- the config.yaml file always reports both source and target as "speech", even when text is explicitly passed as the target type (I have not tried passing text as the source)
- if the `--computation-aware` flag is passed, both the "_CA" metrics and the ideal metrics are reported as computation-aware; if the flag is not passed, the ideal metrics are shown correctly, but the computation-aware ones are missing.
Thanks
sarapapi commented
Hi, I also noticed that the "metric" field is empty in the instances.log file. (Still, the CA and NCA metrics are identical.)