Any date on releasing the training script for the model?
This is amazing work! I'm working in the same direction, but at a smaller scale.
Are you planning to release a paper or a training script? If not, could you explain how you trained the model? I know an overview is given on the HF model page, but I'd like to know the steps involved in achieving this.
Thanks in advance :)
I updated the HuggingFace repo with code on how to train the model - https://huggingface.co/vectara/hallucination_evaluation_model.
Training is as simple as training a binary classifier on a large set of labelled data. Each example consists of a source document and a summary (usually generated by an LLM, though some are human curated), plus a label that comes either from human raters or, for training data only (not for evaluation), from synthetic sources such as other large models. For evaluation, only human labels should be used, as they are the ground truth; synthetic data is fine for training only. Note that calling an LLM to generate training data may violate its terms of service. Look up the SummaC and TRUE datasets for factual-consistency training data (we used these, amongst others, and their test sets for evaluation). The training regime is explained well here - https://arxiv.org/pdf/2204.04991.pdf
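For anyone looking for a starting point, here is a minimal sketch of that binary-classifier setup using Hugging Face Transformers. The base checkpoint (`bert-base-uncased`), the toy examples, and the hyperparameters are placeholders for illustration only, not the actual configuration used for this model.

```python
# Minimal sketch: train a binary factual-consistency classifier over
# (source document, summary) pairs. Checkpoint, data, and hyperparameters
# below are illustrative placeholders, not the model's real configuration.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

class ConsistencyPairs(Dataset):
    """(source document, summary) pairs with a 0/1 consistency label."""
    def __init__(self, examples, tokenizer, max_length=512):
        self.examples = examples
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        doc, summary, label = self.examples[idx]
        enc = self.tokenizer(
            doc, summary,                      # encoded jointly as a text pair
            truncation=True, max_length=self.max_length,
            padding="max_length", return_tensors="pt",
        )
        return {
            "input_ids": enc["input_ids"].squeeze(0),
            "attention_mask": enc["attention_mask"].squeeze(0),
            "labels": torch.tensor(label, dtype=torch.long),
        }

# Toy examples: label 1 = summary is supported by the document, 0 = it is not.
train_examples = [
    ("The cat sat on the mat.", "A cat was on the mat.", 1),
    ("The cat sat on the mat.", "The dog chased the cat outside.", 0),
]

model_name = "bert-base-uncased"  # placeholder base encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

loader = DataLoader(ConsistencyPairs(train_examples, tokenizer), batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(1):                        # a real run needs far more data and epochs
    for batch in loader:
        outputs = model(**batch)              # cross-entropy loss computed from `labels`
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice you would swap the toy list for the SummaC / TRUE training data mentioned above and hold out the human-labelled test splits for evaluation.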