lss_eval

lss_eval is a new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found here
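This README does not document the package's API, so the snippet below is only a hypothetical usage sketch: the `lss_eval` module path, the `score` function, its `source`/`generated` parameters, and the `[0, 1]` score range are all assumptions for illustration, not taken from the repository.

```python
# Hypothetical usage sketch -- the actual lss_eval API may differ.
# Assumed: an `lss_eval` module exposing a `score(source, generated)`
# function that returns a faithfulness score in [0, 1].
import lss_eval

source = "The Eiffel Tower, completed in 1889, is 330 metres tall."
generated = "The Eiffel Tower was finished in 1889 and stands 330 m high."

# A higher score would indicate that the generated text is better
# supported by (more faithful to) the source text.
faithfulness = lss_eval.score(source=source, generated=generated)
print(f"Faithfulness: {faithfulness:.3f}")
```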

Primary language: Python
License: Creative Commons Zero v1.0 Universal (CC0-1.0)