h2oai/enterprise-h2ogpte

How do you print the input that the LLM actually works on?

Opened this issue · 0 comments

From what I can see, there is a RAG pipeline that feeds an input to the LLM reader. Is there any way to print the context that was supplied to each of these models when running the benchmarks?
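Conceptually, what I want to inspect is the assembled context-plus-question string that the pipeline sends to the model. A minimal sketch of that idea (all names here are hypothetical and for illustration only, not the actual h2oGPTe API):

```python
# Hypothetical sketch: build_rag_prompt and the chunk format are illustrative,
# not part of the h2oGPTe API. The point is that the RAG context is just a
# string assembled before the LLM call, so it can be printed at that point.

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Assemble the retrieved chunks and the question into one prompt string."""
    context = "\n\n".join(f"[chunk {i}] {c}" for i, c in enumerate(chunks, 1))
    return (
        "Use the following context to answer the question.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    chunks = ["Doc A says X.", "Doc B says Y."]
    prompt = build_rag_prompt("What does Doc A say?", chunks)
    print(prompt)  # this is the exact text the LLM would receive
```

Having a hook like this (or a debug/verbose flag) in the benchmark path would make it possible to see exactly what each model was given.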