Issues
Missing package.json file
#277 opened by chivalry1314 - 1
The estimate_uncertainty method seems to handle only one string at a time. How can multiple strings be processed in parallel? Could you provide an example?
#275 opened by eLeventhw - 3
Using LM-Polygraph with Custom OpenAI Endpoint / with Pre-Generated Responses
#263 opened by muelphil - 3
Perplexity Calculation
#273 opened by vthost - 6
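The issue above concerns perplexity calculation. As a general reference (not necessarily how LM-Polygraph computes it), perplexity is usually defined as the exponentiated mean negative log-likelihood of the generated tokens. A minimal sketch, assuming per-token natural-log probabilities are available:

```python
import math

def perplexity(token_logprobs):
    """Perplexity as exp of the mean negative log-likelihood.
    token_logprobs: natural-log probabilities, one per token."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# If every token has probability 1/4 (log-prob ln(0.25)),
# perplexity is 4: the model is as uncertain as a uniform
# 4-way choice at each step.
ppl = perplexity([math.log(0.25)] * 3)  # ~4.0
```

Using base-2 logs with `2 ** (...)` gives the same value; mixing bases between the log and the exponentiation does not.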
A question about GPT-3.5 output: how can GPT-4 recognize related words in this situation?
#260 opened by wangzhonghai - 4
Ways to compute ROC-AUC and the labels
#262 opened by EdwardChang5467 - 1
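The issue above asks how ROC-AUC and its labels are computed. As a general reference (not LM-Polygraph's own implementation), ROC-AUC has a rank interpretation: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counted as one half. A minimal, dependency-free sketch:

```python
def roc_auc(labels, scores):
    """ROC-AUC via its rank interpretation: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative (ties count as 1/2). labels are 0/1 indicators."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    total = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                total += 1.0
            elif p == q:
                total += 0.5
    return total / (len(pos) * len(neg))

# Example: 3 of the 4 positive/negative pairs are ranked correctly.
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])  # 0.75
```

For uncertainty-estimation benchmarks the labels are typically correctness indicators for each generation, and the scores are the (negated) uncertainty values; libraries such as scikit-learn provide an O(n log n) version via `sklearn.metrics.roc_auc_score`.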
How to support VLM?
#267 opened by Zhitao-He - 1
Is there a requirement for the Python version to run this repository on a Linux system?
#261 opened by EdwardChang5467 - 0
Example for normalization
#226 opened by YiJohnny - 2
generate_texts on wbmodel ignores generation parameters and stopping criteria.
#224 opened by rvashurin - 2
Dockerfile adjustments
#206 opened by SuperCoolCucumber - 4
Error loading larger models - You shouldn't move a model when it is dispatched on multiple devices
#147 opened by avi-jain - 3
Possible mismatching max_length and max_new_tokens in example eval script
#118 opened by kirill-fedyanin - 6
Entropy calculation may be wrong?
#195 opened by athrvkk - 2
[Question] Pipeline integration (Langchain)
#165 opened by Rebell-Leader - 2
AutoModelForCausalLM max_length
#163 opened by amayuelas - 4
Get the uncertainty scores without rerunning the models
#144 opened by caiqizh - 4
Demo doesn't work.
#142 opened by caiqizh