gkamradt/LLMTest_NeedleInAHaystack
Doing simple retrieval from LLMs at various context lengths to measure accuracy
Language: Jupyter Notebook · License: NOASSERTION
Issues
Different prompts in providers - I wonder why Cohere doesn't have "Don't give information outside the document or repeat your findings" and whether it makes a difference
#50 opened by radarFudan - 0
Add base_url env var in the OpenAI provider to support OpenAI-compatible local inference (Ollama, TGI, etc.)
#49 opened by backroom-coder - 1
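A minimal sketch of what this could look like, assuming the provider wraps the official `openai` Python client; the `OPENAI_BASE_URL` variable name and the fallback URL are assumptions, not the repository's actual code:

```python
import os

from openai import OpenAI

# Fall back to the official endpoint when the env var is unset.
# OPENAI_BASE_URL is an assumed variable name for this sketch.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
)

# The same client can then talk to Ollama, TGI, or any other
# OpenAI-compatible server, e.g. OPENAI_BASE_URL=http://localhost:11434/v1
```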
How can we cite the Needle-in-a-Haystack?
#48 opened by rozyang - 4
Question: Can the Haystack have variations?
#44 opened by BradKML - 1
[Feature Proposal] Multi-needle in a haystack
#41 opened by jsharf - 2
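A rough sketch of the proposal, assuming needles are plain strings placed at evenly spaced depths in the haystack; the helper name and spacing rule are hypothetical:

```python
def insert_needles(haystack: str, needles: list[str]) -> str:
    """Insert each needle at an evenly spaced depth in the haystack.

    Hypothetical helper: the actual proposal may space needles
    differently or insert them at sentence boundaries.
    """
    result = haystack
    for i, needle in enumerate(needles, start=1):
        # Depth as a fraction of the (growing) context length.
        pos = int(len(result) * i / (len(needles) + 1))
        result = result[:pos] + " " + needle + " " + result[pos:]
    return result

long_text = "lorem ipsum " * 1000  # stand-in haystack for illustration
context = insert_needles(long_text, [
    "The first secret ingredient is figs.",
    "The second secret ingredient is prosciutto.",
])
```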
I was wondering about the evaluation method
#39 opened by gauss5930 - 1
multi-needle-eval-pizza-3 dataset not found
#34 opened by gkamradt - 1
Convert the repository to a PyPI package
#31 opened by kedarchandrayan - 1
Remove passing of API keys as parameters and read them from environment variables
#32 opened by kedarchandrayan - 2
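A sketch of the requested pattern; the helper and the specific environment variable names are assumptions, not the repository's actual API:

```python
import os

def get_api_key(env_var: str) -> str:
    """Read an API key from the environment instead of a parameter.

    Fails with a clear message when the variable is missing, rather
    than passing keys around in code. (Names are hypothetical.)
    """
    key = os.environ.get(env_var)
    if not key:
        raise ValueError(f"Please set the {env_var} environment variable.")
    return key

anthropic_key = get_api_key("ANTHROPIC_API_KEY")
```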
Update the Anthropic package
#13 opened by pavelkraleu - 1
Model kwargs support
#21 opened by LazaroHurtado - 12
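One way such support might look, assuming extra keyword arguments are forwarded straight to the underlying chat completion call; the function itself is hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def evaluate(prompt: str, model: str = "gpt-4", **model_kwargs):
    """Forward arbitrary keyword arguments (temperature, top_p, ...)
    to the API call instead of hard-coding them."""
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **model_kwargs,
    )

# e.g. evaluate("Where is the needle?", temperature=0.0, max_tokens=300)
```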
Standard Tokenizer
#25 opened by prabha-git - 0
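If the goal is one tokenizer across all providers, `tiktoken` is a natural candidate; a minimal sketch, with the choice of `cl100k_base` as an assumption:

```python
import tiktoken

# cl100k_base is the encoding used by gpt-4 / gpt-3.5-turbo; using it
# everywhere would give comparable context-length counts across providers.
encoding = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(encoding.encode(text))

print(count_tokens("The needle is in the haystack."))
```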
Install pre-commit with end-of-file-fixer
#11 opened by pavelkraleu - 0
Replace os.path with pathlib
#12 opened by pavelkraleu - 1
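The mechanical shape of such a change, as a sketch with hypothetical file names:

```python
from pathlib import Path

model_name = "gpt-4"  # hypothetical value for illustration

# Before (os.path style):
#   results_path = os.path.join("results", model_name + ".json")

# After (pathlib style), same behaviour plus easy directory handling:
results_path = Path("results") / f"{model_name}.json"
results_path.parent.mkdir(parents=True, exist_ok=True)
results_path.write_text("{}")
```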
Implement Docker for testing
#17 opened by pavelkraleu - 1
Anthropic Naming Conflict Error
#15 opened by prabha-git - 0
Code optimizations
#20 opened by LazaroHurtado - 0
Hard-coding of 'gpt-4' for evaluation
#10 opened by prabha-git - 2
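A sketch of the likely fix: make the evaluator model a parameter that defaults to 'gpt-4' to preserve current behaviour. Class and method names here are hypothetical, not the repository's actual code:

```python
class OpenAIEvaluator:
    """Hypothetical sketch of an evaluator with a configurable model."""

    def __init__(self, model_name: str = "gpt-4"):
        self.model_name = model_name

    def evaluate_response(self, response: str) -> str:
        # The real evaluator would send `response` to self.model_name
        # for scoring; omitted to keep the sketch self-contained.
        raise NotImplementedError

evaluator = OpenAIEvaluator(model_name="gpt-4-turbo")
```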
Azure OpenAI key
#3 opened by cobraheleah - 4
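Supporting Azure presumably means using the `AzureOpenAI` client from the `openai` package; a minimal sketch in which the endpoint variable, key variable, and deployment name are all assumptions:

```python
import os

from openai import AzureOpenAI

# All names here are hypothetical; Azure needs an endpoint, an API
# version, and a deployment name in addition to the key.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # Azure deployment name, not model name
    messages=[{"role": "user", "content": "Where is the needle?"}],
)
```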
Add license file
#4 opened by haesleinhuepf - 1