Issues
Encoding Error: `'ascii' codec can't encode character '\xe0' in position 41: ordinal not in range(128)`
#576 opened by SamuelDevdas - 1
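For context on #576: this is standard Python behavior when text containing non-ASCII characters (here `'\xe0'`, i.e. "à") is pushed through the ASCII codec. A minimal reproduction and the usual workaround (the sample string is illustrative, not from the issue):

```python
# Reproduce the UnicodeEncodeError from #576: the ASCII codec
# cannot represent characters above U+007F such as '\xe0' ('a grave').
text = "voil\xe0"

try:
    text.encode("ascii")
except UnicodeEncodeError as exc:
    print(exc)  # 'ascii' codec can't encode character '\xe0' ...

# The usual fix is to encode explicitly as UTF-8 (or run Python with
# PYTHONUTF8=1 so it defaults to UTF-8 for I/O).
encoded = text.encode("utf-8")
print(encoded)  # b'voil\xc3\xa0'
```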
llm loses track of plugins when upgraded (with uv)
#575 opened by noamross - 20
Add o1 support
#570 opened by kevinburkesegment - 0
Provide common way to specify cert overrides
#568 opened by jaycle - 2
Change default CLI command from `prompt` to `--help`
#529 opened by tutacat - 0
OSError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ~/python3.10/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllmodel.so)
#563 opened by raymondworkshop - 1
Best practice for remote APIs that require a proxy
#519 opened by sahuguet - 2
Don't require quoting the prompt
#527 opened by almson - 1
UTF8 Surrogates Not Allowed
#546 opened by rsbohn - 1
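For context on #546: CPython refuses to encode lone UTF-16 surrogates (U+D800 through U+DFFF) to UTF-8, which can surface when persisting model responses. A minimal illustration; the choice of error handler is a suggestion, not llm's actual behavior:

```python
# A lone surrogate is not valid UTF-8, so encoding raises.
bad = "ok \ud800"

try:
    bad.encode("utf-8")
except UnicodeEncodeError as exc:
    print(exc.reason)  # surrogates not allowed

# One workaround: replace the offending code point before storing.
cleaned = bad.encode("utf-8", errors="replace").decode("utf-8")
print(cleaned)  # 'ok ?'
```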
File upload support?
#523 opened by chrisspen - 0
Truncated output with Llama model
#561 opened by pandeiro - 2
backslash cannot be used in `llm chat`
#516 opened by grota - 6
Design new LLM database schema
#556 opened by simonw - 1
Broken markup in the docs
#558 opened by simonw - 3
Can't add HTTP authentication headers from CLI
#551 opened by akaihola - 1
`llm.get_model()` should return default model, make `get_default_model()`/`set_default_model()` documented APIs
#553 opened by simonw - 3
OpenAI plugin should use `self.get_key()`
#552 opened by simonw - 0
Fix warning about `dict()` vs. `model_dump()`
#554 opened by simonw - 4
Add support for gpt-4o-2024-08-06
#548 opened by tkfv - 1
OpenRouter plugin models not visible in Ubuntu 22.04, Win11 with OPENROUTER_KEY=
#550 opened by akaihola - 0
llm install llm-gguf breaking llm
#541 opened by lakamsani - 1
Parsing JSON in responses
#514 opened by detrin - 1
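For context on #514: extracting JSON from a model response often means tolerating a Markdown code fence around the payload. A minimal stdlib sketch; the fence-stripping heuristic is an illustrative assumption, not part of llm:

```python
import json


def parse_json_response(text: str):
    """Parse JSON from a model response, tolerating a Markdown code fence."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (with its optional "json" tag)
        # and everything from the closing fence onward.
        cleaned = cleaned.split("\n", 1)[1]
        cleaned = cleaned.rsplit("```", 1)[0]
    return json.loads(cleaned)


print(parse_json_response('```json\n{"answer": 42}\n```'))  # {'answer': 42}
```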
Pydantic failures
#520 opened by achille - 5
"AttributeError: module 'numpy' has no attribute 'int8'" on fresh brew install
#531 opened by jcushman - 3
llm 0.14: Can't run `llm chat` on Windows 11
#495 opened by rsbohn - 2
Remove warning about PyTorch
#538 opened by simonw - 1
Support for new gpt4o-mini model
#536 opened by simonw - 0
Remove obsolete DEFAULT_EMBEDDING_MODEL code
#537 opened by simonw - 0
Discussion: Should `llm` keep `pip` as a dependency?
#528 opened by tutacat - 0
[FR] Using Whisper.cpp, add a voice chat mode
#526 opened by NightMachinery - 0
keys set command should provide better feedback
#521 opened by sahuguet - 1
Autocomplete online cli
#512 opened by lzumot - 1
`llm logs` cannot combine `-q` and `-m` options
#515 opened by simonw - 0
Limit max tokens in response
#513 opened by detrin - 0
Plugin for HF serverless inference
#510 opened by hugobowne - 0
Log token performance stats
#509 opened by simonw - 0
asyncio support
#507 opened by simonw - 0
how to pass messages list to model.prompt()
#506 opened by cometta - 0
Hang on 429 response from OpenAI
#504 opened by tylerbrandt - 0
Please add NVIDIA cloud API
#503 opened by kirkog86 - 5
All I ever get is "insufficient_quota"
#494 opened by ijt - 0
llm keys set openai
#498 opened by S4grg