Issues
- Insert temporary prompt (#131, 5 comments)
- Function calling (#130, 5 comments)
- McAfee llamafile issue (#128, 2 comments)
- Add optional callback for problem handling (#127, 0 comments)
- Function-level documentation (#126, 0 comments)
- Playmaker integration (#125, 5 comments)
- Add WebGL support (#112, 1 comment)
- Text Completions (#99, 0 comments)
- Allow prompt formatting by the user (#88, 0 comments)
- Fix server-client domain resolution (#87, 4 comments)
- Test Windows IL2CPP (#86, 1 comment)
- Check how the stopwords work for phi2 (#84, 0 comments)
- Add functionality to stop the chat (#83, 2 comments)
- Can't offload to GPU with a 3060 (#79, 0 comments)
- VisionOS iOS support (#74, 7 comments)
- Server could not be started! (#70, 5 comments)
- Remember optimal server settings (#69, 0 comments)
- Kill llamafile processes on Unity crash (#61, 0 comments)
- Include all llama.cpp options to UI (#56, 0 comments)
- Allow to change the prompt dynamically (#55, 3 comments)
- How to make it faster? (#53, 8 comments)
- JSON error out of nowhere? (#52, 0 comments)
- Use embeddings for long-term memory (#45, 12 comments)
- Add Android support (#40, 0 comments)
- Weird performance on RTX 2060 super (#29, 0 comments)
- Automatic release markdown editing (#25, 0 comments)
- Code autoformatting (#24, 9 comments)
- Error when testing the LLM (#22, 4 comments)
- Test on Android (#7, 1 comment)
- Show / hide advanced options (#6, 1 comment)
- Integrate chat templates (#5, 1 comment)
- Integrate ChatGPT support (#2, 2 comments)