Issues
Crash on AMD graphics card on Windows
#202 opened by tempstudio - 7
Can it work with the Meta Quest 3?
#268 opened by al3dv2 - 1
Increase AI answer speed?
#269 opened by al3dv2 - 1
Migrate RAG to LLMUnity
#222 opened by amakropoulos - 2
Context Size
#270 opened by 3inary - 1
Function call
#265 opened by vcalazas - 2
[Android build] IOException: The file exists
#262 opened by siduko - 0
Chat speed option (throttling)
#264 opened by amakropoulos - 6
Issue starting LLM
#249 opened by Eelguezabal - 6
LLM service couldn't be created with release 2.2.0
#223 opened by ltoniazzi - 2
WebGL usage
#259 opened by Aishu696 - 13
Access to archchecker.dll denied
#250 opened by MohxGames - 1
Unable to load Hubble-4B
#245 opened by tempstudio - 6
Do not expose system prompt in save file
#236 opened by 3inary - 6
Make LLMCharacter extendable
#235 opened by 3inary - 7
Crash with multiple Load calls
#204 opened by 3inary - 3
Build failing on Windows Intel 64-bit
#227 opened by 3inary - 3
Image Input
#134 opened by Legerdo - 3
Support for GPUs without CUDA or ROCm support
#228 opened by Flamerace - 3
Hot-swap LoRA with updated llama.cpp
#212 opened by ltoniazzi - 2
The editor crashes when exiting play mode while it is creating the LLM service
#171 opened by SubatomicPlanets - 0
[Regression] Custom Model Path not working anymore
#206 opened by 3inary - 1
Unity crashing when --mmproj flag is added
#207 opened by NOSCOPEdev - 0
Missing contributing guidelines
#200 opened by ltoniazzi - 2
Very important notice for 2.1.0
#203 opened by NOSCOPEdev - 0
I get an error when I try a LoRA .bin
#195 opened by whynames - 4
Find LLM automatically
#173 opened by SubatomicPlanets - 1
Debug Log modes
#178 opened by SubatomicPlanets - 0
Allow shared base prompt for LLMs
#184 opened by amakropoulos - 0
Double bos warning in Llama3
#185 opened by amakropoulos - 1
Save the history in a separate folder
#177 opened by SubatomicPlanets - 9
Adding New Features to LLMUnity
#149 opened by TKTSWalker - 10
I updated the repo and get this error "Tried architecture: x64-acc, System.Exception: Failed to load library x64-acc."
#183 opened by whynames - 3
llamafile issue on mac M2
#139 opened by Alexnnn - 2
Add DontDestroyOnLoad to the LLM component
#152 opened by SubatomicPlanets - 4
LLMUnitySetup can be added as a component
#168 opened by SubatomicPlanets - 1
Saving the Chat Log
#144 opened by DinisB - 1
Better LLM and LLMClient workflow
#151 opened by SubatomicPlanets - 13
llama.cpp integration with DLL
#141 opened by amakropoulos - 1
Make it optional
#169 opened by SubatomicPlanets - 3
Added History Function But Need To Clean
#161 opened by TKTSWalker - 5
1.2.8 fails to receive data
#158 opened by Gabri94x - 9
Add support for Phi-3
#148 opened by Maskusa - 1
Should we consider including Google Gemini running on Vertex AI in this library?
#142 opened by david-wei-01001 - 6
Setting Up And Using ChatML By Myself
#143 opened by TKTSWalker