LLocal aims to provide a seamless, privacy-driven chat experience built on open-source technologies (Ollama), particularly open-source LLMs (e.g. Llama 3, Phi-3, Mistral), with a focus on ease of use.
LLocal can be installed on Windows, Mac, and Linux.
- Chats are stored LLocally.
- LLocal utilizes Ollama, which ensures that everything, from processing to inference, happens LLocally on your machine.
- Seamlessly switch between models.
- Easily pull new models.
- Image upload for models that support vision.
- Web search (built-in website scraper as well as DuckDuckGo search) for all models.
- Responses are rendered as markdown (supporting code blocks with syntax highlighting, tables, and much more).
- Multiple themes (5 themes, all supporting both light and dark mode)
- Seamless integration with Ollama, from download to install.
- Chat with images ✅
- Web Search ☑️ (marked purple because it can still be improved)
- Retrieval Augmented Generation/RAG (with single PDFs)
- Multiple PDF chat
- Text-to-speech models (only if responses can be made to sound human-like).
- Community wallpapers
- Community themes (similar to what Spicetify does)
- Lofi Music (this would be optional)
- Speech to text (Do we really need it?)
- Conversations like those with ChatGPT (speech-to-text input and text-to-speech output, with the aim of low latency).
- Chat with chats ?! (Not sure)
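At its core, the planned RAG feature above comes down to splitting a document into chunks, retrieving the chunk most relevant to the question, and prepending it to the model's prompt. A minimal, hypothetical sketch of that retrieval step (plain word overlap, not LLocal's actual implementation, which would use a PDF parser and embeddings):

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and extract its alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into chunks of at most `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q = tokens(question)
    return max(chunks, key=lambda c: len(q & tokens(c)))

# Toy document; a real pipeline would chunk extracted PDF text instead.
doc = ("Ollama runs large language models locally. "
       "LLocal stores every chat on your own machine. "
       "Electron apps bundle a Chromium runtime.")
best = retrieve("runs language models locally", chunk(doc, size=8))
# `best` is the first chunk, which would then be prepended to the LLM prompt.
```

Real implementations typically replace the word-overlap score with embedding similarity, but the retrieve-then-prompt flow stays the same.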
At some point, LLocal may pivot in a different direction... (though this would need to be discussed with the users).
LLocal is an Electron application with React and TypeScript.
```bash
$ npm install
$ npm run dev
```
```bash
# For Windows
$ npm run build:win

# For macOS (M-series)
$ npm run build:mac:arm

# For macOS (Intel chips)
$ npm run build:mac:intel

# For Linux (supported now!)
$ npm run build:linux
```
You can refer to Contribute.md for contribution guidelines.