Apple-On-device-LLM

On-device LLM for Apple Silicon: Leveraging MLX and OpenELM with RAG for local, private language processing.

Description

  • This project implements a local, private Large Language Model (LLM) designed to run on Apple Silicon devices. It uses the MLX framework from Apple ML Research for efficient machine learning on Apple Silicon, OpenELM as the language model, OpenWebUI for the user interface, and Retrieval-Augmented Generation (RAG) to ground responses in locally retrieved context (a sketch of the RAG flow follows).
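
  • A minimal sketch of the RAG flow, assuming the mlx-lm package and an MLX-converted OpenELM checkpoint; the model id and the toy keyword-overlap retriever are illustrative stand-ins for a real checkpoint and vector store:

from typing import List

from mlx_lm import load, generate

# Toy document store; in practice these would be chunks of your own files.
DOCS = [
    "MLX is an array framework for machine learning on Apple Silicon.",
    "OpenELM is a family of efficient open language models from Apple.",
    "RAG augments a prompt with retrieved context before generation.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    # Rank documents by naive keyword overlap with the query.
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)[:k]

# Model id is an assumption; substitute any MLX-converted OpenELM checkpoint.
model, tokenizer = load("mlx-community/OpenELM-270M-Instruct")

question = "What does RAG add to an on-device LLM?"
context = "\n".join(retrieve(question, DOCS))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Generation runs entirely on-device; no data leaves the machine.
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))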

Features

  • Local processing: Run the LLM entirely on your Apple Silicon device
  • Privacy-focused: No data leaves your machine
  • Efficient: Optimized for Apple Silicon using the MLX framework
  • Grounded generation: Uses RAG to augment prompts with context retrieved from local documents

Prerequisites

  • Apple Silicon Mac
  • macOS 13.5 or later
  • Python 3.8 or later (a quick check is sketched below)
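
  • One way to confirm these prerequisites from Python, using only the standard library:

import platform

print(platform.machine())          # 'arm64' on Apple Silicon
print(platform.mac_ver()[0])       # macOS version, e.g. '13.5'
print(platform.python_version())   # should be 3.8 or newer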

Installation

  • A minimal setup sketch; the package names below come from the upstream MLX, mlx-lm, and Open WebUI projects, and their installation details may change:

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install the MLX framework and the mlx-lm model utilities
pip install mlx mlx-lm

# Install Open WebUI for the chat interface (pip or Docker; see its documentation)
pip install open-webui

Usage

  • A minimal sketch using the mlx-lm package; the model identifier is an assumption, so substitute any MLX-converted OpenELM checkpoint:

from mlx_lm import load, generate

# Load an MLX-converted OpenELM checkpoint (model id assumed)
model, tokenizer = load("mlx-community/OpenELM-270M-Instruct")

# Generation runs entirely on-device
print(generate(model, tokenizer, prompt="Explain Apple Silicon in one sentence.", max_tokens=128))
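
  • The mlx-lm package also ships a command-line entry point that should behave equivalently (same assumed model id):

python -m mlx_lm.generate --model mlx-community/OpenELM-270M-Instruct --prompt "Explain Apple Silicon in one sentence."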

Contributing

  • Contributions are welcome! Please see the Contributing Guidelines for details.

License

  • This project is licensed under the MIT License.

Acknowledgements

  • MLX
  • OpenELM
  • OpenWebUI
  • Ollama

Contact