llm

A Nextcloud app that packages a large language model (Llama2 / GPT4All Falcon)


A large language model in Nextcloud

Note: This app is deprecated and no longer being maintained. Its successor is https://github.com/nextcloud/llm2

This app ships a TextProcessing provider backed by a large language model that runs locally on the CPU.

The models run completely on your machine. No private data leaves your servers.

Models:

  • Llama 2
  • GPT4All Falcon

Requirements:

  • x86 CPU (with support for AVX instructions)
  • GNU lib C (musl is not supported)
  • Python 3.10+ (including python-venv)
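The requirements above can be sanity-checked from a shell before installing. A minimal sketch, assuming a Linux host (paths such as /proc/cpuinfo are Linux-specific):

```shell
# Check the requirements above; Linux-specific paths assumed.
checks=""
grep -q avx /proc/cpuinfo 2>/dev/null \
  && checks="$checks AVX:ok" || checks="$checks AVX:missing"
ldd --version 2>&1 | head -n1 | grep -qi musl \
  && checks="$checks libc:musl-unsupported" || checks="$checks libc:ok"
python3 -c 'import sys; assert sys.version_info >= (3, 10)' 2>/dev/null \
  && checks="$checks python:ok" || checks="$checks python:too-old-or-missing"
python3 -m venv --help >/dev/null 2>&1 \
  && checks="$checks venv:ok" || checks="$checks venv:missing"
echo "$checks"
```

If any check reports a problem, the app will fail to run its model backend on that machine.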

Nextcloud All-in-One:

With Nextcloud AIO, this app will not work because AIO uses musl. However, you can use this community container as a replacement for this app.

Ethical AI Rating

Rating: 🟢

Positive:

  • the software for training and inference of this model is open source
  • the trained model is freely available, and thus can be run on-premises
  • the training data is freely available, making it possible to check or correct for bias or optimise the performance and CO2 usage.

Learn more about the Nextcloud Ethical AI Rating in our blog.

Install

Make sure to have the submodules checked out:

git submodule update --init

Place this app in nextcloud/apps/
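Put together, a manual install might look like the following. This is a sketch under assumptions: the clone URL, the Nextcloud root path, the web-server user, and the app id llm are not confirmed by this README and may differ on your setup.

```shell
# Hypothetical paths, URL, and app id; adjust for your installation.
cd /var/www/nextcloud/apps
git clone https://github.com/nextcloud/llm.git
cd llm
git submodule update --init
# Enable the app via occ, run from the Nextcloud root as the web-server user:
sudo -u www-data php /var/www/nextcloud/occ app:enable llm
```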

Building the app

The app can be built with the provided Makefile by running:

make

This requires the following tools to be present:

  • make
  • which
  • tar: for building the archive
  • curl: used if phpunit and composer are not installed to fetch them from the web
  • npm: for building and testing everything JS, only required if a package.json is placed inside the js/ folder
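A quick way to confirm those tools are available before running make (a sketch; the tool names are taken from the list above):

```shell
# Report which build prerequisites are available on PATH.
missing=0
for tool in make which tar curl npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```

Note that npm only needs to be present if a package.json exists inside the js/ folder.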