Note: This app is deprecated and is no longer maintained. Its successor is https://github.com/nextcloud/llm2
This app ships a TextProcessing provider backed by a Large Language Model that runs locally on the CPU.
The models run entirely on your machine; no private data leaves your servers.
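Once installed, the provider can be exercised through Nextcloud's TextProcessing OCS API. The following is a minimal sketch, assuming a Nextcloud 27+ server at cloud.example.com and valid credentials; the host, credentials, app id and identifier are placeholders, so check the Text processing API documentation for the exact routes and task types available on your version:

```
# Schedule a summary task with the local provider (sketch; adjust to your setup).
curl -u admin:password \
  -H "OCS-APIRequest: true" \
  -H "Accept: application/json" \
  'https://cloud.example.com/ocs/v2.php/textprocessing/schedule' \
  --data-urlencode 'input=Long text to summarize goes here' \
  --data-urlencode 'type=OCP\TextProcessing\SummaryTaskType' \
  --data-urlencode 'appId=my_client_app' \
  --data-urlencode 'identifier=task-1'
```

The response contains a task id that can then be polled via `GET /ocs/v2.php/textprocessing/task/{id}` until the local model has produced a result.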
Models:
- Llama 2 by Meta
  - Languages: English
  - License: LLAMA 2 Community License
- GPT4All Falcon by Nomic AI
  - Languages: English
  - License: Apache License 2.0
- Leo HessianAI by LAION LeoLM
  - Languages: English/German
  - License: LLAMA 2 Community License
Requirements (a quick way to verify them is sketched after this list):
- x86 CPU (with support for AVX instructions)
- GNU libc (musl is not supported)
- Python 3.10+ (including python-venv)
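A minimal sketch for checking these requirements on the server; the exact output formats vary between distributions:

```
# Check for AVX support, a glibc-based system and a recent Python.
grep -q avx /proc/cpuinfo && echo "AVX: ok" || echo "AVX: missing"
ldd --version | head -n 1            # should mention glibc/GNU libc, not musl
python3 --version                    # should report 3.10 or newer
python3 -m venv --help > /dev/null && echo "python-venv: ok"
```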
This app will not work with Nextcloud AIO, because AIO uses musl. However, you can use this community container as a replacement for this app.
Positive:
- the software for training and inference of these models is open source
- the trained models are freely available, and thus can be run on-premises
- the training data is freely available, making it possible to check or correct for bias or to optimise performance and CO2 usage.
Learn more about the Nextcloud Ethical AI Rating in our blog.
Make sure to have the submodules checked out:

```
git submodule update --init
```
Place this app in nextcloud/apps/
The app can be built using the provided Makefile by running:

```
make
```
This requires the following tools to be present (an end-to-end build sketch follows this list):
- make
- which
- tar: for building the archive
- curl: used to fetch phpunit and composer from the web if they are not installed
- npm: for building and testing everything JS, only required if a package.json is placed inside the js/ folder
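Putting the steps together, a minimal end-to-end sketch; the clone URL and the `llm` app id are assumptions inferred from the successor repository's naming, and the paths assume a standard installation under /var/www/nextcloud:

```
# Fetch the app into the Nextcloud apps directory, build it and enable it.
cd /var/www/nextcloud/apps
git clone https://github.com/nextcloud/llm.git llm   # assumed repo URL
cd llm
git submodule update --init
make
cd /var/www/nextcloud
sudo -u www-data php occ app:enable llm              # assumed app id
```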