
Speech, Language, Audio, Music Processing with Large Language Model


SLAM-LLM

SLAM-LLM is a deep learning toolkit that allows researchers and developers to train custom multimodal large language models (MLLMs), focusing on Speech, Language, Audio, and Music processing. We provide detailed recipes for training and high-performance checkpoints for inference.



Table of Contents

  1. News
  2. Installation
  3. Usage
  4. Features
  5. Acknowledgements

News

Installation

```bash
# install transformers v4.35.2 from source
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout tags/v4.35.2
pip install -e .
cd ..

# install peft v0.6.0 from source
git clone https://github.com/huggingface/peft.git
cd peft
git checkout tags/v0.6.0
pip install -e .
cd ..

# install PyTorch 2.0.1 built for CUDA 11.8
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118

# install SLAM-LLM in editable mode
git clone https://github.com/ddlBoJack/SLAM-LLM.git
cd SLAM-LLM
pip install -e .
```
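After installing, you can confirm that the pinned dependency versions resolved correctly. The snippet below is a minimal sanity-check sketch (the `check` helper is illustrative, not part of SLAM-LLM), using only the standard library:

```python
# Sanity check: confirm the pinned packages resolve to the expected
# versions after the editable installs above.
from importlib.metadata import version, PackageNotFoundError

# Versions pinned by the installation steps above.
EXPECTED = {
    "transformers": "4.35.2",
    "peft": "0.6.0",
    "torch": "2.0.1",
}

def check(pkg: str, want: str) -> str:
    """Return a one-line status string for an installed package."""
    try:
        got = version(pkg)
    except PackageNotFoundError:
        return f"{pkg}: NOT INSTALLED (expected {want})"
    status = "ok" if got.startswith(want) else f"MISMATCH (expected {want})"
    return f"{pkg}: {got} {status}"

if __name__ == "__main__":
    for pkg, want in EXPECTED.items():
        print(check(pkg, want))
```

If any line reports a mismatch, re-run the corresponding `git checkout` and `pip install -e .` steps.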

Some examples require fairseq; install it as follows:

```bash
# you need to install fairseq before SLAM-LLM
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```

We also provide a Docker image for convenience:

```bash
# build the docker image
docker build -t slam-llm:latest .

# run the docker image with GPU access
docker run -it --gpus all --name slam --shm-size=256g slam-llm:latest /bin/bash
```

Usage

List of Recipes

We provide reference implementations of various LLM-based speech, audio, and music tasks:

Configuration Priority

We provide hierarchical configuration inheritance relationships as follows:

command-line (shell file) > Hydra configuration (yaml file) > dataclass configuration (Python file)
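The precedence above can be sketched as a layered merge, where each layer overrides the one below it. The `TrainConfig` dataclass and `resolve` helper here are illustrative names, not SLAM-LLM's actual code:

```python
# Sketch of the precedence: command-line > Hydra yaml > dataclass defaults.
from dataclasses import dataclass, asdict

@dataclass
class TrainConfig:
    lr: float = 1e-4       # dataclass default (lowest priority)
    batch_size: int = 4    # dataclass default

def resolve(defaults: TrainConfig, yaml_cfg: dict, cli_cfg: dict) -> dict:
    """Merge the three configuration layers, later layers winning."""
    merged = asdict(defaults)
    merged.update(yaml_cfg)  # yaml file overrides dataclass defaults
    merged.update(cli_cfg)   # command line overrides everything
    return merged

cfg = resolve(TrainConfig(), {"lr": 5e-5}, {"batch_size": 8})
# lr comes from the yaml layer, batch_size from the command line
```

In practice Hydra performs this composition for you; the point is that a value set on the command line always wins over the yaml file, which in turn wins over the dataclass default.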

Features

  • Easily extend to new models and tasks.
  • Detailed recipes for training and high-performance checkpoints for inference.
  • Mixed precision training, which runs faster with less GPU memory on NVIDIA Tensor Cores.
  • Multi-GPU training with data and model parallelism, supporting DDP, FSDP, and DeepSpeed (still being improved).
  • Flexible configuration based on Hydra and dataclasses, allowing a combination of code, command-line, and file-based configuration.

Acknowledgements

  • We borrow code from Llama-Recipes for the training process.
  • We borrow code from Fairseq for the DeepSpeed configuration.
  • We thank the contributors for providing diverse recipes.