
InstructLab 🐶 (ilab)


Welcome to the InstructLab CLI

InstructLab ๐Ÿถ uses a novel synthetic data-based alignment tuning method for Large Language Models (LLMs.) The "lab" in InstructLab ๐Ÿถ stands for Large-Scale Alignment for ChatBots [1].

[1] Shivchander Sudalairaj*, Abhishek Bhandwaldar*, Aldo Pareja*, Kai Xu, David D. Cox, Akash Srivastava*. "LAB: Large-Scale Alignment for ChatBots", arXiv preprint arXiv:2403.01081, 2024. (* denotes equal contributions)

โ“ What is ilab

ilab is a Command-Line Interface (CLI) tool that allows you to:

  1. Download a pre-trained Large Language Model (LLM).
  2. Chat with the LLM.

To add new knowledge and skills to the pre-trained LLM you have to add new information to the companion taxonomy repository. After that is done, you can:

  1. Use ilab to generate new synthetic training data based on the changes in your local taxonomy repository.
  2. Re-train the LLM with the new training data.
  3. Chat with the re-trained LLM to see the results.

The full process is described graphically in the workflow diagram.
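In command form, the full loop looks roughly like this (a condensed sketch of the steps detailed below; <new-model> stands for the model produced by training):

ilab download              # fetch a pre-trained model
ilab serve                 # serve it in one terminal
ilab chat                  # chat with it in a second terminal
# ...add new knowledge or skills to your local taxonomy repository...
ilab generate              # create synthetic training data from your changes
ilab train                 # re-train the model on the generated data
ilab chat -m <new-model>   # chat with the re-trained model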

Important

It is important to understand that running InstructLab on a laptop will give you a low-fidelity approximation of both synthetic data generation (using the ilab generate command) and model instruction tuning (using the ilab train command, which uses QLoRA). The quality of the results you get using these tools on a laptop will not be as high as it would be with a larger teacher model and a different training method. We have optimized InstructLab to enable community members with modest hardware to use the technique. If you have more sophisticated hardware, you can configure InstructLab to use a larger teacher model such as Mixtral in order to achieve higher-fidelity results.

📋 Requirements

  • ๐ŸŽ Apple M1/M2/M3 Mac or ๐Ÿง Linux system (tested on Fedora). We anticipate support for more operating systems in the future.
  • C++ compiler
  • Python 3.9+ (<3.12 for PyTorch JIT)
  • Approximately 60GB disk space (entire process)

NOTE: PyTorch 2.2.1 does not support torch.compile with Python 3.12. On Fedora 39+, install python3.11-devel and create the virtual env with python3.11 if you wish to use PyTorch's JIT compiler.
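For example, on Fedora 39+ this might look as follows (a sketch; the exact package names are an assumption, check your distribution):

sudo dnf install python3.11 python3.11-devel   # package names may vary by release
python3.11 -m venv --upgrade-deps venv
source venv/bin/activate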

✅ Getting started

🧰 Installing ilab

  1. When installing on Fedora Linux, install C++, Python 3.9+, and other necessary tools by running the following command:

    sudo dnf install g++ gcc make pip python3 python3-devel python3-GitPython

    Optional: If g++ is not found, try gcc-c++ by running the following command:

    sudo dnf install gcc-c++ gcc make pip python3 python3-devel python3-GitPython

    If you are running on macOS, this installation is not necessary; you can begin with the following step.

  2. Create a new directory called instructlab to store the files the ilab CLI needs when running and cd into the directory by running the following command:

    mkdir instructlab
    cd instructlab

    NOTE: The following steps in this document use Python venv for virtual environments. However, if you use another tool such as pyenv or Conda Miniforge to manage Python environments on your machine, continue to use that tool instead. Otherwise, you may have issues with packages that are installed but not found in venv.

  3. Install and activate your venv environment by running the following command:

    NOTE: โณ pip install may take some time, depending on your internet connection. In case installation fails with error unsupported instruction `vpdpbusd', append -C cmake.args="-DLLAMA_NATIVE=off" to pip install command.

    See the GPU acceleration documentation for how to enable hardware acceleration for inference and training on AMD ROCm, Apple Metal Performance Shaders (MPS), and Nvidia CUDA.

    To install with no GPU acceleration and PyTorch without CUDA bindings

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    (venv) $ pip cache remove llama_cpp_python
    (venv) $ pip install git+https://github.com/instructlab/instructlab.git@stable --extra-index-url=https://download.pytorch.org/whl/cpu

    To install with AMD ROCm

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    (venv) $ pip cache remove llama_cpp_python
    (venv) $ pip install git+https://github.com/instructlab/instructlab.git@stable \
        --extra-index-url https://download.pytorch.org/whl/rocm6.0 \
        -C cmake.args="-DLLAMA_HIPBLAS=on" \
        -C cmake.args="-DAMDGPU_TARGETS=all" \
        -C cmake.args="-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang" \
        -C cmake.args="-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++" \
        -C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm"

    On Fedora 40+, use -DCMAKE_C_COMPILER=clang-17 and -DCMAKE_CXX_COMPILER=clang++-17.

    To install with Apple Metal on M1/M2/M3 Mac

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    (venv) $ pip cache remove llama_cpp_python
    (venv) $ pip install git+https://github.com/instructlab/instructlab.git@stable -C cmake.args="-DLLAMA_METAL=on"

    To install with Nvidia CUDA

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    (venv) $ pip cache remove llama_cpp_python
    (venv) $ pip install git+https://github.com/instructlab/instructlab.git@stable -C cmake.args="-DLLAMA_CUBLAS=on"
  4. From your venv environment, verify ilab is installed correctly by running the ilab command.

    ilab

    Example output

    (venv) $ ilab
    Usage: ilab [OPTIONS] COMMAND [ARGS]...
    
    CLI for interacting with InstructLab.
    
    If this is your first time running InstructLab, it's best to start with `ilab init` to create the environment.
    
    Options:
    --config PATH  Path to a configuration file.  [default: config.yaml]
    --version      Show the version and exit.
    --help         Show this message and exit.
    
    Commands:
    chat      Run a chat using the modified model
    check     (Deprecated) Check that taxonomy is valid
    convert   Converts model to GGUF
    diff      Lists taxonomy files that have changed since <taxonomy-base>...
    download  Download the model(s) to train
    generate  Generates synthetic data to enhance your example data
    init      Initializes environment for InstructLab
    list      (Deprecated) Lists taxonomy files that have changed since <taxonomy-base>.
    serve     Start a local server
    test      Runs basic test to ensure model correctness
    train     Takes synthetic data generated locally with `ilab generate`...

    IMPORTANT: every ilab command needs to be run from within your Python virtual environment. To enter the Python environment, run the following command:

    source venv/bin/activate

๐Ÿ—๏ธ Initialize ilab

  1. Initialize ilab by running the following command:

    ilab init

    Example output

    Welcome to InstructLab CLI. This guide will help you set up your environment.
    Please provide the following values to initiate the environment [press Enter for defaults]:
    Path to taxonomy repo [taxonomy]: <ENTER>
  2. When prompted by the interface, press Enter to add a new default config.yaml file.

  3. When prompted, clone the https://github.com/instructlab/taxonomy.git repository into the current directory by typing y.

    Optional: If you want to point to an existing local clone of the taxonomy repository, you can pass the path interactively or alternatively with the --taxonomy-path flag.
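    For example (the path shown is illustrative):

    ilab init --taxonomy-path ~/instructlab/taxonomy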

    Example output

    (venv) $ ilab init
    Welcome to InstructLab CLI. This guide will help you set up your environment.
    Please provide the following values to initiate the environment [press Enter for defaults]:
    Path to taxonomy repo [taxonomy]: <ENTER>
    `taxonomy` seems to not exists or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [y/N]: y
    Cloning https://github.com/instructlab/taxonomy.git...
    Generating `config.yaml` in the current directory...
    Initialization completed successfully, you're ready to start using `ilab`. Enjoy!

    ilab will use the default configuration file unless otherwise specified. You can override this behavior with the --config parameter for any ilab command.
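    For example, to run a command against a configuration file outside the current directory (the path is illustrative):

    ilab --config /path/to/config.yaml serve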

    The taxonomy repository uses submodules to incorporate the taxonomy schema. When the ilab init command clones the taxonomy repository, it automatically handles the submodules. If you clone the taxonomy repository yourself, be sure to use the --recurse-submodules option on the git clone command and the git pull command when pulling updates from the remote repository. For example:

    git clone --recurse-submodules https://github.com/instructlab/taxonomy.git
    git pull --recurse-submodules
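    If you already cloned the taxonomy repository without submodules, standard git can fetch them after the fact:

    git submodule update --init --recursive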

📥 Download the model

  • Run the ilab download command.

    ilab download

    ilab download will download a pre-trained model (~4.4 GB) from Hugging Face and store it in a models directory:

    (venv) $ ilab download
    Downloading model from instructlab/merlinite-7b-lab-GGUF@main to models...
    (venv) $ ls models
    merlinite-7b-lab-Q4_K_M.gguf

    NOTE โณ This command can take few minutes or immediately depending on your internet connection or model is cached. If you have issues connecting to Hugging Face, refer to the Hugging Face discussion forum for more details.

๐Ÿด Serving the model

  • Serve the model by running the following command:

    ilab serve

    Once the model is served and ready, you'll see the following output:

    (venv) $ ilab serve
    INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/ggml-merlinite-7b-lab-Q4_K_M.gguf' with -1 gpu-layers and 4096 max context size.
    Starting server process
    After application startup complete see http://127.0.0.1:8000/docs for API.
    Press CTRL+C to shut down the server.

    NOTE: If multiple ilab clients try to connect to the same InstructLab server at the same time, the first will connect to the server while the others will each start their own temporary server. This requires additional resources on the host machine.
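    Optional: To serve a specific model file explicitly, you can pass the --model-path flag (the same flag used later for the re-trained model), for example:

    ilab serve --model-path models/merlinite-7b-lab-Q4_K_M.gguf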

📣 Chat with the model (Optional)

Because you're serving the model in one terminal window, you will have to create a new window and re-activate your Python virtual environment to run the ilab chat command:

source venv/bin/activate
ilab chat

Before you start adding new skills and knowledge to your model, you can check its baseline performance by asking it a question such as "what is the capital of Canada?".

NOTE: The model needs to be trained with the generated synthetic data before it can use new skills or knowledge.

(venv) $ ilab chat
╭──────────────────────────────────────────────── system ────────────────────────────────────────────────╮
│ Welcome to InstructLab Chat w/ GGML-MERLINITE-7B-lab-Q4_K_M (type /h for help)                          │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────╯
>>> what is the capital of Canada                                                             [S][default]
╭─────────────────────────────────────── ggml-merlinite-7b-lab-Q4_K_M ───────────────────────────────────╮
│ The capital city of Canada is Ottawa. It is located in the province of Ontario, on the southern banks  │
│ of the Ottawa River in the eastern portion of southern Ontario. The city serves as the political       │
│ center for Canada, as it is home to Parliament Hill, which houses the House of Commons, Senate,        │
│ Supreme Court, and Cabinet of Canada. Ottawa has a rich history and cultural significance, making it   │
│ an essential part of Canada's identity.                                                                │
╰────────────────────────────────────────────────────────────────────────────── elapsed 12.008 seconds ──╯
>>>                                                                                           [S][default]

💻 Creating new knowledge or skills and training the model

🎁 Contribute knowledge or compositional skills

  1. Contribute new knowledge or compositional skills to your local taxonomy repository.

Detailed contribution instructions can be found in the taxonomy repository.

Important

There is a limit to how much content can exist in the question/answer pairs for the model to process. Due to this, only add a maximum of around 2300 words to your question and answer seed example pairs in the qna.yaml file.
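For orientation, each seed example in the qna.yaml file is a question/answer pair. The sketch below is illustrative only; the directory, field names, and contents are assumptions, and the taxonomy repository documents the authoritative schema:

mkdir -p taxonomy/compositional_skills/writing/freeform/foo-lang
cat > taxonomy/compositional_skills/writing/freeform/foo-lang/qna.yaml <<'EOF'
created_by: your-github-username   # illustrative; see the taxonomy docs
seed_examples:
  - question: Write a two-line greeting in Foo-lang.
    answer: |
      Foo bar!
      Baz qux to you.
EOF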

📜 List and validate your new data

  1. List your new data by running the following command:

    ilab diff
  2. To confirm that ilab has registered your new knowledge or skills, review the output of the ilab diff command. The following is the expected result after adding the new compositional skill foo-lang:

    (venv) $ ilab diff
    compositional_skills/writing/freeform/foo-lang/foo-lang.yaml
    Taxonomy in /taxonomy/ is valid :)

🚀 Generate a synthetic dataset

Before following these instructions, ensure the existing model you are adding skills or knowledge to is still being served.

  1. To generate a synthetic dataset based on your newly added knowledge or skills in the taxonomy repository, run the following command:

    ilab generate

    NOTE: โณ This can take from 15 minutes to 1+ hours to complete, depending on your computing resources.

    Example output

    (venv) $ ilab generate
    INFO 2024-02-29 19:09:48,804 lab.py:250 Generating model 'ggml-merlinite-7b-lab-Q4_K_M' using 10 CPUs,
    taxonomy: '/home/username/instructlab/taxonomy' and seed 'seed_tasks.json'
    
    0%|##########| 0/100 Cannot find prompt.txt. Using default prompt.
    98%|##########| 98/100 INFO 2024-02-29 20:49:27,582 generate_data.py:428 Generation took 5978.78s

    The synthetic dataset will consist of three files in the newly created generated directory, named generated*.json, test*.jsonl, and train*.jsonl.

Note

If you want to pick up where a failed or canceled ilab generate left off, you can copy the generated*.json file into a file named regen.json. regen.json will be picked up at the start of ilab generate when available. You should remove it when the process is completed.
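For example (this assumes a single generated_*.json file from the interrupted run, copied into your working directory):

cp generated/generated_*.json regen.json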

  2. Verify the files have been created by running the ls generated command.

    (venv) $ ls generated/
    'generated_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.json'   'train_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.jsonl'
    'test_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.jsonl'

    Optional: It is also possible to run the generate step against a different model via an OpenAI-compatible API. For example, the one spawned by ilab serve or any remote or locally hosted LLM (e.g. via ollama, LM Studio, etc.). Run the following command:

    ilab generate --endpoint-url http://localhost:8000/v1

👩‍🏫 Train the model

There are four options for training the model on your synthetic data-enhanced dataset.

Note: Every ilab command needs to be run from within your Python virtual environment.

Train the model locally on Linux

ilab train

NOTE: โณ This step can potentially take several hours to complete depending on your computing resources. Please stop ilab chat and ilab serve first to free resources.

ilab train outputs a brand-new model named ggml-model-f16.gguf in the models directory; this model can then be served.

 (venv) $ ls models
 ggml-merlinite-7b-lab-Q4_K_M.gguf  ggml-model-f16.gguf

Train the model locally on an M-series Mac

To train the model locally on your M-series Mac, run:

ilab train

Note: โณ This process will take a little while to complete (time can vary based on hardware and output of ilab generate but on the order of 5 to 15 minutes)

ilab train outputs a brand-new model, adapters.npz (in NumPy compressed array format), saved in the <model_name>-mlx-q directory. For example:

(venv) $ ls instructlab-merlinite-7b-lab-mlx-q
adapters-010.npz        adapters-050.npz        adapters-090.npz        config.json             tokenizer.model
adapters-020.npz        adapters-060.npz        adapters-100.npz        model.safetensors       tokenizer_config.json
adapters-030.npz        adapters-070.npz        adapters.npz            special_tokens_map.json
adapters-040.npz        adapters-080.npz        added_tokens.json       tokenizer.json

Training the model locally with GPU acceleration

Training has experimental support for GPU acceleration with Nvidia CUDA or AMD ROCm. Please see the GPU acceleration documentation for more details. At present, hardware acceleration requires a data center GPU or high-end consumer GPU with at least 18 GB free memory.

ilab train --device=cuda

Training the model in the cloud

Follow the instructions in Training.

โณ Approximate amount of time taken on each platform:

  • Google Colab: 5-10 minutes with a T4 GPU
  • Kaggle: ~30 minutes with a P100 GPU.

After that's done, you can play with your model directly in the Google Colab or Kaggle notebook. Models trained in the cloud are saved in the cloud; the model can also be downloaded and served locally.

📜 Test the newly trained model

  • Run the following command to test the model:

    ilab test

    NOTE: ๐ŸŽ This step is only implemented for macOS with M-series chips (for now)

    The output from the command will consist of a series of outputs from the model before and after training.

๐Ÿด Serve the newly trained model

  1. Stop the server you have running by pressing Ctrl+C in the terminal where it is running.

    IMPORTANT:

    • ๐ŸŽ This step is only implemented for macOS with M-series chips (for now).

    • Before serving the newly trained model, you must convert it to work with the ilab CLI. The ilab convert command converts the new model into quantized GGUF format, which the server requires to host the model via the ilab serve command.

  2. Convert the newly trained model by running the following command:

    ilab convert
  3. Serve the newly trained model locally via the ilab serve command with the --model-path argument to specify your new model:

    ilab serve --model-path <New model name>

    Which model should you select to serve? After running the ilab convert command, a few files and directories are generated. The one you will want to serve will end in .gguf and will exist in a directory with the suffix fused-pt. For example: instructlab-merlinite-7b-lab-mlx-q-fused-pt/ggml-model-Q4_K_M.gguf
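    To serve it, reuse that path with the --model-path argument:

    ilab serve --model-path instructlab-merlinite-7b-lab-mlx-q-fused-pt/ggml-model-Q4_K_M.gguf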

📣 Chat with the new model (not optional this time)

  • Try the fine-tuned model out live using the chat interface, and see if the results are better than those from the untrained version of the model, by running the following command:

    ilab chat -m <New model name>
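    For example, using the converted model from the previous section:

    ilab chat -m instructlab-merlinite-7b-lab-mlx-q-fused-pt/ggml-model-Q4_K_M.gguf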

    If you are interested in optimizing the quality of the model's responses, please see TROUBLESHOOTING.md.

🎁 Submit your new knowledge or skills

The final step is, if you've improved the model, to open a pull request in the taxonomy repository that includes the files (e.g., qna.yaml) with your improved data.

📬 Contributing

Check out our contributing guide to learn how to contribute.