# llm.sh

`llm.sh` is a Bash script that automates the initialization and setup required to run a machine learning model locally. This includes initializing the GPU, XPU, or CPU, setting up a Python virtual environment, running the install scripts for llama.cpp and the UI, and starting a UI server.

## Prerequisites

- Bash
- Python 3
- Node.js and npm
- NVIDIA or Intel drivers (if applicable)
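
A quick way to verify the prerequisites before running the script is sketched below. This is only a convenience check and not part of `llm.sh` itself; the driver probes mirror the `nvidia-smi`/`xpuinfo` checks that `detect_system_type` performs later in this README.

```bash
# Pre-flight check (not part of llm.sh): verify the required tools are on PATH.
for tool in bash python3 node npm; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done

# Driver tooling is only needed on GPU/XPU systems; llm.sh falls back to CPU otherwise.
command -v nvidia-smi >/dev/null 2>&1 && echo "NVIDIA GPU tooling found"
command -v xpuinfo    >/dev/null 2>&1 && echo "Intel XPU tooling found"
```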

## Functions

### gpuinit
Initializes the GPU by running the `install-nvidia.sh` script.
```bash
gpuinit() {
    ./scripts/install-nvidia.sh
}
```

### xpuinit
Initializes the XPU by running the `install-intel.sh` script.
```bash
xpuinit() {
    ./scripts/install-intel.sh
}
```

### cpuinit
Initializes the CPU by running the `install-cpu.sh` script.
```bash
cpuinit() {
    ./scripts/install-cpu.sh
}
```

### llamacppinit
Installs llama.cpp from source by running the `install-llamacpp.sh` script, forwarding any arguments it receives.
```bash
llamacppinit() {
    ./scripts/install-llamacpp.sh "$@"
}
```

### uiinit
Initializes the UI by running the `install-ui.sh` script.
```bash
uiinit() {
    ./scripts/install-ui.sh
}
```

### start_ui_server
Starts the UI server by changing into the `nextjs-vllm-ui` submodule and running `npm run start`.
```bash
start_ui_server() {
    # Bail out if the submodule is missing rather than running npm in the wrong directory.
    cd ./submodules/nextjs-vllm-ui || return 1
    npm run start
}
```

### detect_system_type
Detects the system type (GPU, XPU, or CPU) and initializes the appropriate components.
```bash
detect_system_type() {
    if command -v nvidia-smi &> /dev/null; then
        echo "GPU detected"
        gpuinit
    elif command -v xpuinfo &> /dev/null; then
        echo "XPU detected"
        xpuinit
    else
        echo "CPU detected"
        cpuinit
    fi
}
```

### setup_venv
Creates (if needed) and activates a Python virtual environment in `~/llmpro`, then upgrades pip.
```bash
setup_venv() {
    # VENV_DIR is expected to be set by the script (per the description above, ~/llmpro).
    if [ ! -d "$VENV_DIR" ]; then
        python3 -m venv "$VENV_DIR"
        echo "Virtual environment created at $VENV_DIR"
    fi
    source "$VENV_DIR/bin/activate"
    python3 -m pip install --upgrade pip
}
```
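
The function above relies on `VENV_DIR` being defined elsewhere in `llm.sh`. If you reuse it on its own, you will need a definition along these lines; the `~/llmpro` path comes from the description above, while the default-expansion form is an assumption rather than the script's actual line:

```bash
# Assumed definition -- llm.sh may set this differently.
VENV_DIR="${VENV_DIR:-$HOME/llmpro}"
```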

### initialize_config
Reads or initializes the configuration: on first run it sets up the virtual environment, detects the system type, and initializes the required components; on subsequent runs it activates the existing environment and sources the config file.
```bash
initialize_config() {
    CONFIG_FILE=llmconfig
    if [ ! -f "$CONFIG_FILE" ]; then
        setup_venv
        detect_system_type
        llamacppinit
        uiinit
        touch "$CONFIG_FILE"
        echo "envs_initialized=true" > "$CONFIG_FILE"
    else
        source "$VENV_DIR/bin/activate"
        source "$CONFIG_FILE"
    fi
}
```
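
After the first successful run, `llmconfig` acts as a marker that setup has already completed; based on the code above it contains a single line:

```bash
envs_initialized=true
```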

## Usage

To download and execute the script locally, follow these steps:

1. Download the script using `curl`:
   ```bash
   curl -O https://raw.githubusercontent.com/techcto/llm-pro-devops/main/llm.sh
   ```
2. Make the script executable:
   ```bash
   chmod +x llm.sh
   ```
3. Run the script with the appropriate options:
   - `-s` to start the UI server
   - `-m <model>` to specify the model

Example:

```bash
./llm.sh -s
```
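
Assuming the two options can be combined (which the option descriptions above suggest), a run that also selects a model would look like the following. The model name is purely a placeholder, since this README does not list the available models:

```bash
# Hypothetical model name -- replace with a model available in your setup.
./llm.sh -s -m llama-3-8b
```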

## License

This project is licensed under the MIT License - see the LICENSE file for details.