
AutoGGUF - automated GGUF model quantizer


AutoGGUF provides a graphical user interface for quantizing GGUF models using the llama.cpp library. It allows users to download different versions of llama.cpp, manage multiple backends, and perform quantization tasks with various options.

Features

  • Download and manage llama.cpp backends
  • Select and quantize GGUF models
  • Configure quantization parameters
  • Monitor system resources during quantization
  • Parallel quantization + imatrix generation
  • LoRA conversion and merging
  • Preset saving and loading
  • AutoFP8 quantization
  • GGUF splitting
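Under the hood, quantization is delegated to llama.cpp's quantize tool. A minimal sketch of what such an invocation looks like from Python (the binary and file names are illustrative examples, not AutoGGUF's actual internals):

```python
import subprocess

def build_quantize_cmd(quantize_bin: str, src: str, dst: str,
                       qtype: str = "Q4_K_M") -> list[str]:
    """Build the argument list for llama.cpp's quantize tool."""
    return [quantize_bin, src, dst, qtype]

def quantize(quantize_bin: str, src: str, dst: str, qtype: str = "Q4_K_M") -> int:
    """Run the quantization and return the process exit code."""
    return subprocess.run(build_quantize_cmd(quantize_bin, src, dst, qtype)).returncode

print(build_quantize_cmd("llama-quantize", "model-f16.gguf", "model-Q4_K_M.gguf"))
# → ['llama-quantize', 'model-f16.gguf', 'model-Q4_K_M.gguf', 'Q4_K_M']
```

AutoGGUF's GUI exposes these parameters (quantization type, input/output paths) as form fields rather than requiring the command line directly.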

Usage

Cross-platform

  1. Install dependencies:
    pip install -r requirements.txt
    
  2. Run the application:
    python src/main.py
    
    or use the run.bat script.

macOS and Ubuntu builds are produced by GitHub Actions; you can download the binaries from the releases section.

Windows

Standard builds:

  1. Download the latest release
  2. Extract all files to a folder
  3. Run AutoGGUF-x64.exe

Setup builds:

  1. Download the setup variant of the latest release
  2. Extract all files to a folder
  3. Run the setup program
  4. The .GGUF extension will be registered with the program automatically
  5. Run the program from the Start Menu or desktop shortcuts

After launching the program, you can access its local server at port 7001 (set the AUTOGGUF_SERVER environment variable to "enabled" before launch).
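The environment-variable opt-in can be scripted as follows; this is a sketch only, and the endpoint paths the server exposes are not documented here:

```python
import os
import urllib.request

# Opt in to the local server before launching AutoGGUF:
os.environ["AUTOGGUF_SERVER"] = "enabled"

PORT = 7001
BASE_URL = f"http://localhost:{PORT}"

def fetch(url: str = BASE_URL) -> str:
    """Fetch a response from the local server (only works while the app is running)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

print(BASE_URL)  # → http://localhost:7001
```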

Verifying Releases

Linux/macOS:

gpg --import AutoGGUF-v1.5.0-prerel.asc
gpg --verify AutoGGUF-v1.5.0-Windows-avx2-prerel.zip.sig AutoGGUF-v1.5.0-Windows-avx2-prerel.zip
sha256sum -c AutoGGUF-v1.5.0-prerel.sha256

Windows (PowerShell):

# Import the public key
gpg --import AutoGGUF-v1.5.0-prerel.asc

# Verify the signature
gpg --verify AutoGGUF-v1.8.1-Windows-avx2.zip.sig AutoGGUF-v1.8.1-Windows-avx2.zip

# Check SHA256
$fileHash = (Get-FileHash -Algorithm SHA256 AutoGGUF-v1.8.1-Windows-avx2.zip).Hash.ToLower()
$storedHash = (Get-Content AutoGGUF-v1.8.1.sha256 | Select-String AutoGGUF-v1.8.1-Windows-avx2.zip).Line.Split()[0]
if ($fileHash -eq $storedHash) { "SHA256 Match" } else { "SHA256 Mismatch" }

Release keys are identical to the ones used for commit signing.

Building

Cross-platform

pip install -U pyinstaller
./build.sh RELEASE | DEV
cd build/<type>/dist/
./AutoGGUF

Windows

build RELEASE | DEV

Find the executable in build/<type>/dist/AutoGGUF.exe.

You can also use the slower build but faster executable method (Nuitka):

build_optimized RELEASE | DEV

Dependencies

Find them in requirements.txt.

Localizations

View the list of supported languages at AutoGGUF/wiki/Installation#configuration (LLM translated, except for English).

To use a specific language, set the AUTOGGUF_LANGUAGE environment variable to one of the listed language codes. Note that some languages may not be fully supported yet; those fall back to English.
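The fallback behavior can be sketched like this; `resolve_language` is a hypothetical helper (not AutoGGUF's actual code), and the supported set shown is an illustrative subset of the wiki's list:

```python
import os

# Illustrative subset only; see the wiki page for the full list of language codes.
SUPPORTED_LANGUAGES = {"en_US", "fr_FR", "de_DE", "zh_CN"}

def resolve_language(default: str = "en_US") -> str:
    """Return the requested UI language, falling back to English when unsupported."""
    requested = os.environ.get("AUTOGGUF_LANGUAGE", default)
    return requested if requested in SUPPORTED_LANGUAGES else default

os.environ["AUTOGGUF_LANGUAGE"] = "fr_FR"
print(resolve_language())  # → fr_FR

os.environ["AUTOGGUF_LANGUAGE"] = "xx_XX"  # unsupported code
print(resolve_language())  # → en_US
```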

Issues

  • Some inconsistent logging

Planned Features

  • Time estimation for quantization
  • Quantization file size estimate
  • Perplexity testing
  • HuggingFace upload/download (coming in the next release)
  • bitsandbytes (coming soon)

Troubleshooting

  • "SSL module cannot be found" error: install OpenSSL, or run from source with python src/main.py or the run.bat script (requires pip install requests)

Contributing

Fork the repo, make your changes, and ensure you have the latest commits when merging. Include a changelog of new features in your pull request description. Read CONTRIBUTING.md for more information.

User Interface

Screenshot: AutoGGUF v1.8.1 interface showcase

Stargazers

Star History Chart