huggingface/hub-docs

Update the `GGUF usage with llama.cpp` doc page

julien-c opened this issue · 6 comments

there's now a way cleaner syntax than what's documented at https://huggingface.co/docs/hub/en/gguf-llamacpp for loading a model from HF in llama.cpp
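
For context, the "cleaner syntax" refers to llama.cpp's built-in Hugging Face download support, roughly along these lines (a minimal sketch, assuming a llama.cpp build recent enough to ship the `-hf` / `--hf-repo` flags; the repo, file, and quant names below are placeholder examples, not taken from this issue):

```sh
# One-liner: fetch the GGUF straight from the Hub and run it
# (exact flag support depends on the installed llama.cpp version)
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M -p "Hello"

# More explicit form, spelling out the repo and the file
llama-cli --hf-repo bartowski/Llama-3.2-1B-Instruct-GGUF \
  --hf-file Llama-3.2-1B-Instruct-Q4_K_M.gguf \
  -p "Hello"
```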

This is resolved by #1280, right?

Hmm, no, only partly I think.

https://huggingface.co/docs/hub/en/gguf-llamacpp is still sub-optimal imo.

cc @Vaibhavs10, and also cc @ngxson, who opened huggingface/huggingface.js#778, which surfaces the same command(s) "in-product" (on the Hub) rather than in a doc

Good catch, will open a PR today to update it and ask for everyone's reviews! 👍

Opened a PR to discuss: #1326

cc: @ngxson - I wasn't able to add you as a reviewer, but please feel free to review