Update the `GGUF usage with llama.cpp` doc page
julien-c opened this issue · 6 comments
julien-c commented
There's now a way cleaner syntax than the one documented at https://huggingface.co/docs/hub/en/gguf-llamacpp for loading a model from HF in llama.cpp.
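For reference, the cleaner syntax presumably refers to llama.cpp's built-in Hugging Face download support. A minimal sketch, assuming a recent llama.cpp build that supports the `-hf` (`--hf-repo`) flag; the repo and file names below are placeholder examples, not taken from this thread:

```sh
# Newer: let llama.cpp fetch the GGUF directly from the Hub
# (assumes a build where -hf / --hf-repo is available; repo name is an example)
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF -p "Hello"

# Older, more verbose flow: download the file first, then point -m at the local path
huggingface-cli download bartowski/Llama-3.2-1B-Instruct-GGUF \
  Llama-3.2-1B-Instruct-Q4_K_M.gguf --local-dir .
llama-cli -m ./Llama-3.2-1B-Instruct-Q4_K_M.gguf -p "Hello"
```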
julien-c commented
Hmm, no, only partly I think.
https://huggingface.co/docs/hub/en/gguf-llamacpp is still sub-optimal IMO.
cc @Vaibhavs10, and also cc @ngxson, who opened huggingface/huggingface.js#778, which covers the same command(s) but "in-product" (on the Hub) rather than in a doc.
Vaibhavs10 commented
Good catch, I'll open a PR today to update it and ask for everyone's reviews! 👍
Vaibhavs10 commented