Pinned Repositories
ai-evaluation-manifesto
create-symbolic-link
Create a symbolic link.
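For illustration, a minimal sketch of creating a symbolic link with Python's standard library (the repository's actual script and language are not shown here):

```python
import os

# Create a symbolic link named "link.txt" that points to "target.txt".
# On Windows this needs Developer Mode or an elevated shell; pass
# target_is_directory=True when linking to a directory.
os.symlink("target.txt", "link.txt")
```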
hedgehog-porcupine-softmax-mimicry
A PyTorch implementation of the paper "The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry" (https://arxiv.org/abs/2402.04347).
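For context, a minimal sketch of the core idea: linear attention with a trainable exponential feature map. This is a simplified, non-causal reading of the paper, not the repository's code, and the paper's exact feature-map parameterization may differ:

```python
import torch
import torch.nn as nn

class HedgehogFeatureMap(nn.Module):
    """Trainable feature map phi(x) = exp(Wx), trained to mimic softmax weights."""
    def __init__(self, head_dim: int, feature_dim: int):
        super().__init__()
        self.proj = nn.Linear(head_dim, feature_dim, bias=False)

    def forward(self, x):
        return torch.exp(self.proj(x))

def linear_attention(q, k, v, feature_map):
    # q, k, v: (batch, seq, head_dim); non-causal for brevity.
    phi_q = feature_map(q)                        # (B, S, F)
    phi_k = feature_map(k)                        # (B, S, F)
    kv = torch.einsum("bsf,bsd->bfd", phi_k, v)   # aggregate keys and values
    z = phi_k.sum(dim=1)                          # normalizer over keys
    num = torch.einsum("bsf,bfd->bsd", phi_q, kv)
    den = torch.einsum("bsf,bf->bs", phi_q, z).unsqueeze(-1)
    return num / (den + 1e-6)
```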
JiHa-Kim
Config files for my GitHub profile.
LLMLingua-implementation
I'm trying to get LLMLingua, Microsoft's prompt-compression library, to work.
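Basic usage, roughly as shown in the LLMLingua README (exact arguments and defaults may differ across versions):

```python
# pip install llmlingua
from llmlingua import PromptCompressor

long_prompt = "..."  # the long context to compress goes here

compressor = PromptCompressor()  # note: downloads a sizeable default model
result = compressor.compress_prompt(
    long_prompt,
    instruction="Answer the question using the context.",
    question="What are the key findings?",
    target_token=300,
)
print(result["compressed_prompt"])
```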
LLMs-self-prompt-creation
In this repository, I keep track of an experiment testing LLMs' ability to craft and self-improve system prompts, and whether doing so leads to better results.
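A hypothetical sketch of what one such self-improvement loop could look like; query_llm() is a stand-in for whatever chat API the experiment actually uses, not part of the repository:

```python
def query_llm(system_prompt: str, user_message: str) -> str:
    # Stand-in: replace with a real chat-completion call.
    return "(model response)"

def self_improve(task: str, rounds: int = 3) -> str:
    prompt = "You are a helpful assistant."
    for _ in range(rounds):
        answer = query_llm(prompt, task)
        prompt = query_llm(
            "You improve system prompts.",
            f"Task: {task}\nPrompt: {prompt}\nAnswer: {answer}\n"
            "Rewrite the system prompt so the answer improves. "
            "Reply with the new prompt only.",
        ).strip()
    return prompt
```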
neural-network-from-scratch
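For illustration, a minimal network of this kind in NumPy, with one hidden layer and manual backpropagation; a sketch of the concept, not the repository's code:

```python
import numpy as np

# One hidden layer, sigmoid activations, squared-error loss, gradient descent.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)   # XOR-like toy labels

W1, b1 = 0.5 * rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backprop: loss grad * sigmoid'
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)
```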
quantize-hf-models
Quantize LLMs from HuggingFace into the GGUF format (both standard and with imatrix) using a Colab notebook.
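The underlying steps, roughly, using llama.cpp's conversion and quantization tools (tool names and flags as in recent llama.cpp releases; older versions used convert.py and quantize, and paths here are assumptions):

```python
import subprocess

MODEL_DIR = "path/to/hf-model"  # assumption: a local HuggingFace snapshot

# 1. Convert the HF checkpoint to a high-precision GGUF file.
subprocess.run(["python", "convert_hf_to_gguf.py", MODEL_DIR,
                "--outfile", "model-f16.gguf", "--outtype", "f16"], check=True)

# 2. Optional: compute an importance matrix from calibration text.
subprocess.run(["./llama-imatrix", "-m", "model-f16.gguf",
                "-f", "calibration.txt", "-o", "imatrix.dat"], check=True)

# 3. Quantize, with or without the imatrix.
subprocess.run(["./llama-quantize", "--imatrix", "imatrix.dat",
                "model-f16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"], check=True)
```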
text-generation-webui-fork
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
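For example, querying a running instance through its OpenAI-compatible API (assuming the server was started with the --api flag; the port and route may vary by version):

```python
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```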
update-llama.cpp-windows
I created this script to update llama.cpp on Windows with CMake, because doing it manually is tedious.
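The manual steps such a script automates, sketched in Python (the clone path is an assumption; the repository's actual script is not shown here):

```python
import subprocess

REPO = r"C:\src\llama.cpp"  # assumption: local clone location

# Pull the latest sources, then reconfigure and rebuild with CMake.
subprocess.run(["git", "-C", REPO, "pull"], check=True)
subprocess.run(["cmake", "-B", "build"], cwd=REPO, check=True)
subprocess.run(["cmake", "--build", "build", "--config", "Release"],
               cwd=REPO, check=True)
```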