Pinned Repositories
BUAA-DL2021
Coursework for the 2021 Deep Learning course at BUAA
distilling-step-by-step
lit-llama
SH2
Code for the paper "SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully"
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
factor
Code and data for the FACTOR paper
DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
key-configuration-of-llms
cam
PyTorch implementation of our paper accepted at ICML 2024: "CaM: Cache Merging for Memory-efficient LLMs Inference"
0-KaiKai-0's Repositories
0-KaiKai-0/SH2
Code for the paper "SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully"
0-KaiKai-0/distilling-step-by-step
0-KaiKai-0/lit-llama
0-KaiKai-0/BUAA-DL2021
Coursework for the 2021 Deep Learning course at BUAA
0-KaiKai-0/stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.