krzz2q's Stars
hiyouga/LLaMA-Factory
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
dair-ai/ml-visuals
🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing.
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
guanyingc/latex_paper_writing_tips
Tips for Writing a Research Paper using LaTeX
MLGroupJLU/LLM-eval-survey
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
Libr-AI/OpenFactVerification
Loki: an open-source solution that automates factuality verification
HillZhang1999/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Kuingsmile/clash-core
backup of clash core
ThuCCSLab/Awesome-LM-SSP
A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
RUCAIBox/LLMBox
A comprehensive library for implementing LLMs, including a unified training pipeline and extensive model evaluation.
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models
HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
teacherpeterpan/self-correction-llm-papers
This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback.
freshllms/freshqa
Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214)
FreedomIntelligence/InstructionZoo
drmuskangarg/Multimodal-datasets
This repository is built in association with our position paper "Multimodality for NLP-Centered Applications: Resources, Advances and Frontiers". As part of this release we share information about recent multimodal datasets available for research purposes. We found that although 100+ multimodal language resources are described in the literature for various NLP tasks, publicly available multimodal datasets remain under-explored for reuse in subsequent problem domains.
IAAR-Shanghai/UHGEval
[ACL 2024] Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
mutonix/RefGPT
zjunlp/FactCHD
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
ICTMCG/LLM-for-misinformation-research
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.
LoryPack/LLM-LieDetector
Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions"
TalSchuster/FeverSymmetric
Symmetric evaluation set based on the FEVER (fact verification) dataset
yuxiaw/OpenFactCheck
open-compass/ANAH
[ACL 2024] ANAH: Analytical Annotation of Hallucinations in Large Language Models
AdrianBZG/SFAVEL
Code for "Unsupervised Pretraining for Fact Verification by Language Model Distillation" (ICLR 2024)
ict-bigdatalab/FER
jadeCurl/FFRR
Official implementation of paper "Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM"
znhy1024/JustiLM