AbnerAI
A PhD student at BNU focusing on designing intelligent computing models.
Beijing Normal University, Beijing
AbnerAI's Stars
shehper/sparse-dictionary-learning
An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning"
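The dictionary-learning approach described in that paper trains a sparse autoencoder over model activations. A minimal sketch of the idea (class names, dimensions, and the loss coefficient are illustrative assumptions, not taken from the repo):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty,
    in the spirit of dictionary learning (illustrative sketch)."""
    def __init__(self, d_model=512, d_dict=2048):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(f)           # reconstruction from the dictionary
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coef=1e-3):
    # reconstruction error + L1 penalty that encourages sparse features
    return ((x - x_hat) ** 2).mean() + l1_coef * f.abs().mean()

model = SparseAutoencoder()
acts = torch.randn(8, 512)               # stand-in for MLP activations
x_hat, f = model(acts)
loss = sae_loss(acts, x_hat, f)
```

The overcomplete dictionary (d_dict > d_model) plus the sparsity penalty is what lets individual features become interpretable.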
GeWu-Lab/MMPareto_ICML2024
The repo for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance", ICML 2024
xiaoman-zhang/PMC-VQA
PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images that cover various modalities and diseases.
wayveai/LingoQA
[ECCV 2024] Official GitHub repository for the paper "LingoQA: Visual Question Answering for Autonomous Driving"
ChnQ/DEAN
nerfies/nerfies.github.io
Thartvigsen/GRACE
[NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
wangywUST/DeepEdit
Repository for our paper "DeepEdit: Knowledge Editing as Decoding with Constraints". https://arxiv.org/abs/2401.10471
kmeng01/rome
Locating and editing factual associations in GPT (NeurIPS 2022)
renqibing/ActorAttack
hiyouga/FastEdit
🩹Editing large language models within 10 seconds⚡
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
VITA-Group/Junk_DNA_Hypothesis
[ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, Souvik Kundu, Zhangyang Wang
andy1764/CovBat_Harmonization
Correcting Covariance Batch Effects (CovBat): Harmonization of mean and covariance for multi-site data
Jfortin1/ComBatHarmonization
Harmonization of multi-site imaging data with ComBat
rasbt/LLMs-from-scratch
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
lllyasviel/ControlNet
Let us control diffusion models!
lafeat/advdiffuser
AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models (ICCV 2023)
eseckel/ai-for-grant-writing
A curated list of resources for using LLMs to develop more competitive grant applications.
renqibing/CodeAttack
[ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
dinobby/MAgICoRE
PKU-Alignment/omnisafe
[JMLR] OmniSafe is an infrastructural framework for accelerating SafeRL research.
chenzomi12/Deep-Reinforcement-Learning
Code for the book "Deep Reinforcement Learning: Principles and Practices" (《深度强化学习:原理与实践》)
maidacundo/MoE-LoRA
Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRA adapters into the FFN.
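The idea of routing between LoRA adapters injected alongside a frozen FFN can be sketched as follows (class names, dimensions, and the routing scheme are illustrative assumptions, not the repo's implementation):

```python
import torch
import torch.nn as nn

class LoRA(nn.Module):
    """Low-rank adapter: x -> B(A(x)), with B zero-initialized (illustrative)."""
    def __init__(self, d_in, d_out, r=8):
        super().__init__()
        self.A = nn.Linear(d_in, r, bias=False)
        self.B = nn.Linear(r, d_out, bias=False)
        nn.init.zeros_(self.B.weight)      # adapter starts as a no-op

    def forward(self, x):
        return self.B(self.A(x))

class MoELoRAFFN(nn.Module):
    """Frozen FFN plus a softmax router over several LoRA 'experts' (sketch)."""
    def __init__(self, d_model=256, d_ff=1024, n_experts=4):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))
        for p in self.ffn.parameters():
            p.requires_grad = False        # base weights stay frozen
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(LoRA(d_model, d_model)
                                     for _ in range(n_experts))

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=-1)        # (batch, n_experts)
        out = torch.stack([e(x) for e in self.experts], dim=-1)
        return self.ffn(x) + (out * gates.unsqueeze(1)).sum(-1)

layer = MoELoRAFFN()
x = torch.randn(8, 256)
y = layer(x)
```

Because only the router and the low-rank adapters are trainable, the trainable parameter count stays a small fraction of the base FFN's.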
Xiang-Li-oss/MoDE-CoTD
S-LoRA/S-LoRA
S-LoRA: Serving Thousands of Concurrent LoRA Adapters
wutaiqiang/MoSLoRA
GCYZSL/MoLA
thu-ml/tianshou
An elegant PyTorch deep reinforcement learning library.