Must-read papers on knowledge editing for large language models.
New Reports
Report Topic | PPT Resource |
---|---|
COLING 2024 Tutorial: Knowledge Editing for Large Language Models | Google Drive |
VALSE 2024 Tutorial: Knowledge Mechanism and Editing for Large Language Models | Google Drive |
AAAI 2024 Tutorial: Knowledge Editing for Large Language Models | Google Drive |
- 2024-01-03 We release a new paper, "A Comprehensive Study of Knowledge Editing for Large Language Models", with a new benchmark KnowEdit! We look forward to any comments or discussions on this topic :)
- 2023-12-09 Our paper "Editing Language Model-based Knowledge Graph Embeddings?" has been accepted by AAAI 2024.
- 2023-11-18 We will provide a tutorial on Knowledge Editing for Large Language Models at COLING 2024.
- 2023-10-25 We will provide a tutorial on Knowledge Editing for Large Language Models at AAAI 2024.
- 2023-10-22 Our paper "Can We Edit Multimodal Large Language Models?" has been accepted by EMNLP 2023.
- 2023-10-08 Our paper "Editing Large Language Models: Problems, Methods, and Opportunities" has been accepted by EMNLP 2023.
- 2023-08-15 We release the paper "EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models."
- 2023-07 We release EasyEdit, an easy-to-use knowledge editing framework for LLMs.
- 2023-06 We will provide a tutorial on Editing Large Language Models at AACL 2023.
- 2023-05 We release a new analysis paper, "Editing Large Language Models: Problems, Methods, and Opportunities", based on this repository! We look forward to any comments or discussions on this topic :)
- 2022-12 We create this repository to maintain a paper list on Knowledge Editing.
- 🌟 Why Knowledge Editing?
- Keywords
- Comparisons of the different technologies
- 📜 Papers
- 🧰 Resources
- 🎉 Contribution
- 🚩 Citation
Knowledge Editing is a compelling field of research that focuses on facilitating efficient modifications to the behavior of models, particularly foundation models. The aim is to implement these changes within a specified scope of interest without negatively affecting the model's performance across a broader range of inputs.
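The "specified scope of interest" above is typically measured along three axes: reliability (the edited fact itself changes), generality (paraphrases within the edit scope also change), and locality (out-of-scope behavior is preserved). A minimal, model-free sketch of such an evaluation; the lookup-table "model" and all prompts here are illustrative stand-ins for a real LLM:

```python
# Toy illustration of the three standard knowledge-editing metrics:
# reliability, generality, and locality. The "model" is just a lookup
# table standing in for an LLM; names and facts are illustrative only.

base_model = {
    "Who is the UK Prime Minister?": "Boris Johnson",
    "Who wrote Hamlet?": "Shakespeare",
}

def apply_edit(model, edits):
    """Return an edited copy of the model (in-scope answers overridden)."""
    edited = dict(model)
    edited.update(edits)
    return edited

def evaluate(model, cases):
    """Fraction of (prompt, expected) pairs answered correctly."""
    hits = sum(model.get(p) == a for p, a in cases)
    return hits / len(cases)

edit = {"Who is the UK Prime Minister?": "Rishi Sunak",
        "Who leads the UK government?": "Rishi Sunak"}  # paraphrase in scope
edited_model = apply_edit(base_model, edit)

reliability = evaluate(edited_model, [("Who is the UK Prime Minister?", "Rishi Sunak")])
generality  = evaluate(edited_model, [("Who leads the UK government?", "Rishi Sunak")])
locality    = evaluate(edited_model, [("Who wrote Hamlet?", "Shakespeare")])
print(reliability, generality, locality)  # 1.0 1.0 1.0 for this toy edit
```

Real evaluations replace the exact-match lookup with model generations scored against a benchmark (e.g. ZsRE or CounterFact), but the three metrics keep this shape.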
Knowledge Editing has strong connections with the following topics:
- Updating and fixing bugs for large language models
- Language models as knowledge base, locating knowledge in large language models
- Lifelong learning, unlearning and etc.
- Security and privacy for large language models
This is a collection of research and review papers on Knowledge Editing. Suggestions and pull requests are welcome to help share the latest research progress.
Knowledge Editing for Large Language Models, AAAI 2024 Tutorial
Ningyu Zhang, Jia-Chen Gu, Yunzhi Yao, Zhen Bi, Shumin Deng. [Github] [Google Drive] [Baidu Pan]
Editing Large Language Models, AACL 2023 Tutorial
Ningyu Zhang, Yunzhi Yao, Shumin Deng. [Github] [Google Drive] [Baidu Pan]
A Comprehensive Study of Knowledge Editing for Large Language Models
Ningyu Zhang, Yunzhi Yao, Bozhong Tian, Peng Wang, Shumin Deng, Mengru Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, Siyuan Cheng, Ziwen Xu, Xin Xu, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Lei Liang, Zhiqiang Zhang, Xiaowei Zhu, Jun Zhou, Huajun Chen.
[paper] [benchmark] [code]
Editing Large Language Models: Problems, Methods, and Opportunities, EMNLP 2023 Main Conference Paper
Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang. [paper] [code]
Knowledge Editing for Large Language Models: A Survey
Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, Jundong Li. [paper]
A Survey on Knowledge Editing of Neural Networks
Vittorio Mazzia, Alessandro Pedrani, Andrea Caciolai, Kay Rottmann, Davide Bernardi. [paper]
Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges
Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, Weiqiang Zhang. [paper]
- Memory-Based Model Editing at Scale. (ICML 2022)
  Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, Chelsea Finn. [paper] [code] [demo]
- Fixing Model Bugs with Natural Language Patches. (EMNLP 2022)
  Shikhar Murty, Christopher D. Manning, Scott M. Lundberg, Marco Túlio Ribeiro. [paper] [code]
- MemPrompt: Memory-assisted Prompt Editing with User Feedback. (EMNLP 2022)
  Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang. [paper] [code] [page] [video]
- Large Language Models with Controllable Working Memory.
  Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, Sanjiv Kumar. [paper]
- Can We Edit Factual Knowledge by In-Context Learning?
  Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang. [paper]
- Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge
  Yasumasa Onoe, Michael J.Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi. [paper]
- MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
  Zexuan Zhong, Zhengxuan Wu, Christopher D. Manning, Christopher Potts, Danqi Chen. [paper] [code]
- PokeMQA: Programmable knowledge editing for Multi-hop Question Answering
  Hengrui Gu, Kaixiong Zhou, Xiaotian Han, Ninghao Liu, Ruobing Wang, Xin Wang. [paper] [code]
- Retrieval-augmented Multilingual Knowledge Editing
  Weixuan Wang, Barry Haddow, Alexandra Birch. [paper] [code]
- MEMORYLLM: Towards Self-Updatable Large Language Models
  Yu Wang, Xiusi Chen, Jingbo Shang, Julian McAuley. [paper]
- DeepEdit: Knowledge Editing as Decoding with Constraints
  Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang. [paper]
- Stable Knowledge Editing in Large Language Models.
  Zihao Wei, Liang Pang, Hanxing Ding, Jingcheng Deng, Huawei Shen, Xueqi Cheng. [paper]
- Knowledge Editing on Black-box Large Language Models.
  Xiaoshuai Song, Zhengyang Wang, Keqing He, Guanting Dong, Jinxu Zhao, Weiran Xu. [paper]
- Learning to Edit: Aligning LLMs with Knowledge Editing.
  Yuxin Jiang, Yufei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang. [paper]
- Robust and Scalable Model Editing for Large Language Models.
  Yingfa Chen, Zhengyan Zhang, Xu Han, Chaojun Xiao, Zhiyuan Liu, Chen Chen, Kuai Li, Tao Yang, Maosong Sun. [paper]
- Retrieval-Enhanced Knowledge Editing for Multi-Hop Question Answering in Language Models.
  Yucheng Shi, Qiaoyu Tan, Xuansheng Wu, Shaochen Zhong, Kaixiong Zhou, Ninghao Liu. [paper]
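Many of the memory-based methods above share a common pattern: keep the base model frozen, store edits in an external memory, and route in-scope inputs to the edited answer. A schematic sketch of that routing; the exact-match scope check is a stand-in for the learned scope classifier that real systems (e.g. SERAC-style approaches) train:

```python
# Schematic memory-based editor: edits live outside the frozen base model,
# and a scope check decides whether to answer from the edit memory.
# Exact string matching here is a toy stand-in for a learned classifier.

class MemoryEditor:
    def __init__(self, base_model):
        self.base_model = base_model  # callable: prompt -> answer
        self.edit_memory = {}         # prompt -> new answer

    def edit(self, prompt, new_answer):
        """Record an edit without touching the base model's parameters."""
        self.edit_memory[prompt] = new_answer

    def in_scope(self, prompt):
        return prompt in self.edit_memory

    def __call__(self, prompt):
        if self.in_scope(prompt):
            return self.edit_memory[prompt]  # edited (counterfactual) path
        return self.base_model(prompt)       # frozen base-model path

# Usage: edit one fact; everything out of scope falls through unchanged.
frozen = lambda p: {"capital of France": "Paris"}.get(p, "unknown")
editor = MemoryEditor(frozen)
editor.edit("capital of France", "Lyon")
assert editor("capital of France") == "Lyon"    # edited
assert editor("capital of Japan") == "unknown"  # base model untouched
```

Because the base model is never modified, edits are trivially reversible and can be batched, at the cost of an extra retrieval step at inference time.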
- Calibrating Factual Knowledge in Pretrained Language Models. (EMNLP 2022)
  Qingxiu Dong, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, Lei Li. [paper] [code]
- Transformer-Patcher: One Mistake worth One Neuron. (ICLR 2023)
  Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, Zhang Xiong. [paper] [code]
- Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. (NeurIPS 2023)
  Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi. [paper] [code]
- Neural Knowledge Bank for Pretrained Transformers
  Damai Dai, Wenbin Jiang, Qingxiu Dong, Yajuan Lyu, Qiaoqiao She, Zhifang Sui. [paper]
- Rank-One Editing of Encoder-Decoder Models
  Vikas Raunak, Arul Menezes. [paper]
- MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA. (AAAI 2024)
  Lang Yu, Qin Chen, Jie Zhou, Liang He. [paper] [code]
- MPN: Leveraging Multilingual Patch Neuron for Cross-lingual Model Editing
  Nianwen Si, Hao Zhang, Weiqiang Zhang. [paper]
- SWEA: Changing Factual Knowledge in Large Language Models via Subject Word Embedding Altering
  Xiaopeng Li, Shasha Li, Bin Ji, Shezheng Song. [paper]
- WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing
  Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao. [paper]
- Inspecting and Editing Knowledge Representations in Language Models
  Evan Hernandez, Belinda Z. Li, Jacob Andreas. [paper] [code]
- Plug-and-Play Adaptation for Continuously-updated QA. (ACL 2022 Findings)
  Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee. [paper] [code]
- Modifying Memories in Transformer Models.
  Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix Yu, Sanjiv Kumar. [paper]
- Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models
  Shiwen Ni, Dingwei Chen, Chengming Li, Xiping Hu, Ruifeng Xu, Min Yang. [paper]
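The parametric-arithmetic idea above (and task arithmetic more generally) treats knowledge updates as vector operations on weights: subtract a weight delta to forget, add one to learn. A toy numeric sketch with plain Python lists standing in for parameter tensors; all values are illustrative:

```python
# Sketch of knowledge updating via parameter arithmetic: a "task vector"
# is the difference between fine-tuned and base weights, and editing
# amounts to adding (to learn) or subtracting (to forget) such vectors.
# The three-element "models" below are illustrative stand-ins.

def task_vector(finetuned, base):
    """Elementwise weight delta induced by fine-tuning."""
    return [f - b for f, b in zip(finetuned, base)]

def apply_vector(params, vector, alpha=1.0):
    """Add a scaled task vector to the parameters."""
    return [p + alpha * v for p, v in zip(params, vector)]

base     = [0.5, -1.0, 2.0]
old_fact = [0.7, -1.1, 2.4]  # weights after fine-tuning on outdated facts
new_fact = [0.4, -0.8, 1.9]  # weights after fine-tuning on updated facts

forget = task_vector(old_fact, base)
learn  = task_vector(new_fact, base)

# Forget the old knowledge (negative scaling), then inject the new.
edited = apply_vector(apply_vector(base, forget, alpha=-1.0), learn)
```

In practice the deltas are full parameter tensors and the scaling coefficients are tuned, but the update rule is exactly this addition and subtraction.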
- Editing Factual Knowledge in Language Models. (EMNLP 2021)
  Nicola De Cao, Wilker Aziz, Ivan Titov. [paper] [code]
- Fast Model Editing at Scale. (ICLR 2022)
  Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning. [paper] [code] [page]
- Editable Neural Networks. (ICLR 2020)
  Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitry V. Pyrkin, Sergei Popov, Artem Babenko. [paper] [code]
- Editing Language Model-based Knowledge Graph Embeddings. (AAAI 2024)
  Siyuan Cheng, Ningyu Zhang, Bozhong Tian, Xi Chen, Qingbing Liu, Huajun Chen. [paper] [code]
- Massive Editing for Large Language Model via Meta Learning. (ICLR 2024)
  Chenmien Tan, Ge Zhang, Jie Fu. [paper] [code]
- Editing a classifier by rewriting its prediction rules. (NeurIPS 2021)
  Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry. [paper] [code]
- Language Anisotropic Cross-Lingual Model Editing.
  Yang Xu, Yutai Hou, Wanxiang Che. [paper]
- Repairing Neural Networks by Leaving the Right Past Behind.
  Ryutaro Tanno, Melanie F. Pradier, Aditya Nori, Yingzhen Li. [paper]
- Locating and Editing Factual Associations in GPT. (NeurIPS 2022)
  Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov. [paper] [code] [page] [video]
- Mass-Editing Memory in a Transformer. (ICLR 2023)
  Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, David Bau. [paper] [code] [page] [demo]
- Editing models with task arithmetic. (ICLR 2023)
  Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi. [paper]
- Editing Common Sense in Transformers. (EMNLP 2023)
  Anshita Gupta, Debanjan Mondal, Akshay Krishna Sheshadri, Wenlong Zhao, Xiang Lorraine Li, Sarah Wiegreffe, Niket Tandon. [paper]
- Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs. (EACL 2023)
  Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer. [paper] [code]
- Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark. (ACL 2023 Findings)
  Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez. [paper]
- Knowledge Neurons in Pretrained Transformers. (ACL 2022)
  Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, Furu Wei. [paper] [code] [code by EleutherAI]
- LEACE: Perfect linear concept erasure in closed form.
  Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman. [paper]
- Transformer Feed-Forward Layers Are Key-Value Memories. (EMNLP 2021)
  Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy. [paper]
- Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space. (EMNLP 2022)
  Mor Geva, Avi Caciularu, Kevin Ro Wang, Yoav Goldberg. [paper]
- PMET: Precise Model Editing in a Transformer. (AAAI 2024)
  Xiaopeng Li, Shasha Li, Shezheng Song, Jing Yang, Jun Ma, Jie Yu. [paper] [code]
- Unlearning Bias in Language Models by Partitioning Gradients. (ACL 2023 Findings)
  Charles Yu, Sullam Jeoung, Anish Kasi, Pengfei Yu, Heng Ji. [paper] [code]
- DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models. (EMNLP 2023)
  Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong. [paper]
- Untying the Reversal Curse via Bidirectional Language Model Editing
  Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu. [paper]
- Trace and Edit Relation Associations in GPT
  Jiahang Li, Taoyu Chen, Yuanli Wang. [paper]
- Consecutive Model Editing with Batch alongside HooK Layers
  Shuaiyi Li, Yang Deng, Deng Cai, Hongyuan Lu, Liang Chen, Wai Lam. [paper]
- A Unified Framework for Model Editing
  Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli. [paper]
- Detoxifying Large Language Models via Knowledge Editing
  Mengru Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen. [paper]
- Locating and Editing Factual Associations in Mamba
  Arnab Sen Sharma, David Atkinson, David Bau. [paper]
- Large Language Model Bias Mitigation from the Perspective of Knowledge Editing
  Ruizhe Chen, Yichen Li, Zikai Xiao, Zuozhu Liu. [paper]
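Several locate-then-edit methods in this section (ROME and its successors) view a feed-forward weight matrix as a linear associative memory and insert a fact with a rank-one update, so that the located "key" vector retrieves a new "value". A stripped-down sketch of the unconstrained form of that update; real methods add constraints to preserve the other stored associations:

```python
# Simplified rank-one edit in the spirit of locate-then-edit methods:
# treat a weight matrix W as an associative memory and add a rank-one
# term so that key k now retrieves the new value v_new, i.e.
# (W + u k^T) k = v_new  with  u = (v_new - W k) / (k . k).
# Pure-Python matrices; the 2x2 example values are illustrative.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rank_one_edit(W, k, v_new):
    v_old = matvec(W, k)
    norm = sum(x * x for x in k)                      # k . k
    u = [(vn - vo) / norm for vn, vo in zip(v_new, v_old)]
    return [[w + ui * kj for w, kj in zip(row, k)]    # W + u k^T
            for row, ui in zip(W, u)]

W = [[1.0, 0.0], [0.0, 1.0]]
k = [1.0, 2.0]        # "key" vector for the edited fact
v_new = [3.0, -1.0]   # desired output for that key
W_edited = rank_one_edit(W, k, v_new)

# Keys orthogonal to k are unaffected, since the update is u k^T.
k_perp = [2.0, -1.0]
```

Here `matvec(W_edited, k)` recovers `v_new`, while `matvec(W_edited, k_perp)` matches the original `matvec(W, k_perp)`; actual methods derive k and v_new from causal-tracing analyses and solve a constrained version of this update.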
- FRUIT: Faithfully Reflecting Updated Information in Text. (NAACL 2022)
  Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang. [paper] [code]
- Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning. (EMNLP 2022)
  Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark. [paper] [code] [video]
- Towards Tracing Factual Knowledge in Language Models Back to the Training Data. (EMNLP 2022)
  Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu. [paper]
- Prompting GPT-3 To Be Reliable.
  Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, Lijuan Wang. [paper]
- Patching open-vocabulary models by interpolating weights. (NeurIPS 2022)
  Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, Ludwig Schmidt. [paper] [code]
- Decouple knowledge from parameters for plug-and-play language modeling. (ACL 2023 Findings)
  Xin Cheng, Yankai Lin, Xiuying Chen, Dongyan Zhao, Rui Yan. [paper] [code]
- Backpack Language Models
  John Hewitt, John Thickstun, Christopher D. Manning, Percy Liang. [paper]
- Learning to Model Editing Processes. (EMNLP 2022)
  Machel Reid, Graham Neubig. [paper]
- Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications.
  Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu. [paper]
- DUnE: Dataset for Unified Editing. (EMNLP 2023)
  Afra Feyza Akyürek, Eric Pan, Garry Kuwanto, Derry Wijaya. [paper]
- See the Unseen: Better Context-Consistent Knowledge-Editing by Noises.
  Youcheng Huang, Wenqiang Lei, Zheng Zhang, Jiancheng Lv, Shuicheng Yan. [paper]
- Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models.
  Rima Hazra, Sayan Layek, Somnath Banerjee, Soujanya Poria. [paper]
- Model Editing with Canonical Examples.
  John Hewitt, Sarah Chen, Lanruo Lora Xie. [paper]
- EVEDIT: Event-based Knowledge Editing with Deductive Editing Boundaries.
  Jiateng Liu, Pengfei Yu, Yuji Zhang, Sha Li, Zixuan Zhang, Heng Ji. [paper]
- Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models.
  Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu. [paper]
- Knowledge Graph Enhanced Large Language Model Editing.
  Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen. [paper]
- Editing Factual Knowledge and Explanatory Ability of Medical Large Language Models.
  Derong Xu, Ziheng Zhang, Zhihong Zhu, Zhenxi Lin. [paper]
- KEBench: A Benchmark on Knowledge Editing for Large Vision-Language Models.
  Han Huang, Haitian Zhong, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan. [paper]
- CollabEdit: Towards Non-destructive Collaborative Knowledge Editing.
  Jiamu Zheng, Jinghuai Zhang, Futing Wang, Tianyu Du, Tao Lin. [paper]
- TAXI: Evaluating Categorical Knowledge Editing for Language Models.
  Derek Powell, Walter Gerych, Thomas Hartvigsen. [paper]
- Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models.
  Peter Hase, Mohit Bansal, Been Kim, Asma Ghandeharioun. [paper] [code]
- Dissecting Recall of Factual Associations in Auto-Regressive Language Models
  Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson. [paper]
- Evaluating the Ripple Effects of Knowledge Editing in Language Models
  Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, Mor Geva. [paper]
- Edit at your own risk: evaluating the robustness of edited models to distribution shifts.
  Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge. [paper]
- Journey to the Center of the Knowledge Neurons: Discoveries of Language-Independent Knowledge Neurons and Degenerate Knowledge Neurons. (AAAI 2024)
  Yuheng Chen, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao. [paper]
- Linearity of Relation Decoding in Transformer Language Models
  Evan Hernandez, Martin Wattenberg, Arnab Sen Sharma, Jacob Andreas, Tal Haklay, Yonatan Belinkov, Kevin Meng, David Bau. [paper]
- KLoB: a Benchmark for Assessing Knowledge Locating Methods in Language Models
  Yiming Ju, Zheng Zhang. [paper]
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. (NeurIPS 2023)
  Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg. [paper] [code]
- Emptying the Ocean with a Spoon: Should We Edit Models? (EMNLP 2023 Findings)
  Yuval Pinter, Michael Elhadad. [paper]
- Unveiling the Pitfalls of Knowledge Editing for Large Language Models
  Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen, Huajun Chen. [paper]
- Editing Personality for LLMs
  Shengyu Mao, Ningyu Zhang, Xiaohan Wang, Mengru Wang, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen. [paper]
- Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness. (EMNLP 2023 Findings)
  Zichao Li, Ines Arous, Siva Reddy, Jackie C.K. Cheung. [paper]
- Finding and Editing Multi-Modal Neurons in Pre-Trained Transformer
  Haowen Pan, Yixin Cao, Xiaozhi Wang, Xun Yang. [paper]
- Assessing Knowledge Editing in Language Models via Relation Perspective
  Yifan Wei, Xiaoyan Yu, Huanhuan Ma, Fangyu Lei, Yixuan Weng, Ran Song, Kang Liu. [paper]
- History Matters: Temporal Knowledge Editing in Large Language Model. (AAAI 2024)
  Xunjian Yin, Jin Jiang, Liming Yang, Xiaojun Wan. [paper]
- Cross-Lingual Knowledge Editing in Large Language Models
  Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu. [paper]
- Large Language Models Relearn Removed Concepts
  Michelle Lo, Shay B. Cohen, Fazl Barez. [paper]
- Model Editing Can Hurt General Abilities of Large Language Models
  Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, Nanyun Peng. [paper]
- Model Editing at Scale leads to Gradual and Catastrophic Forgetting
  Akshat Gupta, Anurag Rao, Gopala Anumanchipalli. [paper]
- Propagation and Pitfalls: Reasoning-based Assessment of Knowledge Editing through Counterfactual Tasks
  Wenyue Hua, Jiang Guo, Mingwen Dong, Henghui Zhu, Patrick Ng, Zhiguo Wang. [paper]
- Long-form evaluation of model editing
  Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu. [paper]
- The Butterfly Effect of Model Editing: Few Edits Can Trigger Large Language Models Collapse
  Wanli Yang, Fei Sun, Xinyu Ma, Xun Liu, Dawei Yin, Xueqi Cheng. [paper]
- The Da Vinci Code of Large Pre-trained Language Models: Deciphering Degenerate Knowledge Neurons
  Yuheng Chen, Pengfei Cao, Yubo Chen, Yining Wang, Shengping Liu, Kang Liu, Jun Zhao. [paper]
- Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models
  Zihao Lin, Mohammad Beigi, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Qifan Wang, Wenpeng Yin, Lifu Huang. [paper]
- “Flex Tape Can’t Fix That”: Bias and Misinformation in Edited Language Models
  Karina Halevy, Anna Sotnikova, Badr AlKhamissi, Syrielle Montariol, Antoine Bosselut. [paper]
- The Missing Piece in Model Editing: A Deep Dive into the Hidden Damage Brought By Model Editing
  Jianchen Wang, Zhouhong Gu, Zhuozhi Xiong, Hongwei Feng, Yanghua Xiao. [paper]
- Beyond Memorization: The Challenge of Random Memory Access in Language Models
  Tongyao Zhu, Qian Liu, Liang Pang, Zhengbao Jiang, Min-Yen Kan, Min Lin. [paper]
- Interpreting Key Mechanisms of Factual Recall in Transformer-Based Language Models
  Ang Lv, Kaiyi Zhang, Yuhan Chen, Yulong Wang, Lifeng Liu, Ji-Rong Wen, Jian Xie, Rui Yan. [paper]
- MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models
  Zihao Wei, Jingcheng Deng, Liang Pang, Hanxing Ding, Huawei Shen, Xueqi Cheng. [paper]
- Is Your LLM Outdated? Benchmarking LLMs & Alignment Algorithms for Time-Sensitive Knowledge
  Seyed Mahed Mousavi, Simone Alghisi, Giuseppe Riccardi. [paper]
- Neighboring Perturbations of Knowledge Editing on Large Language Models. (ICML 2024)
  Jun-Yu Ma, Jia-Chen Gu, Ningyu Zhang, Zhen-Hua Ling. [paper]
- Event-level Knowledge Editing
  Hao Peng, Xiaozhi Wang, Chunyang Li, Kaisheng Zeng, Jiangshan Duo, Yixin Cao, Lei Hou, Juanzi Li. [paper]
- Updating Language Models with Unstructured Facts: Towards Practical Knowledge Editing
  Xiaobao Wu, Liangming Pan, William Yang Wang, Anh Tuan Luu. [paper]
- Detecting Edited Knowledge in Language Models
  Paul Youssef, Zhixue Zhao, Jörg Schlötterer, Christin Seifert. [paper]
Edit Type | Benchmarks & Datasets |
---|---|
Fact Knowledge | ZSRE, ZSRE plus, CounterFact, CounterFact plus, CounterFact+, ECBD, MQuAKE, DepEdit |
Multi-Lingual | Bi-ZsRE, Eva-KELLM, MzsRE |
Sentiment | Convsent |
Bias | Bias in Bios |
Hallucination | WikiBio |
Commonsense | MEMITcsk |
Reasoning | Eva-KELLM |
Privacy Information Protection | PrivQA, Knowledge Sanitation, Enron |
Unified Benchmark | DUnE |
Toxic Information | RealToxicityPrompts, Toxicity Unlearning |
MultiModal | MMEdit, KEBench |
EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models.
FastEdit: Editing large language models within 10 seconds
Please cite our paper if you find our work useful.
@article{zhang2024comprehensive,
title={A Comprehensive Study of Knowledge Editing for Large Language Models},
author={Zhang, Ningyu and Yao, Yunzhi and Tian, Bozhong and Wang, Peng and Deng, Shumin and Wang, Mengru and Xi, Zekun and Mao, Shengyu and Zhang, Jintian and Ni, Yuansheng and others},
journal={arXiv preprint arXiv:2401.01286},
year={2024}
}
@article{DBLP:journals/corr/abs-2305-13172,
author = {Yunzhi Yao and
Peng Wang and
Bozhong Tian and
Siyuan Cheng and
Zhoubo Li and
Shumin Deng and
Huajun Chen and
Ningyu Zhang},
title = {Editing Large Language Models: Problems, Methods, and Opportunities},
journal = {CoRR},
volume = {abs/2305.13172},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2305.13172},
doi = {10.48550/arXiv.2305.13172},
eprinttype = {arXiv},
eprint = {2305.13172},
timestamp = {Tue, 30 May 2023 17:04:46 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2305-13172.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
- There may be cases where we have missed important works in this field; please contribute to this repo! Thanks in advance for your efforts.
- We would like to express our gratitude to Longhui Yu for the kind reminder about the missing papers.