LLM-security
该资源主要总结和积累大模型(LLM)及ChatGPT相关的安全学术论文或技术博客,希望对自己和大家有所帮助!欢迎大家一起补充,加油。
目录:
- Keywords
- Paper
- Code
- Blog
- Zhihu-Topic
- 作者博客
Keywords
安全相关术语:
- Prompt Injection
- Insecure Output Handling
- Training Data Poisoning
- Model Denial of Service
- Supply Chain Vulnerabilities
- Sensitive Information Disclosure
- Insecure Plugin Design
- Excessive Agency
- Overreliance
- Model Theft
LLM术语:
- ChatGPT
- Prompt Learning
- AIGC
对抗样本:
- 与LLM安全的关联及区别
AI安全:
- AI for security
- security of AI
- 模型安全和数据安全
Paper
LLM:
- Universal and Transferable Adversarial Attacks on Aligned Language Models
- A Survey of Large Language Models
- Jailbreaker: Automated Jailbreak Across Multiple Large Language Model Chatbots
- Prompt Injection attack against LLM-integrated Applications
- Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study
AI for Security:
Code
- https://github.com/llm-attacks/llm-attacks
- https://github.com/LLMSecurity/HouYi
- https://github.com/RUCAIBox/LLMSurvey
- https://github.com/RUCAIBox/LLMSurvey/blob/main/assets/LLM_Survey_Chinese.pdf
Blog
LLM安全相关:
其它问题:
Zhihu-Topic
LLM安全相关:
其它问题:
作者博客
BY: Eastmount 2023-10-25