| Source | Author | Date | Title |
| --- | --- | --- | --- |
| WithSecure Labs | Benjamin Hull, Donato Capitella | 08-Apr-24 | Domain-specific prompt injection detection with BERT classifier |
| WithSecure Labs | Donato Capitella | 21-Feb-24 | Should you let ChatGPT control your browser? / YouTube Video |
| Prompt Injection Explanation with video examples | Arnav Bathla | 12-Dec-23 | Prompt Injection Explanation with video examples |
| WithSecure Labs | Donato Capitella | 04-Dec-23 | A Case Study in Prompt Injection for ReAct LLM Agents / YouTube Video |
| Cyber Security Against AI Wiki | Aditya Rana | 04-Dec-23 | Cyber Security AI Wiki |
| iFood Cybersec Team | Emanuel Valente | 04-Sep-23 | Prompt Injection: Exploring, Preventing & Identifying Langchain Vulnerabilities |
| PDF | Sandy Dunn | 15-Oct-23 | AI Threat Mind Map |
| Medium | Ken Huang | 11-Jun-23 | LLM-Powered Applications’ Architecture Patterns and Security Controls |
| Medium | Avinash Sinha | 02-Feb-23 | AI-ChatGPT-Decision Making Ability- An Over Friendly Conversation with ChatGPT |
| Medium | Avinash Sinha | 06-Feb-23 | AI-ChatGPT-Decision Making Ability- Hacking the Psychology of ChatGPT- ChatGPT Vs Siri |
| Wired | Matt Burgess | 13-Apr-23 | The Hacking of ChatGPT Is Just Getting Started |
| The Math Company | Arjun Menon | 23-Jan-23 | Data Poisoning and Its Impact on the AI Ecosystem |
| IEEE Spectrum | Payal Dhar | 24-Mar-23 | Protecting AI Models from “Data Poisoning” |
| AMB Crypto | Suzuki Shillsalot | 30-Apr-23 | Here’s how anyone can Jailbreak ChatGPT with these top 4 methods |
| Techopedia | Kaushik Pal | 22-Apr-23 | What is Jailbreaking in AI models like ChatGPT? |
| The Register | Thomas Claburn | 26-Apr-23 | How prompt injection attacks hijack today's top-end AI – and it's tough to fix |
| Itemis | Rafael Tappe Maestro | 14-Feb-23 | The Rise of Large Language Models ~ Part 2: Model Attacks, Exploits, and Vulnerabilities |
| Hidden Layer | Eoin Wickens, Marta Janus | 23-Mar-23 | The Dark Side of Large Language Models: Part 1 |
| Hidden Layer | Eoin Wickens, Marta Janus | 24-Mar-23 | The Dark Side of Large Language Models: Part 2 |
| Embrace the Red | Johann Rehberger (wunderwuzzi) | 29-Mar-23 | AI Injections: Direct and Indirect Prompt Injections and Their Implications |
| Embrace the Red | Johann Rehberger (wunderwuzzi) | 15-Apr-23 | Don't blindly trust LLM responses. Threats to chatbots |
| MufeedDVH | Mufeed | 09-Dec-22 | Security in the age of LLMs |
| danielmiessler.com | Daniel Miessler | 15-May-23 | The AI Attack Surface Map v1.0 |
| Dark Reading | Gary McGraw | 20-Apr-23 | Expert Insight: Dangers of Using Large Language Models Before They Are Baked |
| Honeycomb.io | Phillip Carter | 25-May-23 | All the Hard Stuff Nobody Talks About when Building Products with LLMs |
| Wired | Matt Burgess | 25-May-23 | The Security Hole at the Heart of ChatGPT and Bing |
| BizPacReview | Terresa Monroe-Hamilton | 30-May-23 | ‘I was unaware’: NY attorney faces sanctions after using ChatGPT to write brief filled with ‘bogus’ citations |
| Washington Post | Pranshu Verma | 18-May-23 | A professor accused his class of using ChatGPT, putting diplomas in jeopardy |
| Kudelski Security Research | Nathan Hamiel | 25-May-23 | Reducing The Impact of Prompt Injection Attacks Through Design |
| AI Village | GTKlondike | 07-Jun-23 | Threat Modeling LLM Applications |
| Embrace the Red | Johann Rehberger | 28-Mar-23 | ChatGPT Plugin Exploit Explained |
| NVIDIA Developer | Will Pearce, Joseph Lucas | 14-Jun-23 | NVIDIA AI Red Team: An Introduction |
| Kanaries | Naomi Clarkson | 07-Apr-23 | Google Bard Jailbreak |