microsoft/generative-ai-for-beginners

Talk about prompt injection and LLM security

simonw opened this issue · 5 comments

This tutorial doesn't yet talk about the security implications of building software on top of LLMs - in particular the prompt injection class of vulnerabilities.

I think this is a problem. Prompt injection is a particularly nasty vulnerability, because if people don't understand it they are almost doomed to build systems that are vulnerable to it.

It also means that a lot of the obvious applications of generative AI are not safe to build. A personal assistant that can summarize and reply to your email for example - that's not safe, because there might be a prompt injection attack in one of the emails that it reads.

I wrote more about this here: https://simonwillison.net/2023/Apr/14/worst-that-can-happen/ (and in this series of posts).

👋 Thanks for contributing @simonw! We will review the issue and get back to you soon.

Hey @simonw , great callout! We are working on an additional 4 lessons which includes one one prompt injection / security. Will make sure to point to some of the great content you have given to this community. Keeping this open until we deliver on this.

Great to hear!

我就认可

This issue has not seen any action for a while! Closing for now, but it can be reopened at a later date.