Pinned Repositories
Backdoor-Attacks-on-Crowd-Counting
Code for the ACM MM paper "Backdoor Attacks on Crowd Counting".
JailBreak-Large-Language-Model-With-A-Malicous-System-Role
We present a novel method that jailbreaks large language models by assigning them a malicious system role. It reveals the potential for unethical or illegal use of a large language model, such as ChatGPT, to breach the security measures put in place to limit its access and permissions within a controlled environment.
Nathangitlab's Repositories
Nathangitlab/Backdoor-Attacks-on-Crowd-Counting
Code for the ACM MM paper "Backdoor Attacks on Crowd Counting".
Nathangitlab/JailBreak-Large-Language-Model-With-A-Malicous-System-Role
We present a novel method that jailbreaks large language models by assigning them a malicious system role. It reveals the potential for unethical or illegal use of a large language model, such as ChatGPT, to breach the security measures put in place to limit its access and permissions within a controlled environment.