AVLLM

Code for "Evaluating the Validity of Word-level Adversarial Attacks with Large Language Models", ACL 2024

Primary language: Python