Can a developer build an agent that conducts a black-box adversarial attack?

This repo uses a Deep Reinforcement Learning agent that conducts an adversarial attack using only the inputs and outputs of a classification model.
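The sketch below illustrates the general idea of such a black-box attack loop: the attacker can only query the classifier and observe its output probabilities, and it perturbs the input until the prediction flips. All names here (`query_model`, `attack`, the dummy classifier internals) are hypothetical placeholders for illustration, not this repo's actual API, and the random-perturbation action is a stand-in for the choices a trained RL policy would make.

```python
# Minimal sketch of a black-box adversarial attack loop (hypothetical names,
# not the repo's actual API). The attacker only sees model outputs.
import numpy as np

def query_model(image):
    """Black-box classifier stub: the attacker only sees output probabilities."""
    # Stand-in for any image classifier; its internals are hidden from the agent.
    rng = np.random.default_rng(int(image.sum() * 1e6) % (2**32))
    logits = rng.normal(size=10)
    return np.exp(logits) / np.exp(logits).sum()

def attack(image, true_label, steps=200, epsilon=0.05):
    """Iteratively perturb the image using only the model's outputs as feedback."""
    adv = image.copy()
    best_conf = query_model(adv)[true_label]
    for _ in range(steps):
        # Action: a small bounded perturbation. A DRL agent would pick this
        # from a learned policy instead of sampling it at random.
        candidate = np.clip(adv + np.random.uniform(-epsilon, epsilon, image.shape), 0.0, 1.0)
        probs = query_model(candidate)
        # Reward signal: lower confidence in the true class.
        if probs[true_label] < best_conf:
            adv, best_conf = candidate, probs[true_label]
        if probs.argmax() != true_label:
            break  # misclassification achieved
    return adv

if __name__ == "__main__":
    img = np.random.rand(28, 28)  # dummy 28x28 grayscale input
    adversarial = attack(img, true_label=3)
    print("true-class confidence after attack:", query_model(adversarial)[3])
```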