Trusted-AI/adversarial-robustness-toolbox

Implementation of ObjectSeeker Certifiably Robust Defense

f4str opened this issue · 0 comments

f4str commented

Is your feature request related to a problem? Please describe.
ObjectSeeker is a certifiably robust defense against patch attacks on object detection models. The defense was originally intended for evasion patches, but it may generalize to poisoning since it is patch-agnostic. Therefore, this defense may be a good baseline against the BadDet poisoning attack on object detection models.

Paper link: https://arxiv.org/abs/2202.01811

Describe the solution you'd like
Since this is a certifiably robust defense, it should be implemented under art.estimators.certification. Just like the randomized_smoothing submodule, a new object_seeker submodule will be created, containing the PyTorch implementation for now (TensorFlow may be added later).
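Assuming the new submodule mirrors the layout of randomized_smoothing, the directory structure might look roughly like this (file names are a sketch, not final):

```
art/estimators/certification/object_seeker/
├── __init__.py      # exports ObjectSeekerPyTorch
└── pytorch.py       # ObjectSeekerPyTorch implementation
```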

The ObjectSeekerPyTorch class will take in an object detection model (Faster R-CNN or YOLO) and implement the fit, predict, and certify methods that ART certification estimators typically provide.
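At the core of ObjectSeeker's prediction is running the base detector on masked versions of the input and merging the recovered boxes with the base predictions, pruning masked-view boxes that duplicate objects the base detector already found. A minimal NumPy sketch of that pruning step follows; `box_iou` and `prune_masked_boxes` are hypothetical helper names for illustration, not the final ART API, and the IoU threshold is an assumed default.

```python
import numpy as np


def box_iou(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise IoU between boxes a (N, 4) and b (M, 4) in (x1, y1, x2, y2) format."""
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    union = area_a[:, None] + area_b[None, :] - inter
    return inter / np.maximum(union, 1e-9)


def prune_masked_boxes(base_boxes: np.ndarray,
                       masked_boxes: np.ndarray,
                       iou_threshold: float = 0.6) -> np.ndarray:
    """Keep only masked-view boxes that do not overlap any base prediction.

    The surviving boxes are candidate objects that the base detector missed
    (e.g. because a patch suppressed them) but masked inference recovered.
    """
    if len(base_boxes) == 0:
        return masked_boxes
    ious = box_iou(masked_boxes, base_boxes)
    keep = ious.max(axis=1) < iou_threshold
    return masked_boxes[keep]
```

The final predictions would be the union of the base boxes and the pruned masked-view boxes; the certify method would then reason about which objects survive under all admissible patch locations.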

Describe alternatives you've considered
This could also be implemented somewhere under art.defences since it is a defense, but it makes more sense under art.estimators.certification since it is a certifiably robust defense.

Additional context
Only a PyTorch implementation will be done for now, since the targeted object detection models (Faster R-CNN and YOLO) are most commonly used with PyTorch. A TensorFlow implementation may be added in the future, but this is low priority.