This is a PyTorch implementation of the paper "Reinforcement Learning-Based Black-Box Model Inversion Attacks", accepted at CVPR 2023.
This code has been tested with Python 3.8.8, PyTorch 1.8.0, and CUDA 10.2.89.
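As a quick sanity check of your environment, the snippet below (a minimal sketch, not part of the original repository) prints the installed Python, PyTorch, and CUDA versions and confirms that a GPU is visible.

```python
import sys

import torch

# Report the interpreter and framework versions; the code was tested with
# Python 3.8.8, PyTorch 1.8.0, and CUDA 10.2.89.
print(f"Python : {sys.version.split()[0]}")
print(f"PyTorch: {torch.__version__}")
print(f"CUDA   : {torch.version.cuda}")

# The code was tested against CUDA, so a visible GPU is expected.
if torch.cuda.is_available():
    print(f"GPU    : {torch.cuda.get_device_name(0)}")
else:
    print("Warning: no CUDA device detected; experiments may be slow or fail.")
```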
Model weights for the experiments can be downloaded from the link below:
https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O?usp=sharing
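If you prefer to fetch the weights from the command line, one option (an assumption, not part of the original instructions) is the gdown package, which can download a shared Google Drive folder. The output directory name below is only a placeholder.

```python
# Sketch: download the shared weights folder with gdown (pip install gdown).
# "checkpoints" is a hypothetical target directory; place the files wherever
# run_experiments.sh expects them.
import gdown

url = "https://drive.google.com/drive/folders/15Xcqoz53TQVUUyZe9HNtchoCriLUeQ-O?usp=sharing"
gdown.download_folder(url=url, output="checkpoints", quiet=False)
```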
Please check the commands included in run_experiments.sh.
There are commands for both the simplified experiment and the experiments reported in the paper.
To reproduce the results, run:

```bash
bash run_experiments.sh
```
This repository contains code snippets and some model weights from the repositories listed below.
https://github.com/MKariya1998/GMI-Attack