Adversarial_EKG_Classifier

Code Repository for the following project:

Machine learning is a powerful tool that can help automate clinical processes such as electrocardiogram (EKG) analysis at the level of professionally trained cardiologists. However, due to the sensitive nature of the data being processed, there is an incentive for medical fraud and other unethical behavior, so it is of utmost importance to deploy secure neural networks. A modern threat comes in the form of the adversarial attack: an algorithm that crafts imperceptible perturbations to input data that drastically lower the classification accuracy of the target machine learning model.

I design defenses against such attacks by first implementing the attacks themselves and then using them to stress-test the defenses. Specifically, I implemented a projected gradient descent (PGD) attack, one of the most widely used forms of adversarial attack. The attack seeks to maximize classification error by repeatedly taking single-step updates in the direction of steepest loss ascent, given by the sign of the loss gradient, and projecting each update back into a bounded perturbation region around the original input.

I then implemented two adversarial defense techniques: randomized input preprocessing and adversarial training. Randomized input preprocessing randomly reshapes each rhythm before it is passed into the model, dampening the effect of any perturbations present in the EKG sample. Adversarial training injects adversarial samples into the model's training set, yielding a more robust model that is less affected by adversarial attacks. I found that neither technique alone provides a robust defense, but when combined, they prove effective against adversarial attacks. Minimal sketches of the attack and both defenses follow.
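The sketch below shows the core PGD loop in PyTorch. The framework choice, function name, and hyperparameter values are illustrative assumptions rather than the repository's exact implementation; it assumes `model` maps a batch of rhythms to class logits.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=10):
    """Craft adversarial examples via projected gradient descent.

    Starting from the clean input, repeatedly step in the direction
    of the loss-gradient sign (steepest loss ascent), then project
    the perturbation back into the L-infinity ball of radius `eps`.
    Hyperparameters here are placeholders, not tuned values.
    """
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Single-step update in the direction that maximally increases loss.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection step: keep the total perturbation within the eps-ball.
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv.detach()
```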
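A randomized input preprocessing step could look like the following, which randomly time-warps a batch of rhythms by resampling to a random length and back. The specific transform and scale range are assumptions for illustration; any sufficiently random reshaping serves the same purpose of disrupting a carefully aligned perturbation.

```python
import torch
import torch.nn.functional as F

def randomized_reshape(x, min_scale=0.9, max_scale=1.1):
    """Randomly resample rhythms to a new length and back again.

    The random warp scrambles any perturbation that was aligned to
    specific sample positions, while leaving the overall waveform
    morphology largely intact. Expects x of shape (batch, channels, length).
    """
    length = x.shape[-1]
    scale = torch.empty(1).uniform_(min_scale, max_scale).item()
    warped = F.interpolate(x, size=int(length * scale),
                           mode="linear", align_corners=False)
    # Resample back so the model always sees a fixed-length input.
    return F.interpolate(warped, size=length,
                         mode="linear", align_corners=False)
```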
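Adversarial training can then reuse the attack to augment each batch. This is a hedged sketch of a single training step, again in PyTorch and reusing the `pgd_attack` function from the first sketch; the equal weighting of the clean and adversarial losses is an illustrative choice, not necessarily the repository's.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y):
    """One optimizer step on a batch augmented with PGD samples."""
    # Craft adversarial samples in eval mode so batch-norm statistics
    # are not polluted by the attack's forward passes.
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()

    optimizer.zero_grad()
    # Train on both clean and adversarial samples so the model stays
    # accurate on normal rhythms while becoming robust to attacks.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```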