
DeepDOT

Detection of Trojan Attacks against Deep Neural Networks.

Project Introduction

With the rapid increase in available computing power and the demand for better solutions to problems that cannot be solved directly by conventional algorithms, neural networks are gaining popularity not only in academic research but also in real-world applications. They are also receiving mainstream attention as they are deployed in mission-critical settings such as self-driving cars, which makes security a very real concern, and one that will reach many consumer-grade products within a few years. Because this is a relatively new research area, there are many opportunities to theorize, experiment, and discover new results.

The goal of this project is to study trojan attacks on neural networks and investigate how such attacks can be detected and mitigated.
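To make the idea concrete, here is a small, self-contained sketch (not code from this repository): a trojan attack stamps a fixed trigger pattern onto inputs so that a compromised model maps them to an attacker-chosen class, while a STRIP-style test (Gao et al., 2019, cited below) flags such inputs by blending them with clean images and checking whether the model's predictions stay suspiciously confident. The `model_predict` function here is a hypothetical placeholder standing in for a trained classifier.

```python
# Minimal illustrative sketch, not the implementation in this repository.
# Attack side: stamp a fixed trigger patch onto an input image.
# Detection side: a STRIP-style entropy test (Gao et al., 2019) -- blend the
# suspect input with random clean images and average the entropy of the
# model's predictions; trojaned inputs keep predicting the attacker's target
# class, so their entropy stays unusually low.
# `model_predict` below is a hypothetical placeholder for a trained classifier.
import numpy as np

def stamp_trigger(image, trigger, x=0, y=0):
    """Overwrite a small patch of `image` with a fixed trigger pattern."""
    patched = image.copy()
    h, w = trigger.shape
    patched[y:y + h, x:x + w] = trigger
    return patched

def strip_entropy(image, clean_images, model_predict, n=16, alpha=0.5):
    """Average Shannon entropy of predictions on blends of `image` with clean images."""
    idx = np.random.choice(len(clean_images), size=n, replace=False)
    entropies = []
    for i in idx:
        blended = alpha * image + (1 - alpha) * clean_images[i]
        probs = np.clip(model_predict(blended), 1e-12, 1.0)  # shape: (num_classes,)
        entropies.append(-np.sum(probs * np.log2(probs)))
    return float(np.mean(entropies))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((100, 28, 28))      # stand-in for a set of clean images
    trigger = np.ones((4, 4))              # a simple white-square trigger

    def model_predict(img):
        """Toy trojaned model: trigger present -> target class 7, else varied output."""
        if np.allclose(img[0:4, 0:4], 1.0, atol=0.6):
            probs = np.full(10, 1e-3)
            probs[7] = 1.0 - 9e-3
        else:
            probs = rng.dirichlet(np.ones(10))
        return probs

    suspect = stamp_trigger(clean[0], trigger)
    print("average entropy, clean input:   ", strip_entropy(clean[1], clean[2:], model_predict))
    print("average entropy, trojaned input:", strip_entropy(suspect, clean[2:], model_predict))
```

On this toy model, blends of the trojaned input keep predicting the target class, so their average entropy stays near zero, while blends of a clean input produce varied predictions with much higher entropy; detection signals of this kind are what the project explores.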

License

MIT License. Copyright © Nikhil Ramakrishnan and Veeki Yadav.

Sources and Citations

Serban, A. C., & Poll, E. (2018). Adversarial examples: A complete characterisation of the phenomenon. arXiv preprint arXiv:1810.01185.

Liu, Y., Ma, S., Aafer, Y., Lee, W. C., Zhai, J., Wang, W., & Zhang, X. (2017). Trojaning attack on neural networks.

Liu, T., Wen, W., & Jin, Y. (2018, April). SIN²: Stealth infection on neural network, a low-cost agile neural trojan attack methodology. In 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST) (pp. 227-230). IEEE.

Baluta, T., Shen, S., Shinde, S., Meel, K. S., & Saxena, P. (2019). Quantitative Verification of Neural Networks And its Security Applications. arXiv preprint arXiv:1906.10395.

Gao, Y., Xu, C., Wang, D., Chen, S., Ranasinghe, D. C., & Nepal, S. (2019). STRIP: A Defence Against Trojan Attacks on Deep Neural Networks. arXiv preprint arXiv:1902.06531.

Chen, B., Carvalho, W., Baracaldo, N., Ludwig, H., Edwards, B., Lee, T., & Srivastava, B. (2018). Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint arXiv:1811.03728.