Research-AI-CyberSecurity

A collection of resources to start off researching AI in CyberSecurity

Contributions welcome

If you want to contribute to this list, by all means do so.

Table of Contents

  • Basic Introduction
  • Articles

Basic Introduction

To help classify and aggregate research, this repo is organized into four main categories:

  1. Exploiting the Training Process or Data Collection
    • Referred to as poisoning: the process of injecting malicious or faulty data into a machine learning model's training data.
  2. Exploiting a Pre-trained Model
    • Refers to crafting a model's input to get a desired output.
  3. ML/AI-Supported Hacking
    • Refers to offensive hacking using AI/ML tools.
  4. ML/AI-Supported Security
    • Refers to the use of AI/ML tools in defensive information security.

Problems in AI and Machine Learning Models

There are numerous ways to exploit a machine learning model. Most can be grouped into the following categories.

Evasion attacks - Hackers feed a deployed model carefully crafted inputs, leading it to incorrect decisions (a sketch follows in the evasion section below).

Poisoning attacks - Hackers provide poisoned data for training sets, corrupting the machine learning algorithm and spoiling the data mining process.
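
As a concrete illustration, below is a minimal sketch of an injection-style poisoning attack, assuming a toy NumPy logistic-regression pipeline; the dataset, injection size, and training loop are all hypothetical stand-ins for a real system:

```python
# A minimal poisoning sketch (illustrative only): the attacker injects
# mislabeled points into the training set so the model learns a bad boundary.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, steps=1000):
    """Tiny logistic regression trained with gradient descent (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == y)

# Two well-separated clusters stand in for a real dataset.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# The attacker injects points that look like class 1 but are labeled class 0.
X_inj = rng.normal(2, 1, (300, 2))
y_inj = np.zeros(300, dtype=int)

w_clean = train_logreg(X, y)
w_poisoned = train_logreg(np.vstack([X, X_inj]), np.concatenate([y, y_inj]))

print("clean model accuracy on clean data:   ", accuracy(w_clean, X, y))
print("poisoned model accuracy on clean data:", accuracy(w_poisoned, X, y))
```

The attacker here controls an unrealistically large slice of the training data, purely so the degradation is visible in a few lines; real poisoning attacks aim for the same effect with far fewer, more carefully placed points.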

Inference attacks - Hackers query a trained model to infer private details of the data it was trained on.
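
One common form is membership inference, where the attacker guesses whether a specific record was part of the training set. A minimal sketch, assuming a toy NumPy model that overfits its small training sample (the loss-threshold rule and all sizes here are illustrative):

```python
# A minimal membership-inference sketch (illustrative): an overfit model
# assigns lower loss to records it was trained on, so "low loss" becomes
# the attacker's guess for "this record was a training member".
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lr=0.5, steps=3000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def per_example_loss(w, X, y):
    p = np.clip(1 / (1 + np.exp(-X @ w)), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def sample(n):
    """Overlapping classes in many dimensions, so a small sample overfits."""
    X = np.vstack([rng.normal(-0.5, 1, (n, 20)), rng.normal(0.5, 1, (n, 20))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_members, y_members = sample(30)   # records the model trained on
X_outside, y_outside = sample(30)   # fresh records from the same distribution
w = train_logreg(X_members, y_members)

# Attack: guess "member" whenever the per-example loss is below the median.
losses = np.concatenate([per_example_loss(w, X_members, y_members),
                         per_example_loss(w, X_outside, y_outside)])
threshold = np.median(losses)
hits = (losses[:60] < threshold).sum() + (losses[60:] >= threshold).sum()
print("membership-inference accuracy:", hits / 120)
```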

Extraction attacks - Hackers steal a functional copy of the machine learning model itself, typically by querying it and training a substitute on its answers.
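
A minimal sketch of the idea, assuming a hypothetical label-only prediction API (victim_api) wrapping a secret linear model; the attacker trains a surrogate purely from query responses:

```python
# A minimal model-extraction sketch (illustrative): the attacker never sees
# the victim's parameters -- only its predictions on chosen queries.
import numpy as np

rng = np.random.default_rng(2)

# Victim: a secret linear model hidden behind a prediction "API".
w_secret = rng.normal(size=3)
def victim_api(X):
    return (X @ w_secret > 0).astype(int)   # returns labels only

# Attacker queries the API on random inputs to build a stolen training set.
X_queries = rng.normal(size=(2000, 3))
y_stolen = victim_api(X_queries)

def train_logreg(X, y, lr=0.5, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_surrogate = train_logreg(X_queries, y_stolen)

# Measure how often the stolen copy agrees with the victim on fresh inputs.
X_test = rng.normal(size=(2000, 3))
agreement = np.mean((X_test @ w_surrogate > 0) == victim_api(X_test))
print("surrogate/victim agreement:", agreement)
```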

What are Evasion attacks?

Evasion attacks essentially consist of crafting an input that makes the model produce an output it should not. Some really good examples are what users are doing with ChatGPT: there are 'strict' guidelines for the model to follow, but with a simple "convincing" prompt the chatbot will break those rules for you.
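
In the classic adversarial-example setting (see the Goodfellow et al. paper under Articles), evasion means nudging an input along the gradient of the model's loss until the prediction flips. A minimal FGSM-style sketch against a toy linear classifier, assuming NumPy (the weights, input, and epsilon are arbitrary stand-ins for a real model):

```python
# A minimal FGSM-style evasion sketch (illustrative). The "model" is a toy
# linear classifier; real attacks apply the same idea to neural networks.
import numpy as np

rng = np.random.default_rng(3)

w = rng.normal(size=10)   # stand-in for a trained model's weights
x = rng.normal(size=10)   # a clean input

def predict(x):
    return int(x @ w > 0)

y = predict(x)            # the model's own label for the clean input

# For logistic loss on a linear model, the loss gradient w.r.t. the input
# is (p - y) * w, so the FGSM step is x + eps * sign(gradient).
p = 1 / (1 + np.exp(-x @ w))
grad_x = (p - y) * w

# Real FGSM uses a fixed small eps; here eps is sized so the demo is
# guaranteed to push x across the decision boundary.
eps = abs(x @ w) / np.abs(w).sum() * 1.1
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```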

Articles

Arxiv.org

  • Explaining and Harnessing Adversarial Examples Source

IEEE.org

  • Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation Source
  • Federated Learning With Differential Privacy: Algorithms and Performance Analysis Source
  • PAST-AI: Physical-Layer Authentication of Satellite Transmitters via Deep Learning Source
  • Privacy-Preserving Deep Learning via Additively Homomorphic Encryption Source
  • Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach Source
  • Occlusion-Aware Human Mesh Model-Based Gait Recognition Source

TrendMicro

  • How Cybercriminals Misuse and Abuse AI and ML Source