AI-Security-Paper

This repository collects papers related to APT attacks, including APT attribution and traceability, APT knowledge-graph construction, APT malicious-sample detection, and APT surveys. Hope these summarized papers are helpful to you~

Common tools:

Security teams (international):

Security teams (China):

Blogs of leading security researchers:

Security datasets:

Classic security surveys:

Other learning resources:


Paper tips for beginners (learned from Prof. Wang of the Chinese Academy of Sciences):

📃 Key papers in the target field

  • "Survey" + keywords -> Google Scholar
  • "papers" + keywords -> DBLP / GitHub / Zhihu
  • paperswithcode (https://paperswithcode.com) -> leaderboards
  • classic papers -> related work & cited by
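The "papers + keywords -> DBLP" step above can be scripted against DBLP's public publication-search API; a minimal sketch that only builds the query URL (endpoint and parameters are assumptions based on DBLP's documented JSON interface; fetching is left to the reader):

```python
from urllib.parse import urlencode

def dblp_search_url(keywords, max_hits=10):
    """Build a query URL for DBLP's publication search API.
    q = query string, format = response format, h = max hits."""
    params = {"q": keywords, "format": "json", "h": max_hits}
    return "https://dblp.org/search/publ/api?" + urlencode(params)

url = dblp_search_url("APT provenance detection")
print(url)
```

Opening the printed URL in a browser (or with any HTTP client) returns the matching publications as JSON.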

🗂 Organizing papers

📃 Reading papers

  • Title + abstract + intro
  • Figures/tables + their descriptions
  • Analyze and summarize: paragraph -> section -> whole paper -> structure/logic

🗂 Writing papers

📬 Submission


PS: The paper notes will be organized and expanded in detail later; busy at the moment....

I. Classified by subject

APT


Knowledge Graph + Security


GNN/DNN/CNN/RNN + Security


Malware Family Clustering and Classification


Malware Analysis


Intrusion Detection System

Last updated: 2021-12-29


Interesting repositories

APT resources

Other resources


AI adversarial examples

Papers on textual adversarial attacks and defenses

(1) Surveys of text attack and defense

  • Analysis Methods in Neural Language Processing: A Survey. Yonatan Belinkov, James Glass. TACL 2019.
  • Towards a Robust Deep Neural Network in Text Domain A Survey. Wenqi Wang, Lina Wang, Benxiao Tang, Run Wang, Aoshuang Ye. 2019.
  • Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey. Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, Chenliang Li. 2019.

(2) Black-box attacks

  • PAWS: Paraphrase Adversaries from Word Scrambling. Yuan Zhang, Jason Baldridge, Luheng He. NAACL-HLT 2019.
  • Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems. Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych. NAACL-HLT 2019.
  • Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models. Tong Niu, Mohit Bansal. CoNLL 2018.
  • Generating Natural Language Adversarial Examples. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang. EMNLP 2018.
  • Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. Max Glockner, Vered Shwartz, Yoav Goldberg. ACL 2018.
  • AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples. Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy. ACL 2018.
  • Semantically Equivalent Adversarial Rules for Debugging NLP Models. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. ACL 2018.
  • Robust Machine Comprehension Models via Adversarial Training. Yicheng Wang, Mohit Bansal. NAACL-HLT 2018.
  • Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer. NAACL-HLT 2018.
  • Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi. IEEE SPW 2018.
    https://arxiv.org/pdf/1801.04354.pdf
  • Synthetic and Natural Noise Both Break Neural Machine Translation. Yonatan Belinkov, Yonatan Bisk. ICLR 2018.
  • Generating Natural Adversarial Examples. Zhengli Zhao, Dheeru Dua, Sameer Singh. ICLR 2018.
  • Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, Percy Liang. EMNLP 2017.
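Most of the black-box attacks above share one loop: query the target model, score candidate text perturbations, and keep whichever edit shifts the prediction most. A minimal sketch of that loop using adjacent-character swaps (in the spirit of the DeepWordBug-style edits in Gao et al.; `toy_classifier` is a hypothetical stand-in, not any paper's model):

```python
def toy_classifier(text):
    """Stand-in black-box target: 'spam confidence' is just the
    fraction of words that appear in a small spam lexicon."""
    spam_words = {"free", "winner", "prize"}
    words = text.lower().split()
    return sum(w in spam_words for w in words) / max(len(words), 1)

def swap_adjacent(word, i):
    """Swap characters i and i+1 (a typical character-level edit)."""
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def greedy_char_attack(text, classifier):
    """Try every single-swap candidate; keep the one that lowers
    the target's score the most (queries the model as a black box)."""
    words = text.split()
    best_text, best_score = text, classifier(text)
    for wi, word in enumerate(words):
        for ci in range(len(word) - 1):
            candidate = words.copy()
            candidate[wi] = swap_adjacent(word, ci)
            cand_text = " ".join(candidate)
            score = classifier(cand_text)
            if score < best_score:
                best_text, best_score = cand_text, score
    return best_text, best_score

adv, score = greedy_char_attack("claim your free prize now", toy_classifier)
# One swapped character breaks a lexicon match and lowers the score.
```

Real attacks replace the toy scoring with model confidences and add constraints so the perturbed text stays readable to humans.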

(3) White-box attacks

  • On Adversarial Examples for Character-Level Neural Machine Translation. Javid Ebrahimi, Daniel Lowd, Dejing Dou. COLING 2018.
  • HotFlip: White-Box Adversarial Examples for Text Classification. Javid Ebrahimi, Anyi Rao, Daniel Lowd, Dejing Dou. ACL 2018.
  • Towards Crafting Text Adversarial Samples. Suranjana Samanta, Sameep Mehta. ECIR 2018.

(4) Both black-box and white-box attacks

  • TEXTBUGGER: Generating Adversarial Text Against Real-world Applications. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, Ting Wang. NDSS 2019.
  • Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension. Matthias Blohm, Glorianna Jagfeld, Ekta Sood, Xiang Yu, Ngoc Thang Vu. CoNLL 2018.
  • Deep Text Classification Can be Fooled. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi. IJCAI 2018.

(5) Adversarial defenses

  • Combating Adversarial Misspellings with Robust Word Recognition. Danish Pruthi, Bhuwan Dhingra, Zachary C. Lipton. ACL 2019.

(6) New evaluation methods for text attack and defense research

  • On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models. Paul Michel, Xian Li, Graham Neubig, Juan Miguel Pino. NAACL-HLT 2019.

Source: https://www.cnblogs.com/zzxb/p/13246967.html

"Network Attack and Defense Practice" coursework


Classic NLP papers

Graph neural networks

  • 01.Node2Vec: Scalable Feature Learning for Networks
  • 02.LINE: Large-scale Information Network Embedding
  • 03.SDNE: Structural Deep Network Embedding
  • 04.metapath2vec: Scalable Representation Learning for Heterogeneous Networks
  • 05.TransE/H/R/D:
    TransE: Translating Embeddings for Modeling Multi-relational Data
    TransH: Knowledge Graph Embedding by Translating on Hyperplanes
    TransR: Learning Entity and Relation Embeddings for Knowledge Graph Completion
    TransD: Knowledge Graph Embedding via Dynamic Mapping Matrix
  • 06.GAT: Graph Attention Networks
  • 07.GraphSAGE: Inductive Representation Learning on Large Graphs
  • 08.GCN: Semi-Supervised Classification with Graph Convolutional Networks
  • 09.GGNN: Gated Graph Sequence Neural Networks
  • 10.MPNN: Neural Message Passing for Quantum Chemistry
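The Trans* family in item 05 shares one idea: a triple (head, relation, tail) is plausible when head + relation ≈ tail in embedding space. A minimal numpy sketch of TransE's L2 scoring, with tiny hand-made 2-d embeddings for illustration (the values are not trained, just chosen to show the geometry):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance ||h + r - t||.
    Higher (closer to 0) means the triple is more plausible."""
    return -np.linalg.norm(h + r - t)

# Illustrative embeddings only; a real model learns these by
# margin-based ranking against corrupted triples.
emb = {
    "beijing":    np.array([1.0, 0.0]),
    "china":      np.array([1.0, 1.0]),
    "paris":      np.array([0.0, 0.0]),
    "capital_of": np.array([0.0, 1.0]),
}

good = transe_score(emb["beijing"], emb["capital_of"], emb["china"])
bad = transe_score(emb["paris"], emb["capital_of"], emb["china"])
# good = 0.0 (beijing + capital_of lands exactly on china); bad = -1.0
```

TransH/R/D keep the same translation idea but project entities onto relation-specific hyperplanes or spaces before scoring.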

NLP close-reading paper list

  • 01.Deep learning: Deep Learning
  • 02.word2vec: Efficient Estimation of Word Representations in Vector Space
  • 03.Sentence/document embeddings: Distributed Representations of Sentences and Documents
  • 04.Machine translation: Neural Machine Translation by Jointly Learning to Align and Translate
  • 05.Transformer: Attention Is All You Need
  • 06.GloVe: Global Vectors for Word Representation
  • 07.Skip-Thought: Skip-Thought Vectors
  • 08.TextCNN: Convolutional Neural Networks for Sentence Classification
  • 09.CNN-based character-level text classification: Character-level Convolutional Networks for Text Classification
  • 10.DCNN: A Convolutional Neural Network for Modelling Sentences
  • 11.FastText: Bag of Tricks for Efficient Text Classification
  • 12.HAN: Hierarchical Attention Networks for Document Classification
  • 13.PCNN+ATT: Neural Relation Extraction with Selective Attention over Instances
  • 14.E2E-CRF: End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF
  • 15.Multi-layer LSTM: Sequence to Sequence Learning with Neural Networks
  • 16.Convolutional seq2seq: Convolutional Sequence to Sequence Learning
  • 17.GNMT: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
  • 18.UMT: Phrase-Based & Neural Unsupervised Machine Translation
  • 19.Pointer-generator networks: Get To The Point: Summarization with Pointer-Generator Networks
  • 20.Memory networks: End-to-End Memory Networks
  • 21.QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
  • 22.Bi-directional attention: Bi-Directional Attention Flow for Machine Comprehension
  • 23.Dialogue: Adversarial Learning for Neural Dialogue Generation
  • 24.(missing)
  • 25.R-GCN: Modeling Relational Data with Graph Convolutional Networks
  • 26.Large-scale language models: Exploring the Limits of Language Modeling
  • 27.Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
  • 28.TCN: An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
  • 29.ELMo: Deep Contextualized Word Representations
  • 30.BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
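Item 05 (Attention Is All You Need) builds everything on scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V; a minimal numpy sketch of that single operation (single head, no masking, toy matrices):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core op of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity logits
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

Q = np.array([[1.0, 0.0]])               # one query
K = np.array([[1.0, 0.0], [0.0, 1.0]])   # two keys
V = np.array([[10.0, 0.0], [0.0, 10.0]])  # their associated values
out, w = scaled_dot_product_attention(Q, K, V)
# The query matches the first key, so the weights favor the first value.
```

The full Transformer stacks this op across multiple heads (each with its own learned Q/K/V projections), then adds residual connections and feed-forward layers.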

NLP Baseline

  • 1.Word2Vec: Efficient Estimation of Word Representations in Vector Space
  • 2.GloVe: Global Vectors for Word Representation
  • 3.C2W: Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation
  • 4.TextCNN: Convolutional Neural Networks for Sentence Classification
  • 5.CharCNN: Character-level Convolutional Networks for Text Classification
  • 6.FastText: Bag of Tricks for Efficient Text Classification
  • 7.Seq2Seq: Sequence to Sequence Learning with Neural Networks
  • 8.Attention NMT: Neural Machine Translation by Jointly Learning to Align and Translate
  • 9.HAN: Hierarchical Attention Networks for Document Classification
  • 10.SGM: Sequence Generation Model for Multi-Label Classification
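Baseline 1 (Word2Vec) is usually trained with skip-gram negative sampling: push a center word's vector toward its true context vector and away from randomly sampled negatives. A minimal pure-Python sketch of the per-pair objective (the vectors here are hand-made, purely illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sgns_loss(center, context, negatives):
    """Skip-gram negative-sampling loss for one (center, context) pair:
    -log sigma(c . ctx) - sum_neg log sigma(-c . neg).
    Lower loss = center aligns with context, not with negatives."""
    loss = -math.log(sigmoid(dot(center, context)))
    for neg in negatives:
        loss -= math.log(sigmoid(-dot(center, neg)))
    return loss

center = [0.5, 0.5]
good_ctx = [0.5, 0.5]   # aligned with the center word
bad_ctx = [-0.5, -0.5]  # pointing away from it
neg = [[-0.1, 0.2]]     # one sampled negative
# A context aligned with the center yields the lower loss.
```

Training simply takes gradient steps on this loss over a corpus of (center, context) pairs, with negatives drawn from a smoothed unigram distribution.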

GAN


II. Classified by source

Conferences & Journals Abroad


Chinese Conferences & Journals


Enterprise Analysis Reports


Timeline

2021-04-19: Writing up papers on malicious code

Pursuing a PhD is hard; keep moving forward~


By: Eastmount, 2022-09-26