Pinned Repositories
Adversarial-Reading
Paper sharing on adversarial machine learning and related work
Adversarial-Training-for-Free
Unofficial implementation of the paper "Adversarial Training for Free!"
DDN-attack
dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
Interpretability-methods-for-self-supervised-and-supervised-models
In recent years, the rapid development of deep neural networks (DNNs) has brought remarkable performance on many complex computer-vision tasks, at the cost of ever more complex models. The more complex the models become, the greater the need to understand them. The primary objective of this repo is to give visual explanations of what both supervised and self-supervised methods actually learn during training. State-of-the-art self-supervised and supervised pre-trained models are investigated, using both ConvNet- and Transformer-based backbone architectures and a variety of visualization techniques.
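To make "visual explanation" concrete, here is a minimal vanilla-gradient saliency sketch. It is not code from this repo: the torchvision ResNet-50 backbone and the helper name `vanilla_gradient_saliency` are illustrative assumptions.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Assumption: a torchvision ResNet-50 stands in for the repo's backbones.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def vanilla_gradient_saliency(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Per-pixel saliency: |d top-class logit / d input|, max over color channels."""
    x = image.unsqueeze(0).requires_grad_(True)        # add batch dim, track gradients
    logits = model(x)                                  # forward pass
    logits[0, logits.argmax()].backward()              # backprop the top-class logit
    return x.grad.abs().squeeze(0).max(dim=0).values   # (H, W) saliency map

# Usage: `image` is a 3xHxW tensor already normalized by `preprocess`.
# saliency = vanilla_gradient_saliency(model, image)
```

Vanilla gradients are the simplest of the visualization techniques mentioned above; CAM-style methods such as Opti-CAM (listed in this profile) instead optimize the saliency map directly.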
NMS
Accelerating MOEA/D with the Nelder-Mead method
OptiCAM
Code for the paper "Opti-CAM: Optimizing saliency maps for interpretability"
SmoothAdversarialExamples
Smooth Adversarial Examples
Tree-structured-decomposition-and-adaptation-in-moea-d
Tree-structured decomposition and adaptation in MOEA/D
walking-on-the-edge-fast-low-distortion-adversarial-examples
Walking on the Edge: Fast, Low-Distortion Adversarial Examples
hanwei0912's Repositories
hanwei0912/Adversarial-Reading
Paper sharing on adversarial machine learning and related work
hanwei0912/walking-on-the-edge-fast-low-distortion-adversarial-examples
Walking on the Edge: Fast, Low-Distortion Adversarial Examples
hanwei0912/SmoothAdversarialExamples
Smooth Adversarial Examples
hanwei0912/OptiCAM
Code for the paper "Opti-CAM: Optimizing saliency maps for interpretability"
hanwei0912/DDN-attack
hanwei0912/Interpretability-methods-for-self-supervised-and-supervised-models
In recent years, the rapid development of deep neural networks (DNNs) has brought remarkable performance on many complex computer-vision tasks, at the cost of ever more complex models. The more complex the models become, the greater the need to understand them. The primary objective of this repo is to give visual explanations of what both supervised and self-supervised methods actually learn during training. State-of-the-art self-supervised and supervised pre-trained models are investigated, using both ConvNet- and Transformer-based backbone architectures and a variety of visualization techniques.
hanwei0912/NMS
Accelerating MOEA/D with the Nelder-Mead method
hanwei0912/Tree-structured-decomposition-and-adaptation-in-moea-d
Tree-structured decomposition and adaptation in MOEA/D
hanwei0912/Adversarial-Training-for-Free
Unofficial implementation of the paper "Adversarial Training for Free!"
hanwei0912/dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
hanwei0912/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, Keras, …
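A minimal usage sketch following the PyTorch quick-start pattern from foolbox 3.x's documentation; the ResNet-18 placeholder model and the 8/255 perturbation budget are assumptions here, not defaults of the library.

```python
import foolbox as fb
from torchvision.models import resnet18, ResNet18_Weights

# Assumption: a torchvision ResNet-18 stands in for any image classifier.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A small batch of labeled ImageNet samples bundled with foolbox.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

attack = fb.attacks.LinfPGD()  # projected gradient descent under an L-infinity bound
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)
print("attack success rate:", is_adv.float().mean().item())
```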
hanwei0912/hanwei0912.github.io
Hanwei | Homepage
hanwei0912/jMetalPy
A framework for single/multi-objective optimization with metaheuristics
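As a sketch of how the framework is driven, here is the NSGA-II quick-start pattern from jMetalPy's documentation applied to the ZDT1 benchmark; import paths and keyword names have shifted between jMetalPy releases, so treat the exact signatures as assumptions.

```python
from jmetal.algorithm.multiobjective.nsgaii import NSGAII
from jmetal.operator import PolynomialMutation, SBXCrossover
from jmetal.problem import ZDT1
from jmetal.util.termination_criterion import StoppingByEvaluations

problem = ZDT1()  # classic bi-objective benchmark; 30 decision variables by default

algorithm = NSGAII(
    problem=problem,
    population_size=100,
    offspring_population_size=100,
    mutation=PolynomialMutation(probability=1.0 / 30, distribution_index=20),
    crossover=SBXCrossover(probability=1.0, distribution_index=20),
    termination_criterion=StoppingByEvaluations(max_evaluations=25000),
)

algorithm.run()                 # iterate until the stopping criterion is met
front = algorithm.get_result()  # the non-dominated solutions found
print(f"{len(front)} solutions on the approximated Pareto front")
```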
hanwei0912/knn-defense
Defending Against Adversarial Examples with K-Nearest Neighbor
hanwei0912/MathSTIC-UBL
Doctoral Thesis Class for the MathSTIC Doctoral School / Université Bretagne Loire
hanwei0912/Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
hanwei0912/zutils