## Pinned Repositories
- **GAKer**: (ECCV 2024) Any Target Can be Offense: Adversarial Example Generation via Generalized Latent Infection.
- **Natural-Color-Fool**: The official implementation of [Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks (NeurIPS'22)](https://arxiv.org/abs/2210.02041).
- **Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking**: A PyTorch implementation of the CVPR 2020 paper "Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking".
- **Adversarial_Attacks_and_Defense_NeurIPS2022**: A list of NeurIPS 2022 papers on adversarial attack and defense / AI security.
- **adversarial_image_defenses**: Countering adversarial images using input transformations.
- **ant-design-pro**: 👨🏻‍💻👩🏻‍💻 Use Ant Design like a Pro!
- **backdoor-learning-resources**: A list of backdoor learning resources.
- **ICML2024-paperlist**: Summaries of ICML 2024 papers.
- **MMDNN_simple_example**: A simple example of converting a TensorFlow model to a PyTorch model with MMdnn.
- **tf_to_pytorch_model**: Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks.
## ylhz's Repositories
- **ylhz/tf_to_pytorch_model**: Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks.
- **ylhz/Adversarial_Attacks_and_Defense_NeurIPS2022**: A list of NeurIPS 2022 papers on adversarial attack and defense / AI security.
- **ylhz/ICML2024-paperlist**: Summaries of ICML 2024 papers.
- **ylhz/MMDNN_simple_example**: A simple example of converting a TensorFlow model to a PyTorch model with MMdnn.
- **ylhz/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking**: A PyTorch implementation of the CVPR 2020 paper "Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking".
- **ylhz/ant-design-pro**: 👨🏻‍💻👩🏻‍💻 Use Ant Design like a Pro!
- **ylhz/backdoor-learning-resources**: A list of backdoor learning resources.
- **ylhz/benchmark_results**: A list of visual tracking papers.
- **ylhz/carla**: An open-source simulator for autonomous driving research.
- **ylhz/CompilerGym**: A reinforcement learning toolkit for compiler optimizations.
- **ylhz/deep-learning-for-image-processing**: Deep learning for image processing, including classification, object detection, etc.
- **ylhz/DRN**: Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution.
- **ylhz/easy-scraping-tutorial**: Simple but useful Python web-scraping tutorial code.
- **ylhz/exposure**: Learning infinite-resolution image processing with GAN and RL from unpaired image datasets, using a differentiable photo editing model.
- **ylhz/generators-with-stylegan2**: A series of face generators based on StyleGAN2.
- **ylhz/MMdnn**: MMdnn is a set of tools that helps users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX, and Core ML.
- **ylhz/models**: Models and examples built with TensorFlow.
- **ylhz/Neural-IMage-Assessment**: A PyTorch implementation of Neural IMage Assessment.
- **ylhz/NRP**: Official repository for "A Self-supervised Approach for Adversarial Robustness" (CVPR 2020 Oral).
- **ylhz/paper_search.github.io**: 🔍
- **ylhz/Patch-wise-iterative-attack**: Patch-wise iterative attack (accepted at ECCV 2020) to improve the transferability of adversarial examples.
- **ylhz/PatchAttack**
- **ylhz/PDFToExcel**: The companion repository for the article https://tomassetti.me/how-to-convert-a-pdf-to-excel/
- **ylhz/PyTorch-Course**: JULYEDU PyTorch course.
- **ylhz/releasing-research-code**: Tips for releasing research code in machine learning (with official NeurIPS 2020 recommendations).
- **ylhz/SimulatorAttack**: The official implementation of the CVPR 2021 paper "Simulating Unknown Target Models for Query-Efficient Black-box Attacks".
- **ylhz/smoothing**: Provable adversarial robustness at ImageNet scale.
- **ylhz/Swin-Transformer-Semantic-Segmentation**: A fine-tuned version of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" for semantic segmentation.
- **ylhz/Visual-Adversarial-Examples-Jailbreak-Large-Language-Models**: Repository for the preprint "Visual Adversarial Examples Jailbreak Large Language Models".
- **ylhz/VT**: Enhancing the Transferability of Adversarial Attacks through Variance Tuning.
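
Several of the repositories above (MMDNN_simple_example, tf_to_pytorch_model, MMdnn) revolve around converting a TensorFlow checkpoint into a PyTorch model with MMdnn. As a rough, non-authoritative sketch of that workflow, the checkpoint file names and output node name below are placeholders, and the exact flags should be checked against the MMdnn documentation:

```shell
# Install MMdnn (assumes a Python environment with TensorFlow and PyTorch available).
pip install mmdnn

# One-step conversion with mmconvert: TensorFlow checkpoint -> PyTorch model.
# "resnet_v2.ckpt.meta" / "resnet_v2.ckpt" and "MMdnn_Output" are placeholders,
# not files or node names taken from these repositories.
mmconvert --srcFramework tensorflow \
          --inputNetwork resnet_v2.ckpt.meta \
          --inputWeight resnet_v2.ckpt \
          --dstNodeName MMdnn_Output \
          --dstFramework pytorch \
          --outputModel converted_model.pth
```

The resulting `.pth` file can then be loaded on the PyTorch side, which is what makes white-box adversarial attacks against originally TensorFlow-trained models convenient.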