dayelang618's Stars
groepl/Obsidian-Zettelkasten-Starter-Kit
A Starter Kit for Obsidian with all essential elements to build up your own Zettelkasten system.
liguodongiot/llm-action
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and deploying LLM applications in production).
CSIPlab/context-aware-attacks
Implementation of AAAI 2022 Paper: Context-Aware Transfer Attacks for Object Detection
WZMIAOMIAO/deep-learning-for-image-processing
Deep learning for image processing, including classification, object detection, etc.
mrdbourke/pytorch-deep-learning
Materials for the Learn PyTorch for Deep Learning: Zero to Mastery course.
Harry24k/CW-pytorch
A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks"
wuhanstudio/whitebox-adversarial-toolbox
Real-time White-Box attacks against Object Detection.
LiangSiyuan21/Adversarial-Attacks-for-Image-and-Video-Object-Detection
An implementation of the IJCAI-19 paper "Transferable Adversarial Attacks for Image and Video Object Detection"
idrl-lab/Adversarial-Attacks-on-Object-Detectors-Paperlist
A paper list of adversarial attacks on object detection
wuhanstudio/adversarial-detection
Adversarial Detection vs. Object Detection.
TranquilRock/Pytorch-Adversarial-Object-Detection-Toolkit
Composes an image with adversarial data such that pretrained detection models misbehave.
NeuralSec/Daedalus-attack
The code of our paper "Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples", in TensorFlow.
veralauee/DPatch
An adversarial attack on object detectors
omidmnezami/pick-object-attack
Type-Specific Adversarial Attack for Object Detection
git-disl/TOG
Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world, mission-critical systems. While DNN-powered object detection systems create many life-enriching opportunities, they also open doors for misuse and abuse. This project presents a suite of adversarial objectness gradient attacks, coined TOG, which can cause state-of-the-art deep object detection networks to suffer from untargeted random attacks or even targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Beyond tailoring an adversarial perturbation to each input image, we further demonstrate TOG as a universal attack, which trains a single adversarial perturbation that generalizes to unseen inputs with negligible attack-time cost. We also apply TOG as an adversarial patch attack, a form of physical attack, showing its ability to optimize a visually confined patch filled with malicious patterns that deceives well-trained object detectors into misbehaving purposefully.
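For intuition only (this is not the TOG implementation), a single untargeted objectness-gradient step against a generic torchvision-style detector might look like the sketch below; the detector interface, loss aggregation, and epsilon are assumptions.

```python
import torch

def untargeted_detection_step(detector, images, targets, eps=8 / 255):
    """One FGSM-style step that increases a detector's training loss.

    Assumes a torchvision-style detection model that returns a dict of
    losses when called with (list_of_images, targets) in train mode.
    """
    images = images.clone().detach().requires_grad_(True)
    detector.train()                      # loss dict is only returned in train mode
    loss_dict = detector(list(images), targets)
    loss = sum(loss_dict.values())        # aggregate classification/box/objectness losses
    loss.backward()
    adv_images = images + eps * images.grad.sign()   # ascend the detection loss
    return adv_images.clamp(0, 1).detach()
```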
lyhue1991/eat_pytorch_in_20_days
Pytorch🍊🍉 is delicious, just eat it! 😋😋
bubbliiiing/yolov8-pytorch
This is a YOLOv8 PyTorch repository that can be used to train on your own dataset.
centerforaisafety/HarmBench
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
RobustBench/robustbench
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
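For reference, loading a benchmarked model from the model zoo typically looks like the sketch below; the model name is just an example leaderboard entry, and the exact call is an assumption if the API has changed.

```python
from robustbench.utils import load_model

# Load a Linf-robust CIFAR-10 entry from the RobustBench model zoo
# (the model name is an example; pick any leaderboard entry).
model = load_model(model_name="Carmon2019Unlabeled",
                   dataset="cifar10",
                   threat_model="Linf")
model.eval()
```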
Harry24k/adversarial-attacks-pytorch
PyTorch implementation of adversarial attacks [torchattacks]
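A minimal usage sketch, assuming `model` is a standard PyTorch image classifier with inputs in [0, 1]; the PGD hyperparameters are illustrative.

```python
import torchattacks

# `model`, `images`, and `labels` are assumed to exist:
# a classifier, a batch of images in [0, 1], and their true labels.
atk = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)
adv_images = atk(images, labels)   # returns the adversarial batch as a tensor
```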
obss/sahi
Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots
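A minimal sliced-inference sketch, assuming a YOLOv8 checkpoint; the model path, image path, and slice sizes are placeholders.

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap an existing detector checkpoint (model_type and model_path are placeholders).
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n.pt",
    confidence_threshold=0.3,
    device="cpu",
)

# Tile a large image, run inference per slice, and merge the predictions.
result = get_sliced_prediction(
    "demo.jpg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```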
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
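A minimal evasion sketch with ART, assuming an existing PyTorch classifier on 32x32 RGB inputs; `model`, `loss_fn`, `x_test`, and the epsilon are placeholders.

```python
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap an existing PyTorch model (`model` and `loss_fn` are assumed to exist).
classifier = PyTorchClassifier(
    model=model,
    loss=loss_fn,
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft FGSM adversarial examples from a NumPy batch `x_test`.
attack = FastGradientMethod(estimator=classifier, eps=0.03)
x_adv = attack.generate(x=x_test)
```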
spmallick/learnopencv
Learn OpenCV : C++ and Python Examples
open-mmlab/mmdetection
OpenMMLab Detection Toolbox and Benchmark
dhm2013724/yolov2_xilinx_fpga
A demo for accelerating YOLOv2 on Xilinx FPGAs (PYNQ/ZedBoard)
jgoeders/dac_sdc_2021_designs
PKUFlyingPig/cs-self-learning
A self-study guide to computer science
gigwegbe/tinyml-papers-and-projects
This is a list of interesting papers and projects about TinyML.
analogdevicesinc/hdl
HDL libraries and projects
dgschwend/zynqnet
Master Thesis "ZynqNet: An FPGA-Accelerated Embedded Convolutional Neural Network"