Pinned Repositories
awesome-LLM-game-agent-papers
A Survey on Large Language Model-Based Game Agents
awesome_LLM-harmful-fine-tuning-papers
A survey on harmful fine-tuning attacks for large language models
BERT4ETH
BERT4ETH: A Pre-trained Transformer for Ethereum Fraud Detection (WWW23)
CPL_attack
DataPoisoning_FL
Code for Data Poisoning Attacks Against Federated Learning Systems
EllipticPlusPlus
Elliptic++ Dataset: A Graph Network of Bitcoin Blockchain Transactions and Wallet Addresses
GPTLens
Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives (TPS23)
PokeLLMon
scale-fl
Code for ScaleFL
TOG
Real-time object detection is a key application of deep neural networks (DNNs) in real-world mission-critical systems. While DNN-powered object detection enables many valuable applications, it also opens doors for misuse and abuse. This project presents TOG, a suite of adversarial objectness gradient attacks that can cause state-of-the-art deep object detection networks to suffer untargeted random attacks or targeted attacks with three types of specificity: (1) object-vanishing, (2) object-fabrication, and (3) object-mislabeling. Beyond tailoring an adversarial perturbation to each input image, we demonstrate TOG as a universal attack, training a single adversarial perturbation that generalizes to unseen inputs with negligible attack-time cost. We also apply TOG as an adversarial patch attack, a form of physical attack, showing its ability to optimize a visually confined patch filled with malicious patterns that deceives well-trained object detectors into misbehaving purposefully.
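To make the attack family concrete, below is a minimal, hypothetical sketch of an untargeted, gradient-ascent attack in the spirit of TOG. It is not the TOG codebase: the real project attacks full detectors such as YOLO and SSD, whereas here a toy one-layer "detector" (`ToyDetector`), the function `tog_untargeted`, and the budget parameters `eps`, `alpha`, and `steps` are all invented for illustration. The sketch maximizes the detector's objectness loss with signed gradient steps under an L-infinity budget.

```python
# Hypothetical sketch of an untargeted, TOG-style gradient attack.
# ToyDetector is a stand-in for a real detection network: it predicts
# a grid of objectness logits from an RGB image.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(x)  # (B, 1, H, W) objectness logits

def tog_untargeted(model, image, target, eps=8/255, alpha=2/255, steps=10):
    """Maximize the detection loss under an L-infinity budget of eps."""
    loss_fn = nn.BCEWithLogitsLoss()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        # Ascend the loss gradient, then project back into the budget.
        adv = adv.detach() + alpha * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

torch.manual_seed(0)
model = ToyDetector()
image = torch.rand(1, 3, 32, 32)
target = (torch.rand(1, 1, 32, 32) > 0.9).float()  # sparse "objects"
adv = tog_untargeted(model, image, target)
print(float((adv - image).abs().max()))  # perturbation stays within eps
```

The object-vanishing, object-fabrication, and object-mislabeling variants differ only in the loss being optimized (e.g., descending toward an empty or attacker-chosen target instead of ascending the true-target loss).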
git-disl's Repositories
git-disl/awesome-LLM-game-agent-papers
A Survey on Large Language Model-Based Game Agents
git-disl/PokeLLMon
git-disl/TOG
git-disl/awesome_LLM-harmful-fine-tuning-papers
A survey on harmful fine-tuning attacks for large language models
git-disl/BERT4ETH
BERT4ETH: A Pre-trained Transformer for Ethereum Fraud Detection (WWW23)
git-disl/EllipticPlusPlus
Elliptic++ Dataset: A Graph Network of Bitcoin Blockchain Transactions and Wallet Addresses
git-disl/GPTLens
Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives (TPS23)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS2024)
git-disl/LRBench
A learning-rate recommendation and benchmarking tool.
git-disl/Lockdown
A backdoor defense for federated learning via isolated subspace training (NeurIPS2023)
git-disl/EnsembleBench
A holistic framework for promoting high-diversity ensemble learning.
git-disl/Lisa
This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS2024)
git-disl/EENet
Code for Adaptive Deep Neural Network Inference Optimization with EENet
git-disl/EMO
Efficient multi-object tracking for edge devices
git-disl/GTDLBench
Benchmarking Deep Learning Frameworks
git-disl/recap
Code for the CVPR 2024 paper "Resource-Efficient Transformer Pruning for Finetuning of Large Models"
git-disl/ZipZap
git-disl/Booster
This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation".
git-disl/STDLens
git-disl/llm-topla
git-disl/Chameleon
git-disl/HeteRobust
git-disl/Fed-alphaCDP
This repo accompanies the paper "Securing Distributed SGD against Gradient Leakage Threats," submitted to IEEE TPDS.
git-disl/ModelCloak
Code for the ICDM 2023 paper "Model Cloaking against Gradient Leakage"
git-disl/GRING
git-disl/LRBenchPlusPlus
git-disl/PFT
git-disl/Atlas
Atlas, a hybrid cloud migration advisor, offers migration recommendations with customizable performance, cost, and availability trade-offs. Also check out our API resource estimation work below.
git-disl/CasTformer
git-disl/GPTLens-Demo
This demo provides a scenario-based walkthrough of how GPTLens utilizes LLMs to examine smart contract code and detect vulnerabilities.
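The GPTLens-style workflow can be pictured as a two-stage pipeline: one LLM pass proposes candidate vulnerabilities in the contract, and a second pass scores each finding so implausible ones can be filtered out. The sketch below is hypothetical and not the GPTLens code: `llm()` is a canned stub standing in for a real model API, and `audit_contract`, the example contract, and the 0-10 scoring scale are invented for illustration.

```python
# Hypothetical two-stage audit-then-rank pipeline. llm() is a stub that
# returns canned answers; a real system would call a hosted LLM here.
def llm(prompt: str) -> str:
    if "audit" in prompt.lower():
        # Auditor stage: propose one finding per line as "type: reason".
        return "reentrancy: external call before state update in withdraw()"
    return "8"  # Critic stage: a 0-10 plausibility score for a finding

def audit_contract(source: str) -> list[dict]:
    findings = []
    raw = llm(f"Audit this Solidity contract for vulnerabilities:\n{source}")
    for line in raw.splitlines():
        name, _, reason = line.partition(":")
        score = int(llm(f"Score 0-10 how plausible this finding is: {line}"))
        findings.append({"type": name.strip(),
                         "reason": reason.strip(),
                         "score": score})
    # Rank findings by the critic's score, most plausible first.
    return sorted(findings, key=lambda f: f["score"], reverse=True)

contract = """
contract Vault {
    mapping(address => uint) balances;
    function withdraw() public {
        (bool ok,) = msg.sender.call{value: balances[msg.sender]}("");
        require(ok);
        balances[msg.sender] = 0;  // state update after external call
    }
}
"""
findings = audit_contract(contract)
print(findings[0]["type"])  # -> reentrancy
```

Separating generation from scoring lets the pipeline cast a wide net in the first stage and rely on the second stage to suppress false positives.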