13o-bbr-bbq
I'm an engineer, machine learning hacker, and CISSP. Speaker at Black Hat Arsenal, DEFCON Demo Labs/AI Village, PyCon, CODE BLUE, and others.
Tokyo, Japan.
13o-bbr-bbq's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
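A minimal sketch of local inference with the gpt4all Python bindings; the model filename here is an assumption (any GGUF model from the GPT4All catalog works) and is downloaded on first use:

```python
from gpt4all import GPT4All

# Model filename is an assumption; swap in any model from the GPT4All catalog.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    # Runs fully locally on CPU/GPU; no API key required.
    print(model.generate("Summarize adversarial ML in one sentence.", max_tokens=64))
```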
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
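A minimal sketch of how DeepSpeed wraps a PyTorch model: the engine takes over the optimizer, backward pass, and step. The toy model and config values below are illustrative assumptions, not a tuned setup:

```python
import torch
import torch.nn as nn
import deepspeed

# Toy model and config are illustrative assumptions.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 2},  # ZeRO stage-2 optimizer-state partitioning
}
# deepspeed.initialize returns an engine that owns optimizer and schedule.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

x = torch.randn(32, 1, 28, 28).to(engine.device)
y = torch.randint(0, 10, (32,)).to(engine.device)
loss = nn.functional.cross_entropy(engine(x), y)
engine.backward(loss)  # replaces loss.backward()
engine.step()          # replaces optimizer.step()
```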
facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
chartist-js/chartist
Simple responsive charts
ultralytics/yolov3
YOLOv3 in PyTorch > ONNX > CoreML > TFLite
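A minimal inference sketch via torch.hub, following the pattern the Ultralytics repos document; the image URL is just an example input:

```python
import torch

# Pulls the pretrained YOLOv3 model from the ultralytics/yolov3 repo via torch.hub.
model = torch.hub.load("ultralytics/yolov3", "yolov3", pretrained=True)
# Example image URL from the Ultralytics docs; any local path or URL works.
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()  # class, confidence, and box summary per detection
```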
qqwweee/keras-yolo3
A Keras implementation of YOLOv3 (TensorFlow backend)
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
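A minimal FGSM evasion sketch with ART; the toy PyTorch model and random inputs are assumptions standing in for a trained classifier and a real dataset:

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy model and random data are stand-ins for a trained classifier and real images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)  # L-inf budget
x_adv = attack.generate(x=x)  # adversarially perturbed copies of x
```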
pyauth/pyotp
Python One-Time Password Library
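A quick sketch of the core pyotp TOTP flow; the account name and issuer are placeholders:

```python
import pyotp

secret = pyotp.random_base32()      # per-user shared secret, stored server-side
totp = pyotp.TOTP(secret)           # RFC 6238 time-based OTP, 30 s steps by default

code = totp.now()                   # current 6-digit code
assert totp.verify(code)            # True within the current time window

# otpauth:// URI for enrolling authenticator apps; name/issuer are placeholders.
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleApp")
```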
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
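A minimal sketch of running a built-in attack recipe, close to TextAttack's documented quickstart; the model and dataset names are assumptions and any sequence classifier works:

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Model/dataset names are assumptions; substitute your own classifier and data.
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(wrapper)          # word-substitution recipe
dataset = HuggingFaceDataset("imdb", split="test")
Attacker(attack, dataset, AttackArgs(num_examples=5)).attack_dataset()
```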
allanzelener/YAD2K
YAD2K: Yet Another Darknet 2 Keras
zzh8829/yolov3-tf2
YOLOv3 implemented in TensorFlow 2.0
Azure/PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
mitre/advmlthreatmatrix
Adversarial Threat Landscape for AI Systems
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs
Azure/counterfit
A CLI that provides a generic automation layer for assessing the security of ML models
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Foundation web repository for the OWASP Top 10 for Large Language Model Applications project
mazen160/shennina
Automating Host Exploitation with AI
EQuiw/2019-scalingattack
Image-Scaling Attacks and Defenses
neulab/RIPPLe
Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020)
bargavj/EvaluatingDPML
This project's goal is to evaluate the privacy leakage of differentially private machine learning models.
Lab41/cyphercat
Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking attacks and defenses.
mitre-atlas/arsenal
CALDERA plugin for adversary emulation of AI-enabled systems
BishopFox/BrokenHill
A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)
trx14/TrojanNet
moohax/Proof-Pudding
Copycat model for Proofpoint
ZANSIN-sec/ZANSIN
locuslab/breaking-poisoned-classifier
Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken"
cybozu/prompt-hardener
Prompt Hardener is a tool designed to evaluate and enhance the security of system prompts for RAG systems.
definitively-not-a-lab-rat/gamin