mistycheney's Stars
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision
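A minimal transcription sketch using the library's documented API (the audio path is a placeholder):

```python
import whisper  # pip install openai-whisper

# Load one of the pretrained checkpoints ("tiny" through "large").
model = whisper.load_model("base")

# Transcribe an audio file; language detection and decoding are handled internally.
# "audio.mp3" is a placeholder path.
result = model.transcribe("audio.mp3")
print(result["text"])
```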
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
linexjlin/GPTs
Leaked prompts of GPTs
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
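A minimal offline-inference sketch (the model name is a placeholder; any Hugging Face causal LM that vLLM supports works):

```python
from vllm import LLM, SamplingParams  # pip install vllm

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Model name is a placeholder.
llm = LLM(model="facebook/opt-125m")

# generate() batches prompts and applies continuous batching under the hood.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```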
Vision-CAIR/MiniGPT-4
Open-source code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)

huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
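A minimal text-to-image sketch with the pipeline API (the checkpoint name and CUDA device are assumptions; any diffusers-format checkpoint works):

```python
import torch
from diffusers import DiffusionPipeline  # pip install diffusers

# Checkpoint name is a placeholder for any diffusers-format pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```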
camenduru/stable-diffusion-webui-colab
Colab notebooks for the Stable Diffusion web UI
BradyFU/Awesome-Multimodal-Large-Language-Models
✨✨ Latest advances in multimodal large language models
facebookresearch/xformers
Hackable and optimized Transformer building blocks with composable construction.
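A sketch of the library's memory-efficient attention op, one of its core building blocks (the shapes and dtype here are illustrative assumptions):

```python
import torch
from xformers.ops import memory_efficient_attention  # pip install xformers

# Shapes: (batch, sequence length, heads, head dim); fp16 on GPU is typical.
q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Computes softmax(QK^T / sqrt(d)) V without materializing the full
# attention matrix, trading memory for recomputation.
out = memory_efficient_attention(q, k, v)
```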
autogluon/autogluon
Fast and Accurate ML in 3 Lines of Code
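The "3 lines" refer to the tabular quickstart; a minimal sketch, assuming a labeled CSV (the file paths and target column name are placeholders):

```python
from autogluon.tabular import TabularDataset, TabularPredictor  # pip install autogluon

# "train.csv" / "test.csv" are placeholder paths; "label" must name the target column.
train_data = TabularDataset("train.csv")
predictor = TabularPredictor(label="label").fit(train_data)
predictions = predictor.predict(TabularDataset("test.csv"))
```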
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART): a Python library for machine learning security, covering evasion, poisoning, extraction, and inference attacks, for red and blue teams
thunlp/OpenPrompt
An open-source framework for prompt learning.
Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing for ML models & LLMs
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
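A minimal attack sketch following the library's quickstart pattern (the checkpoint is one of TextAttack's published fine-tuned models; treat the exact call signature as an assumption against your installed version):

```python
import transformers  # pip install textattack transformers
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.attack_recipes import TextFoolerJin2019

# Any Hugging Face sequence-classification checkpoint works here.
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)

# Wrap the model, then build a published attack recipe against it.
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)
attack = TextFoolerJin2019.build(model_wrapper)

# Attack a single example: (input text, ground-truth label).
result = attack.attack("This movie was wonderful.", 1)
print(result)
```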
verazuo/jailbreak_llms
[CCS'24] A dataset of 15,140 ChatGPT prompts collected from Reddit, Discord, websites, and open-source datasets, including 1,405 jailbreak prompts.
Harry24k/adversarial-attacks-pytorch
PyTorch implementation of adversarial attacks [torchattacks]
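A minimal PGD sketch with the torchattacks API (the model and data here are placeholders for a trained classifier and a real batch):

```python
import torch
import torchvision
import torchattacks  # pip install torchattacks

# Placeholder model and data; in practice use your trained classifier
# and real inputs normalized to [0, 1].
model = torchvision.models.resnet18(weights=None).eval()
images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))

# Classic PGD: L-inf ball of radius 8/255, step size 2/255, 10 iterations.
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = atk(images, labels)
```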
greshake/llm-security
New ways of breaking app-integrated LLMs
Azure/PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open-access automation framework that helps security professionals and machine learning engineers proactively find risks in their generative AI systems.
protectai/ai-exploits
A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities
leondz/garak
LLM vulnerability scanner
lucidrains/flamingo-pytorch
Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch
corca-ai/awesome-llm-security
A curated list of tools, documents, and projects about LLM security.
aws-samples/bedrock-claude-chat
AWS-native chatbot using Bedrock + Claude (+Mistral)
CambioML/pykoi-rlhf-finetuned-transformers
pykoi: Active learning in one unified interface
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022
nateraw/huggingpics
🤗🖼️ HuggingPics: Fine-tune Vision Transformers for anything using images found on the web.
AI-secure/DecodingTrust
A Comprehensive Assessment of Trustworthiness in GPT Models
facebookresearch/iopath
A Python library that provides a common I/O interface across different storage backends.
byerose/Awesome-Foundation-Model-Security
A curated list of papers on trustworthy generative AI, updated daily.
wang-research-lab/roz
Code repo for "Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study"