aisecurity

There are 29 repositories under aisecurity topic.

  • StavC/ComPromptMized

    ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications

    Language:Python19510627
  • Machine_Learning_CTF_Challenges

    alexdevassy/Machine_Learning_CTF_Challenges

    CTF challenges designed and implemented in machine learning applications

    Language:HTML1193326
  • vger

    JosephTLucas/vger

    An interactive CLI application for interacting with authenticated Jupyter instances.

    Language:Python50234
  • AnthenaMatrix/Website-Prompt-Injection

    Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to execute specific prompts that influence AI behavior.

    Language:HTML321110
  • AnthenaMatrix/Image-Prompt-Injection

    Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.

    Language:Python241113
  • AnthenaMatrix/AI-Prompt-Injection-List

    AI/LLM Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts that can be used to evaluate the behavior of AI or LLM systems when exposed to different types of inputs.

  • reds-lab/ASSET

    This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised learning/ self-supervised learning/ transfer learning.

    Language:Python17320
  • shaialon/ai-security-demos

    🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:

    Language:JavaScript16103
  • AnthenaMatrix/ASCII-Art-Prompt-Injection

    ASCII Art Prompt Injection is a novel approach to hacking AI assistants using ASCII art. This project leverages the distracting nature of ASCII art to bypass security measures and inject prompts into large language models, such as GPT-4, leading them to provide unintended or harmful responses.

  • GURPREETKAURJETHRA/LLM-SECURITY

    Securing LLM's Against Top 10 OWASP Large Language Model Vulnerabilities 2024

  • bosch-aisecurity-aishield/Reference-Implementations

    This repo contains reference implementations, tutorials, samples, and documentation for working with Bosch AIShield

    Language:Jupyter Notebook103113
  • balavenkatesh3322/guardrails-demo

    LLM Security Project with Llama Guard

    Language:Python9300
  • AnthenaMatrix/AI-Vulnerability-Assessment-Framework

    The AI Vulnerability Assessment Framework is an open-source checklist designed to guide users through the process of assessing the vulnerability of artificial intelligence (AI) systems to various types of attacks and security threats

  • plll4zzx/Awesome-LLM-Watermark

    A collection list for Large Language Model (LLM) Watermark

    81
  • ngatilio/CertEye

    Zero Trust AI 360

    Language:CSS6302
  • StavC/PromptWares

    A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares

    Language:Jupyter Notebook6202
  • wwa/FIMjector

    FIMjector is an exploit for OpenAI GPT models based on Fill-In-the-Middle (FIM) tokens.

  • AiShieldsOrg/AiShieldsWeb

    AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer

    Language:Python3104
  • ZySec-AI/.github

    ZySec AI: Empowering Security with AI for AI

    20
  • ai-risk-armour/Vulnerable-AI-Chatbot

    An intentionally vulnerable AI chatbot to learn and practice AI Security.

    Language:HTML1103
  • milosilo/RateMyAI

    Prompt Engineering Tool for AI Models with cli prompt or api usage

    Language:Python1101
  • N372unn32/AI-ML-LLM-security-resources

    list of resources for AI/ML/LLM security

  • LAiSR-SK/fool-X-Attack

    This research exploring [Research Idea in a few words]. This work [Specific benefit of research] holds promise for [Positive impact].

    Language:Python0000
  • LAiSR-SK/target-x

    This research explores a novel targeted attack for neural network classifiers. This research has been led by Dr.Samer Khamaiseh with ongoing efforts of Deirdre Jost and Steven Chiacchira

    Language:Python0001
  • waterluy/MA2T

    ✨ Codes for MA2T Adversarial Training.

    Language:Python00
  • wearetyomsmnv/berterpretation

    Bert models interpretation and security checker

    Language:Python0200
  • alvingeo/smartaitower

    The SmartAiTower concept presents a scalable and cost-effective solution for AI model management, particularly focused on Azure OpenAI.

    Language:Python