amnaBooq's Stars
protectai/llm-guard
The Security Toolkit for LLM Interactions
whylabs/langkit
๐ LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). ๐ Extracts signals from prompts & responses, ensuring safety & security. ๐ก๏ธ Features include text quality, relevance metrics, & sentiment analysis. ๐ A comprehensive tool for LLM observability. ๐
sinanw/llm-security-prompt-injection
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
compass-ctf-team/prompt_injection_research
This research proposes defense strategies against prompt injection in large language models to improve their robustness and security against unwanted outputs.
ZiyueWang25/llm-security-challenge
Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the OverTheWire wargames environment, showing the models' surprising ability to do action-oriented cyberexploits in shell environments
AIAnytime/Guardrails-Implementation-in-LLMs
Guardrails implementation in Generative AI powered apps. This app will show you how to put Guardrails in LLMs.
balavenkatesh3322/guardrails-demo
LLM Security Project with Llama Guard
tuhh-softsec/LLM4SecDev
Community-driven effort to facilitate discovery, access and systematization of data related to Large Language Models used for security perposes.
Alexyskoutnev/SecurityGPT
This repository contains code, models, and resources related to our research project on enhancing the classification of Security Bug Reports (SBRs) within software source code using Large Language Models (LLMs). We have developed and fine-tuned LLMs to address the critical task of identifying security vulnerabilities in code.
chris17453/docu-nator
A for documenting python code with AI (LLMs) with guard rails via static analysis
Nathangitlab/JailBreak-Large-Language-Model-With-A-Malicous-System-Role
We present a novel method that can jailbreak large language model with a malicous system role. It releases the potentially unethical or illegal intention of leveraging a large language model, like ChatGPT, to breach the security measures put in place to limit its access and permissions within a controlled environment.
dipes08/ChatSEC
This program uses Large Language Models (LLMs) from Meta and OpenAI to provide answers to any question regarding a companyโs filings made to the Securities Exchange Commission. To run this program, you must have access to WRDS. Huggingface, Meta Llama-2 models. and OpenAI API keys.
SaahasKumarGit/GPT-4U
Hi! GPT - 4U is a modern and security-focused interface for Large Language Models. I've included a host of features that I hope you enjoy. Feel free to use it, share it, modify it, and use it however you want (legally of course ๐). No need to attribute me or even ask for my permission. It's completely free!
SuperAier/DBSW
World-leading database security system based on large language models
BenderScript/owasp_llm_analysis
Large Language Models Security Analsysis
ybdesire/CyberSecurityLLMTest
Test (data/prompt) if large language model (GPT model) has the ability to preform as cyber security expert.
Abhisandy/LLM_Guardrails
Implementatiom of LLM Guardrails
Andy6201/Prompt-Sets-of-Multidimensional-Adversarial-Examples-in-LLMs-for-ESIIP
This is a set of multidimensional adversarial prompts for evaluating the ability of large language models to recognize ethical and security issues.
BenedictusAryo/llm_guardrails_nemo
LLM Guardrails using NEMO Guardrails
isavita/llm-guardrails-experiments
nodite/llm-guard-ts
The Security Toolkit for LLM Interactions (TS version)
rdmusrname/securellms
Safeguarding the learning ecosystem through AI-powered Large Language Models (LLMS) security.
sheshiisree/Q-A-bot
Personalized bot utilizing the capabilities of a Large Language Model (LLM) to engage with your network security documents.