Pinned Repositories
AIRTBench-Code
Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models
burpference
A research project to add some brrrrrr to Burp
conferences
dyana
A sandbox environment designed for loading, running, and profiling a wide range of files, including machine learning models, ELFs, Pickle files, JavaScript, and more
example-agents
Example agents for the Dreadnode platform
parley
Tree of Attacks (TAP) Jailbreaking Implementation
research
General research for Dreadnode
rigging
Lightweight LLM Interaction Framework
robopages
A YAML-based format for describing tools to LLMs, like man pages but for robots!
sdk
Dreadnode Strikes SDK
dreadnode's Repositories
dreadnode/rigging
Lightweight LLM Interaction Framework
dreadnode/dyana
A sandbox environment designed for loading, running, and profiling a wide range of files, including machine learning models, ELFs, Pickle files, JavaScript, and more
dreadnode/burpference
A research project to add some brrrrrr to Burp
dreadnode/parley
Tree of Attacks (TAP) Jailbreaking Implementation
dreadnode/robopages
A YAML-based format for describing tools to LLMs, like man pages but for robots!
dreadnode/AIRTBench-Code
Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models
dreadnode/tensor-man
A utility to inspect, validate, sign, and verify machine learning model files.
dreadnode/robopages-cli
CLI and API server for https://github.com/dreadnode/robopages
dreadnode/research
General research for Dreadnode
dreadnode/marque
Minimal workflows
dreadnode/paperstack
Arxiv + Notion Sync
dreadnode/conferences
dreadnode/example-agents
Example agents for the Dreadnode platform
dreadnode/cli
Dreadnode CLI
dreadnode/sdk
Dreadnode Strikes SDK
dreadnode/defcon_grt_notebook
Quickstart notebook for the DEF CON 32 Generative Red-teaming Challenge
dreadnode/sqladmin
SQLAlchemy Admin for FastAPI and Starlette
dreadnode/TensorRT-LLM
TensorRT-LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines containing state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for creating Python and C++ runtimes that execute those TensorRT engines.
dreadnode/tensorrtllm_backend
The Triton TensorRT-LLM Backend
dreadnode/transformers-neuronx
dreadnode/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
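To illustrate the idea behind robopages ("man pages but for robots"), a YAML tool description for an LLM might look like the sketch below. Every field name here is invented for illustration; it is not the actual robopages schema, which is defined in the dreadnode/robopages repository and served by robopages-cli.

```yaml
# Hypothetical sketch only: field names and layout are illustrative,
# NOT the real robopages schema.
description: Network scanning utility exposed to an LLM agent
functions:
  port_scan:
    description: Scan a host for open TCP ports
    parameters:
      target:
        type: string
        description: Hostname or IP address to scan
      ports:
        type: string
        description: Port range to probe, e.g. "1-1024"
    # Command the agent runtime would execute, with parameters substituted
    cmdline:
      - nmap
      - -p
      - "${ports}"
      - "${target}"
```

The general pattern, which robopages-cli exposes over an API server, is that a runtime translates such descriptions into tool/function-calling definitions for the model and executes the mapped command when the model invokes the tool.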