maxxbw54
Software Engineering Researcher
Assistant Professor at North Carolina State University, United States
maxxbw54's Stars
OpenBMB/ChatDev
Create customized software from a natural-language idea (through LLM-powered multi-agent collaboration)
amazon-science/recode
Releasing code for "ReCode: Robustness Evaluation of Code Generation Models"
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Yogesh31Hasabe/LeetCode-Experiment-with-AI-Extension
LeetCode Experiment with AI Extension
nazmul-me/membership_inference
microsoft/OmniParser
A simple screen parsing tool towards a pure vision-based GUI agent
bboylyg/BackdoorLLM
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
llm-attacks/llm-attacks
Universal and Transferable Attacks on Aligned Language Models
inspire-group/membership-inference-evaluation
Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models
microsoft/coderec_programming_states
Code and Data for: Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming
iamgroot42/mimir
Python package for measuring memorization in LLMs.
acmsigsoft/submission-checker
Checks the PDFs submitted to a conference, e.g., for formatting violations and double-anonymous review violations
penghui-yang/awesome-data-poisoning-and-backdoor-attacks
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
py-why/dowhy
DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions. DoWhy is based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks.
salesforce/causalai
Salesforce CausalAI Library: A Fast and Scalable framework for Causal Analysis of Time Series and Tabular Data
SewoongLab/spectre-defense
Defending Against Backdoor Attacks Using Robust Covariance Estimation
goutham7r/backdoors-for-code
SCLBD/BackdoorBench
yuezunli/ISSBA
Invisible Backdoor Attack with Sample-Specific Triggers
salesforce/CodeTF
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
ALFA-group/adversarial-code-generation
[ICLR 2021] "Generating Adversarial Computer Programs using Optimized Obfuscations" by Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, and Una-May O'Reilly
bigcode-project/bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.
evalplus/evalplus
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024
google-deepmind/codesembench
NWPU-IST/sbrbench
centerforaisafety/Intro_to_ML_Safety
SEEDGuard/SEVAL
Automated evaluation for software engineering (SE) tasks.
NAIST-SE/DevGPT
Research artifact for the DevGPT dataset
Lyz1213/Backdoored_PPLM
THUYimingLi/backdoor-learning-resources
A list of backdoor learning resources