YanghaoZYH's Stars
f/awesome-chatgpt-prompts
A curated collection of ChatGPT prompts for getting better results from ChatGPT.
ggerganov/llama.cpp
LLM inference in C/C++
xtekky/gpt4free
The official gpt4free repository | a collection of powerful language models
jmorganca/ollama
Get up and running with Llama 2, Mistral, and other large language models locally.
ccfddl/ccf-deadlines
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet). If you find it useful, please star this project.
AutumnWhj/ChatGPT-wechat-bot
ChatGPT for WeChat https://github.com/AutumnWhj/ChatGPT-wechat-bot
hadley/r4ds
R for Data Science: a book
microsoft/promptbench
A unified evaluation framework for large language models
stanford-crfm/helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image models in HEIM (https://arxiv.org/abs/2311.04287) and vision-language models in VHELM (https://arxiv.org/abs/2410.07112).
JDAI-CV/FaceX-Zoo
A PyTorch Toolbox for Face Recognition
greshake/llm-security
New ways of breaking app-integrated LLMs
leondz/garak
LLM vulnerability scanner
yfzhang114/Generalization-Causality
Reading notes on various research topics: domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models
Lionelsy/Conference-Accepted-Paper-List
Accepted paper lists for some conferences (including AI, ML, and robotics)
ydyjya/Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the safety implications, challenges, and advancements surrounding these powerful models.
RobustBench/robustbench
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
laiyer-ai/llm-guard
The Security Toolkit for LLM Interactions
chawins/llm-sp
Papers and resources related to the security and privacy of LLMs 🤖
hackernoon/learn
The place to learn about the top technology, programming, web3, business, media, gaming, data science, finance, and cybersecurity stories from around the internet!
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers, updated daily
Daniel-xsy/RoboBEV
RoboBEV: Towards Robust Bird's Eye View Perception under Common Corruption and Domain Shift
ldkong1205/Robo3D
[ICCV 2023] Robo3D: Towards Robust and Reliable 3D Perception against Corruptions
Verified-Intelligence/auto_LiRPA
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
zhuchen03/FreeLB
Adversarial Training for Natural Language Understanding
hyintell/AAGPT
AAGPT is another experimental open-source application showcasing the capabilities of large language models, such as GPT-3.5 and GPT-4.
rod-trent/OpenAISecurity
Scripts and content for working with OpenAI
Zhenyu-LIAO/RMT4ML
MATLAB notebooks for visualizing random matrix theory results and their applications to machine learning
yjhuangcd/local-lipschitz
Official implementation of "Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds" (NeurIPS 2021)
PKU-ML/CFA
LitterQ/ATLD-pytorch
Implementation of "Improving Model Robustness with Latent Distribution Locally and Globally"