JohnnyQAQ
I am a student at the National University of Defense Technology; my research focuses on deep learning, adversarial examples, and related topics.
National University of Defense Technology · China
JohnnyQAQ's Stars
Trustworthy-AI-Group/TransferAttack
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
Harry24k/MAIR
Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023]
Harry24k/adversarial-attacks-pytorch
PyTorch implementations of adversarial attacks (torchattacks).
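The attacks collected in libraries like torchattacks are mostly gradient-based recipes. As a dependency-free illustration (not code from the library), here is a minimal sketch of FGSM on a toy logistic-regression model, where the loss gradient with respect to the input can be written out by hand:

```python
import math

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a toy logistic-regression model.

    Perturbs input x by eps in the sign of the loss gradient w.r.t. x,
    the simplest white-box evasion attack (Goodfellow et al.).
    All values here are illustrative.
    """
    # Model: p = sigmoid(w . x + b); loss = binary cross-entropy.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # For BCE, dL/dx_i = (p - y) * w_i.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Perturbing toward higher loss pushes the logit away from the true label.
adv = fgsm([0.5, 0.5], 1, [1.0, -2.0], 0.0, 0.1)
```

Real attacks differ only in that the gradient comes from autograd on a deep network, and iterative variants (PGD, MI-FGSM) repeat this step with projection.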
ShiArthur03/ShiArthur03
YuYang0901/EPIC
Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022)
ucsb-seclab/BullseyePoison
Bullseye Polytope Clean-Label Poisoning Attack
zhuchen03/ConvexPolytopePosioning
ConvexPolytopePosioning
FlouriteJ/PoisonFrogs
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
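The Poison Frogs attack (like the polytope variants above) crafts a poison that collides with a target in feature space while staying visually close to a benign base image. A minimal sketch of that feature-collision objective, using a fixed linear map as a stand-in for a network's feature extractor — the parameter names and toy optimizer here are illustrative, not from the paper's code:

```python
def feature(W, x):
    """Toy 'feature extractor': a fixed linear map standing in for a
    network's penultimate layer."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def craft_poison(W, target_feat, base, beta=0.1, lr=0.05, steps=500):
    """Feature-collision poisoning in the Poison Frogs style:
    minimize ||f(p) - f(t)||^2 + beta * ||p - b||^2 by gradient descent,
    so p matches the target in feature space but resembles the base."""
    p = list(base)
    for _ in range(steps):
        diff = [a - t for a, t in zip(feature(W, p), target_feat)]
        # Gradient of ||Wp - f(t)||^2 + beta * ||p - b||^2 w.r.t. p.
        grad = [2 * sum(W[i][j] * diff[i] for i in range(len(W)))
                + 2 * beta * (pj - bj)
                for j, (pj, bj) in enumerate(zip(p, base))]
        p = [pj - lr * g for pj, g in zip(p, grad)]
    return p
```

With a real network, the same objective is optimized through the frozen feature extractor; beta trades off stealth (closeness to the base) against attack strength (closeness to the target's features).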
wronnyhuang/metapoison
Craft poisoned data using MetaPoison
Zhou-Zi7/AI-Security-Resources
This GitHub repository collects research papers on AI security from the top four academic conferences.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
nikolajohn/Pattern-Recognition-And-Machine-Learning-
Learning resources for Pattern Recognition and Machine Learning.
zbezj/HEU_KMS_Activator
RVC-Boss/GPT-SoVITS
One minute of voice data is enough to train a good TTS model! (few-shot voice cloning)
KenyonY/openai-forward
🚀 An efficient forwarding service designed for LLMs · OpenAI API reverse proxy
OpenBMB/XAgent
An Autonomous LLM Agent for Complex Task Solving
langgenius/dify
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
QwenLM/Qwen
The official repo of Qwen (通义千问), the chat and pretrained large language models proposed by Alibaba Cloud.
ashishpatel26/LLM-Finetuning
LLM fine-tuning with PEFT.
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
geekan/MetaGPT
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
datawhalechina/self-llm
An open-source LLM "cookbook" (《开源大模型食用指南》): quickly deploy open-source large models in a Linux environment; a deployment tutorial better suited to ** beginners.
LianjiaTech/BELLE
BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue LLM).
0-8-4/miui-auto-tasks
A script that automatically completes Mi Community (Xiaomi) tasks.
ymcui/Chinese-LLaMA-Alpaca-2
Chinese LLaMA-2 & Alpaca-2 large-model project, phase 2, plus 64K ultra-long-context models.
HarderThenHarder/transformers_tasks
⭐️ NLP Algorithms with transformers lib. Supporting Text-Classification, Text-Generation, Information-Extraction, Text-Matching, RLHF, SFT etc.
huggingface/trl
Train transformer language models with reinforcement learning.
InternLM/lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.