Pinned Repositories
AI-for-Keras
Code for the author's "Python AI with Keras" blog series, covering regression neural networks, CNNs, RNNs, LSTMs, and more. Basic reference code; hope it helps.
APIJSON
🚀 A zero-code, hot-reloading, fully automatic ORM library: back-end APIs and docs with zero code, while the front end (client) customizes the data and structure of the returned JSON. 🚀 A JSON Transmission Protocol and an ORM Library for automatically providing APIs and Docs.
awesome-notebooks
+100 awesome Jupyter Notebooks templates, organized by tools, published by the Naas community, to kickstart your data projects in minutes. 😎
cpp20-plus-indepth
This is the repo that contains the source code for Cpp20Plus course
data-engineering-zoomcamp
Code for Data Engineer Zoomcamp course
deep-learning-with-python-notebooks
Jupyter notebooks for the code samples of the book "Deep Learning with Python"
django-rest-framework
Web APIs for Django. 🎸
DSP-algorithm
fluent-python-translate
My earlier translation of this document was lost, which was a great pity, so I plan to spend some time re-translating it and sharing it here.
python-small-tests
Code for various Python projects
zky001's Repositories
zky001/python-small-tests
Code for various Python projects
zky001/flip_glm
Flip test on GLM
zky001/LLM_insight
zky001/med_internLM
A medical multimodal model built on InternLM + LLaVA
zky001/PAIR_glm
zky001/sql_LLM_jailbreak
zky001/TAP_glm
zky001/agentscope
Start building LLM-empowered multi-agent applications in an easier way.
zky001/Awesome-Dify-Workflow
Sharing some useful Dify DSL workflows, good for both personal use and learning.
zky001/Awesome-Jailbreak-on-LLMs
Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, datasets, evaluations, and analyses.
zky001/Awesome-Privacy-Preserving-LLMs
Collection of all the papers talking about/relevant to the topic of privacy-preserving LLMs
zky001/BadChain
Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models (ICLR 2024)
zky001/BIPIA
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
zky001/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
zky001/DeepSeek-R1
zky001/EasyJailbreak
An easy-to-use Python framework to generate adversarial jailbreak prompts.
zky001/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
zky001/garak
LLM vulnerability scanner
zky001/I-FSJ
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
zky001/llm-adaptive-attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]
zky001/LLM-PBE
A toolkit to assess data privacy in LLMs (under development)
zky001/llm.c
LLM training in simple, raw C/CUDA
zky001/llm_dataset_inference
Official Repository for Dataset Inference for LLMs
zky001/LLMs-from-scratch
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
zky001/ModelKinship
Exploring Model Kinship for Merging Large Language Models
zky001/Polyrating
zky001/PromptAttack
An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024)
zky001/Robust-Attack-Detectors-LLM
This work addresses the critical issue of evaluating and improving the robustness of LLM-generated security attack detectors, using a novel approach that integrates RAG and Self-Ranking into the LLM pipeline.
zky001/Tutorial
LLM Tutorial
zky001/xtuner
An efficient, flexible and full-featured toolkit for fine-tuning large models (InternLM, Llama, Baichuan, Qwen, ChatGLM)