Pinned Repositories
self-speculative-decoding
Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding**
CHASe
Code associated with the paper **CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning**
csmath-2019
A mathematics course taught to first-year Ph.D. students in computer science and related areas @ZJU
FedAttack
Source code of FedAttack.
helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).
lisa
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
LLM-Pruner
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.
LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
NoisyFL
ICDE
PIECK
Code associated with the paper **Preventing the Popular Item Embedding Based Attack in Federated Recommendations**, at ICDE 2024
junzhang-zj's Repositories
junzhang-zj/PIECK
Code associated with the paper **Preventing the Popular Item Embedding Based Attack in Federated Recommendations**, at ICDE 2024
junzhang-zj/CHASe
Code associated with the paper **CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning**
junzhang-zj/csmath-2019
A mathematics course taught to first-year Ph.D. students in computer science and related areas @ZJU
junzhang-zj/FedAttack
Source code of FedAttack.
junzhang-zj/helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).
junzhang-zj/lisa
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
junzhang-zj/LLM-Pruner
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.
junzhang-zj/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
junzhang-zj/NoisyFL
ICDE
junzhang-zj/TAaMR
Targeted Adversarial Attack against Multimedia Recommender Systems (TAaMR) at DSML2020
junzhang-zj/TriForce
TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding
junzhang-zj/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs