Pinned Repositories
Alibaba Global Scheduling Algorithm Competition
2019_SCS_SoftwareTest
6-DOF-Inertial-Odometry
IMU-Based 6-DOF Odometry
AdaptBatch
Basic code for adaptive batch in pytorch
adaptdl
Resource-adaptive cluster scheduler for deep learning training.
AI-Job-Notes
Job-hunting guide for AI algorithm positions (covering preparation strategies, coding-interview practice, internal referrals, a list of AI companies, and more)
ChemicalProductKnowledgeGraph
smart-home
SomeMatlabPictureTemplate
WeatherCrawler
KxuanZhang's Repositories
KxuanZhang/adaptdl
Resource-adaptive cluster scheduler for deep learning training.
KxuanZhang/Async-HFL
[IoTDI 2023/ML4IoT 2023] Async-HFL: Efficient and Robust Asynchronous Federated Learning in Hierarchical IoT Networks
KxuanZhang/BudgetCL
Code for CVPR paper: Computationally Budgeted Continual Learning: What Does Matter?
KxuanZhang/CarM
Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning
KxuanZhang/ClusterFL
Repo for MobiSys 2021 paper: "ClusterFL: A Similarity-Aware Federated Learning System for Human Activity Recognition".
KxuanZhang/continual-learning
PyTorch implementation of various methods for continual learning (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.
KxuanZhang/continual_learning_papers
Relevant papers in Continual Learning
KxuanZhang/CURE-TSR
CURE-TSR: Challenging Unreal and Real Environments for Traffic Sign Recognition
KxuanZhang/DeepLearningSystem
Deep Learning System core principles introduction.
KxuanZhang/ElasticTrainer
Code for paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23)
KxuanZhang/FedPCL
[NeurIPS'22 Spotlight] Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
KxuanZhang/FedScale
FedScale is a scalable and extensible open-source federated learning (FL) platform.
KxuanZhang/GreenTrainer
Code for paper "Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation" (ICLR'24)
KxuanZhang/Husformer
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
KxuanZhang/LoCon
LoRA for convolution layer
KxuanZhang/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
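The repository above implements LoRA, which freezes a pre-trained weight matrix W and trains only a low-rank update B·A. A minimal dependency-free sketch of that idea (illustrative only, not the loralib API; all function names here are made up):

```python
# LoRA idea in miniature: the effective weight is W + (alpha / r) * B @ A,
# where A is (r x in), B is (out x r), and r is much smaller than in/out.
# Matrices are plain lists of rows to keep the sketch dependency-free.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def matvec(M, x):
    """Multiply a matrix by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    # Frozen path W @ x plus the scaled low-rank path B @ (A @ x).
    base = matvec(W, x)
    low_rank = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * l for b, l in zip(base, low_rank)]

def merged_weight(W, A, B, alpha=1.0, r=1):
    # For inference the adapter can be folded into W: W + (alpha / r) * B @ A,
    # so the adapted layer runs at the same cost as the original one.
    BA = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]
```

For example, with a rank-1 adapter on a 2x2 identity weight, `lora_forward(W, A, B, x)` matches `matvec(merged_weight(W, A, B), x)`, which is the point of the merge step.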
KxuanZhang/LUMP
Code for the paper "Representational Continuity for Unsupervised Continual Learning" (ICLR 22)
KxuanZhang/mammoth
An Extendible (General) Continual Learning Framework based on PyTorch - official codebase of Dark Experience for General Continual Learning
KxuanZhang/Miro
[ACM MobiCom '23] Cost-effective On-device Continual Learning over Memory Hierarchy with Miro
KxuanZhang/NNCSL-ICCV2023
Official implementation of NNCSL
KxuanZhang/SCALE
[CLVision 2023] SCALE: Online Self-Supervised Lifelong Learning without Prior Knowledge
KxuanZhang/Semi-supervised-learning
A Unified Semi-Supervised Learning Codebase (NeurIPS'22)
KxuanZhang/SHADE
SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training
KxuanZhang/SlimFit
KxuanZhang/SLNetCode
KxuanZhang/SparCL
SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22
KxuanZhang/SupContrast
PyTorch implementation of "Supervised Contrastive Learning" (and SimCLR incidentally)
KxuanZhang/SynMotion
KxuanZhang/TFC-pretraining
Self-supervised contrastive learning for time series via time-frequency consistency
KxuanZhang/tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]