Pinned Repositories
A-Federated-Learning-Method-of-Variational-Autoencoders-for-Collaborative-Filtering-
aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
FL_VAEs
Federated Learning based Variational Autoencoder for Collaborative Filtering
folding_quantization
huins_drone
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
pytorch-OpCounter
Count the FLOPs of your PyTorch model. (A usage sketch appears after this list.)
sort
Simple, online, and realtime tracking of multiple objects in a video sequence.
torch2trt
An easy-to-use PyTorch to TensorRT converter. (A usage sketch appears after this list.)
yj4889-Optimized-Quantization-for-Convolutional-Deep-Neural-Networks-in-Federated-Learning
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when a central server cannot collect data. However, the absence of data on the central server also means that data-driven deep network compression is impossible. Deep network compression matters because it enables inference even on devices with limited capacity. In this paper, we propose a new quantization method that significantly reduces the FLOPs (floating-point operations) of deep networks in federated learning without leaking user data. The quantization parameters are trained by the ordinary task loss and are updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL jointly learns deep networks and their quantization while maintaining security in a distributed environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks, showing that it works on the most representative architectures. Surprisingly, 4-bit OQFL preserves the test accuracy of conventional 32-bit federated learning.
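The repository itself is the authoritative implementation; as a hedged illustration of the idea in the abstract, the sketch below fake-quantizes weights with a learnable step size (in the style of learned step-size quantization), so the quantization parameter is trained by the task loss and averaged together with the weights. All names (`LearnableFakeQuant`, `QuantLinear`, `fedavg`) and the 4-bit default are illustrative assumptions, not taken from the repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def round_ste(x):
    # Straight-through estimator: round in the forward pass,
    # pass the gradient through unchanged in the backward pass.
    return x + (x.round() - x).detach()

class LearnableFakeQuant(nn.Module):
    """Fake-quantizer with a learnable step size, trained by the task loss."""
    def __init__(self, bits=4):
        super().__init__()
        self.qmin = -(2 ** (bits - 1))       # e.g. -8 for 4-bit signed
        self.qmax = 2 ** (bits - 1) - 1      # e.g. +7 for 4-bit signed
        self.step = nn.Parameter(torch.tensor(0.01))  # quantization parameter

    def forward(self, w):
        q = torch.clamp(round_ste(w / self.step), self.qmin, self.qmax)
        return q * self.step

class QuantLinear(nn.Module):
    """Linear layer whose weights pass through the learnable quantizer,
    so weights and quantization parameters update in the same backward pass."""
    def __init__(self, in_features, out_features, bits=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.quant = LearnableFakeQuant(bits)

    def forward(self, x):
        return F.linear(x, self.quant(self.weight), self.bias)

def fedavg(client_states):
    """FedAvg aggregation: average every entry of the client state_dicts,
    weights and quantizer step sizes alike."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0) for k in keys}
```

Because the straight-through estimator makes rounding effectively differentiable, a standard optimizer updates `step` and `weight` in one backward pass, which matches the simultaneous update the abstract describes; the server never sees client data, only the averaged parameters.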
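For the pinned pytorch-OpCounter repository above: it is distributed as the `thop` package, and its documented entry point is `thop.profile`. A minimal usage sketch (the ResNet-18 model and input shape are arbitrary examples):

```python
import torch
from torchvision.models import resnet18
from thop import profile, clever_format

model = resnet18()
dummy = torch.randn(1, 3, 224, 224)   # example input; shape is arbitrary

# profile() returns the operation count (MACs) and the parameter count.
macs, params = profile(model, inputs=(dummy,))
macs, params = clever_format([macs, params], "%.3f")
print(macs, params)
```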
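For the pinned torch2trt repository above: it converts a PyTorch module into a TensorRT-backed one with a single call. A minimal sketch following the project's documented usage; it assumes a CUDA device with TensorRT installed:

```python
import torch
from torchvision.models import resnet18
from torch2trt import torch2trt

model = resnet18().eval().cuda()
x = torch.ones(1, 3, 224, 224).cuda()  # example input used to trace the network

# Convert to a TensorRT-backed module; the call follows the project README.
model_trt = torch2trt(model, [x])

# The converted module is called like the original PyTorch model.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))  # numerical difference should be small
```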