ydwang-good's Stars
hgautam750/SVHN-Resnet50
TropComplique/image-classification-caltech-256
Exploring CNNs and model quantization on Caltech-256 dataset
ermongroup/necst
Neural Joint Source-Channel Coding
pytorch/vision
Datasets, Transforms and Models specific to Computer Vision
shaojiawei07/BottleNetPlusPlus
Code for the paper: "BottleNet++: An End-to-End Approach for Feature Compression in Device-Edge Co-Inference Systems"
yoshitomo-matsubara/head-network-distillation
[IEEE Access] "Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-constrained Edge Computing Systems" and [ACM MobiCom HotEdgeVideo 2019] "Distilled Split Deep Neural Networks for Edge-assisted Real-time Systems"
qub-blesson/ScissionTL
A benchmarking tool for distributing Deep Neural Networks (DNNs) efficiently, using transfer layers to reduce the data transferred between the distributed DNN partitions
qub-blesson/Scission
A Tool for Maximising Performance of Deep Neural Networks in Edge Computing
hou-yz/pytorch-pruning-2step
2-stage pruning to favor distributed inference: the local device computes the first half of the model and uploads the features for further processing on stronger devices or in the cloud.
wyc941012/Edge-Intelligence
With the rapid development of mobile cloud computing and edge computing, and the broad adoption of artificial intelligence, the concept of Edge Intelligence has emerged. Deep neural networks (e.g., CNNs) are widely used in mobile intelligent applications, but the limited storage and compute resources of mobile devices cannot meet the demands of DNN computation. Network compression and acceleration techniques such as pruning, quantization, and filter decomposition can speed up inference, but they are complex to apply in practice and may degrade model accuracy. In mobile cloud computing and edge computing, task offloading can break through the resource limits of mobile terminals, reducing the computational load on mobile devices and improving task processing efficiency; optimizing deep neural networks through task offloading has therefore become a new direction in edge intelligence research. The paper "Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge" proposed the idea of collaborative inference: the deep neural network is partitioned so that some layers are computed on the mobile device and the remaining layers in the cloud, with the partition point chosen dynamically based on the hardware platform, wireless network conditions, and server load to reduce latency and energy consumption. This project collects related papers on edge intelligence and provides a Python experimental platform for collaborative CNN inference. Keywords: Edge Intelligence, Computing Offloading, CNN Partition, Collaborative Inference, Mobile Cloud Computing
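The partition-point selection described above can be sketched in a few lines. This is a minimal, self-contained illustration, not the project's actual code: the per-layer latency and output-size numbers below are hypothetical stand-ins for the hardware and network profiles a real system (like Neurosurgeon) would measure.

```python
# Sketch of Neurosurgeon-style DNN partitioning: pick the layer at which
# to split the network between device and cloud so that end-to-end
# latency (device compute + upload + cloud compute) is minimized.
# All per-layer numbers are hypothetical profiling results.

# Each layer: (name, device_ms, cloud_ms, output_kb)
LAYERS = [
    ("conv1", 20.0, 2.0, 800.0),
    ("conv2", 30.0, 3.0, 400.0),
    ("conv3", 25.0, 2.5, 100.0),
    ("fc",    10.0, 1.0,   4.0),
]

def best_partition(layers, bandwidth_kbps, input_kb):
    """Return (k, latency_ms): layers[:k] run on the device,
    layers[k:] run in the cloud."""
    best_k, best_latency = 0, float("inf")
    for k in range(len(layers) + 1):
        device = sum(l[1] for l in layers[:k])
        cloud = sum(l[2] for l in layers[k:])
        # Data uploaded: the raw input if k == 0, otherwise the
        # intermediate features produced by layer k-1.
        upload_kb = input_kb if k == 0 else layers[k - 1][3]
        transfer = upload_kb / bandwidth_kbps * 1000.0  # ms
        latency = device + transfer + cloud
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency
```

With these numbers the split moves as the paper describes: at low bandwidth (e.g. 100 kbps) the whole model stays on the device (k = 4), while at high bandwidth (e.g. 100 Mbps) everything is offloaded to the cloud (k = 0).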
mit-han-lab/tinyml