Pinned Repositories
AFLDDPG
* Wu Q, Wang S, Fan P, et al. Deep Reinforcement Learning Based Vehicle Selection for Asynchronous Federated Learning Enabled Vehicular Edge Computing[J]. arXiv preprint arXiv:2304.02832, 2023. Link: https://arxiv.org/abs/2304.02832 Code: https://github.com/qiongwu86/AFLDDPG
DRL-Based-Long-Term-Resource-Planning
Paper published in TNSM, entitled "DRL-Based Long-Term Resource Planning for Task Offloading Policies in Multi-Server Edge Computing Networks"
DRL-for-edge-computing
DRL-MEC
Dynamic Task Software Caching-Assisted Computation Offloading for Multi-Access Edge Computing
DRL-TOBS
Code for the paper titled "Online Joint Task Offloading and Resource Management in Heterogeneous Mobile Edge Environments".
drone-network-onos-and-mininet
[12] Prioritization Based Task Offloading in UAV-Assisted Edge Networks. Authors: Kalinagac O, Gür G, Alagöz F. Published in: Sensors, 2023. Abstract: Under demanding operating conditions such as traffic surges, coverage problems, and low-latency requirements, terrestrial networks may fail to deliver the expected level of service to users and applications. Moreover, when natural disasters strike, the existing network infrastructure may collapse, posing serious challenges for emergency communication in the affected service area. To provide wireless connectivity and boost capacity during transient periods of high service load, alternative or supplementary rapidly deployable networks are needed. Thanks to their high mobility and flexibility, unmanned aerial vehicle (UAV) networks are well suited to such needs. In this work, we consider an edge network composed of UAVs equipped with wireless access points. These software-defined network nodes, in the edge-to-cloud…
edge-offloading
Computation offloading in mobile edge computing using reinforcement learning
Game-Theoretic-Deep-Reinforcement-Learning
Code for the paper "Joint Task Offloading and Resource Optimization in NOMA-based Vehicular Edge Computing: A Game-Theoretic DRL Approach", JSA 2022.
Graph-reinforcement-learning-literature
An open-source library that summarizes several years of research papers on graph reinforcement learning, for the convenience of researchers
PeerJ-Computer-Science
yyds-xtt's Repositories
yyds-xtt/PeerJ-Computer-Science
yyds-xtt/DRL-MEC
Dynamic Task Software Caching-Assisted Computation Offloading for Multi-Access Edge Computing
yyds-xtt/DRL-for-edge-computing
yyds-xtt/drone-network-onos-and-mininet
[12] Prioritization Based Task Offloading in UAV-Assisted Edge Networks. Authors: Kalinagac O, Gür G, Alagöz F. Published in: Sensors, 2023. Abstract: Under demanding operating conditions such as traffic surges, coverage problems, and low-latency requirements, terrestrial networks may fail to deliver the expected level of service to users and applications. Moreover, when natural disasters strike, the existing network infrastructure may collapse, posing serious challenges for emergency communication in the affected service area. To provide wireless connectivity and boost capacity during transient periods of high service load, alternative or supplementary rapidly deployable networks are needed. Thanks to their high mobility and flexibility, unmanned aerial vehicle (UAV) networks are well suited to such needs. In this work, we consider an edge network composed of UAVs equipped with wireless access points. These software-defined network nodes, in the edge-to-cloud…
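The abstract above is cut off before the offloading scheme itself is described, so the sketch below is only a rough, assumed illustration of what priority-based dispatch to UAV edge nodes can look like; the `Task`, `UavNode`, and `dispatch` names are hypothetical and are not taken from the paper or this repository.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical illustration only: the paper's actual prioritization scheme is not
# described in the truncated abstract above.

@dataclass(order=True)
class Task:
    priority: int                              # lower value = higher priority
    task_id: int = field(compare=False)
    cpu_cycles: float = field(compare=False)   # required compute (cycles)

@dataclass
class UavNode:
    node_id: int
    cpu_rate: float                            # cycles per second
    queued_cycles: float = 0.0                 # work already assigned

    def est_finish_time(self, task: Task) -> float:
        return (self.queued_cycles + task.cpu_cycles) / self.cpu_rate

def dispatch(tasks, uavs):
    """Serve tasks in priority order; send each to the UAV that finishes it soonest."""
    heapq.heapify(tasks)
    plan = []
    while tasks:
        task = heapq.heappop(tasks)
        best = min(uavs, key=lambda u: u.est_finish_time(task))
        best.queued_cycles += task.cpu_cycles
        plan.append((task.task_id, best.node_id))
    return plan

if __name__ == "__main__":
    tasks = [Task(2, 0, 4e8), Task(1, 1, 2e8), Task(3, 2, 6e8)]
    uavs = [UavNode(0, 1e9), UavNode(1, 2e9)]
    print(dispatch(tasks, uavs))
```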
yyds-xtt/AFLDDPG
* Wu Q, Wang S, Fan P, et al. Deep Reinforcement Learning Based Vehicle Selection for Asynchronous Federated Learning Enabled Vehicular Edge Computing[J]. arXiv preprint arXiv:2304.02832, 2023. Link: https://arxiv.org/abs/2304.02832 Code: https://github.com/qiongwu86/AFLDDPG
yyds-xtt/Edge-Caching-Based-on-Multi-Agent-Deep-Reinforcement-Learning-and-Federated-Learning
yyds-xtt/JODRL-PP
Repository for 'Privacy-Preserving Offloading Scheme in Multi-Access Edge Computing Based on MADRL'
yyds-xtt/MIMO-D2D
[4] A Power Allocation Scheme for MIMO-NOMA and D2D Vehicular Edge Computing Based on Decentralized DRL. Authors: Long D, Wu Q, Fan Q, et al. Published in: Sensors. Abstract: In vehicular edge computing (VEC), some tasks can be processed locally or on mobile edge computing (MEC) servers at the base station (BS) or in nearby vehicles. In fact, whether a task is offloaded depends on the state of the vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications. In this paper, device-to-device (D2D)-based V2V communication and multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA)-based V2I communication are considered. In practical communication scenarios, the channel conditions of MIMO-NOMA-based V2I communication are uncertain and task arrivals are random, which causes the VE…
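As with the entry above, the abstract is truncated before the actual power-allocation method is given. The following is a minimal toy environment, under assumptions of my own rather than the paper's system model or this repository's code, showing the kind of per-vehicle decision a decentralized DRL agent would face: choose a transmit power each slot under a random channel and random task arrivals.

```python
import numpy as np

# Toy single-vehicle environment, assumed for illustration only; it is not the
# environment from the paper. State = (channel gain, task backlog in bits);
# action = transmit power in [0, p_max]; reward trades power cost against backlog.

class ToyVecPowerEnv:
    def __init__(self, p_max=1.0, bandwidth=1e6, noise=1e-9, seed=0):
        self.p_max = p_max
        self.bandwidth = bandwidth
        self.noise = noise
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.backlog = 0.0                      # bits waiting to be offloaded
        self.gain = self.rng.exponential(1e-6)  # random (uncertain) channel gain
        return np.array([self.gain, self.backlog])

    def step(self, power, dt=0.1):
        power = float(np.clip(power, 0.0, self.p_max))
        # Shannon-style offloading rate over the V2I link
        rate = self.bandwidth * np.log2(1.0 + power * self.gain / self.noise)
        arrivals = self.rng.poisson(5) * 1e4    # random task arrivals (bits)
        self.backlog = max(0.0, self.backlog + arrivals - rate * dt)
        self.gain = self.rng.exponential(1e-6)  # channel changes every slot
        reward = -(power + 1e-6 * self.backlog) # penalize energy and a latency proxy
        return np.array([self.gain, self.backlog]), reward

if __name__ == "__main__":
    env = ToyVecPowerEnv()
    state = env.reset()
    for _ in range(3):
        state, reward = env.step(power=0.5)
        print(state, reward)
```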
yyds-xtt/UCMEC-mmWave-Fronthaul
Simulation code for our paper "Towards Decentralized Task Offloading and Resource Allocation in User-Centric Mobile Edge Computing"
yyds-xtt/DEAT
Achieving Fast Environment Adaptation of DRL-Based Computation Offloading in Mobile Edge Computing
yyds-xtt/QoE_Offloading_DRL
QoE-Driven Task Offloading in Mobile Edge Computing with Deep Reinforcement Learning
yyds-xtt/Chengdu_BSs
The locations of the base stations in the city of Chengdu
yyds-xtt/Chengdu_Taxi_Track
Taxi trajectories in the city of Chengdu
yyds-xtt/DVRPSR_PPO
DRL for Dynamic Vehicle Routing Problem with stochastic customer requests
yyds-xtt/Edge-Computing-and-Caching-Optimization-based-on-PPO-for-Task-Offloading-in-RSU-assisted-IoV
yyds-xtt/HADRL
Deep Reinforcement Learning for UAV Routing in The Presence of Multiple Charging Stations
yyds-xtt/MaDRLAM
Multi-Agent Deep Reinforcement Learning for Task Offloading in GDMSs
yyds-xtt/ML-RL-simulations
Simulations for "A multi-layer guided reinforcement learning-based task offloading in edge computing"
yyds-xtt/MTFNN-CO
Official TensorFlow implementation for the paper "Computation Offloading in Multi-Access Edge Computing: A Multi-Task Learning Approach" and "A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading"
yyds-xtt/Rome_MVs
Mobile vehicle (MV) trajectories in the city of Rome.
yyds-xtt/Deep-Reinforcment-Learning
Repository for deep reinforcement learning in classic and novel wireless communication scenarios.
yyds-xtt/edge_simulation1
Designing a Deep Q-Learning Model with Edge-Level Training for Multi-Level Task Offloading in Edge Computing Networks
yyds-xtt/STMRL
A spatial-temporal multi-agent reinforcement learning framework (STMRL) to perform distributed decision-making in multi-edge empowered computation offloading systems
yyds-xtt/UCB_MARL
Simulation code for a provably efficient multi-agent reinforcement learning algorithm with a near-optimal regret bound for industrial data collection.
yyds-xtt/cc-
yyds-xtt/MDVO
Mean-field reinforcement learning for decentralized task offloading in vehicular edge computing
yyds-xtt/mec_morl_multipolicy
Python code for the paper "Multi-objective Deep Reinforcement Learning for Mobile Edge Computing"
yyds-xtt/morl-baselines
Implementations of multi-objective reinforcement learning algorithms.
yyds-xtt/Safe-Policy-Optimization
A benchmark repository for safe reinforcement learning algorithms.
yyds-xtt/UCMEC_env
Deep Reinforcement Learning Environments for User-Centric Mobile Edge Computing