This work "Federated Offline Policy Optimization with Dual Regularization" has been submitted in INFOCOM 2024.
We propose doubly regularized federated offline policy optimization (DRPO), which leverages dual regularization: one based on the local behavioral state-action distribution and the other on the global aggregated policy. The first regularizer incorporates conservatism into the local learning policy to mitigate extrapolation errors, while the second confines the local policy around the global policy to impede the over-conservatism induced by the first regularizer and to enhance the utilization of the aggregated information.
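To make the dual regularization concrete, below is a minimal PyTorch sketch of such a doubly regularized local objective. This is an illustration under assumptions, not the repository's implementation: the Gaussian policy head, the critic, the weights `behavior_weight`/`global_weight`, and the log-likelihood/KL forms of the two regularizers are all hypothetical choices.
```
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Diagonal-Gaussian policy head (hypothetical architecture)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def dist(self, state):
        h = self.net(state)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

class Critic(nn.Module):
    """Simple state-action value network (hypothetical architecture)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def drpo_local_loss(policy, global_policy, critic, states, actions,
                    behavior_weight=1.0, global_weight=0.1):
    """Doubly regularized local policy loss (illustrative form)."""
    dist = policy.dist(states)
    new_actions = dist.rsample()
    # Policy-improvement term: push sampled actions toward high critic values.
    q_loss = -critic(states, new_actions).mean()
    # Regularizer 1: stay close to the local behavioral state-action
    # distribution (conservatism against extrapolation error).
    behavior_reg = -dist.log_prob(actions).sum(-1).mean()
    # Regularizer 2: stay close to the aggregated global policy
    # (counteracts over-conservatism, exploits shared information).
    with torch.no_grad():
        global_dist = global_policy.dist(states)
    global_reg = torch.distributions.kl_divergence(dist, global_dist).sum(-1).mean()
    return q_loss + behavior_weight * behavior_reg + global_weight * global_reg

# Toy usage with random tensors standing in for an offline batch.
s_dim, a_dim = 17, 6  # HalfCheetah-like dimensions
policy = GaussianPolicy(s_dim, a_dim)
global_policy = GaussianPolicy(s_dim, a_dim)
critic = Critic(s_dim, a_dim)
states, actions = torch.randn(32, s_dim), torch.randn(32, a_dim)
drpo_local_loss(policy, global_policy, critic, states, actions).backward()
```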
- Python == 3.7 (Anaconda or Miniconda recommended; see the setup sketch after this list)
- PyTorch == 1.8.1
- MuJoCo == 2.3.6
- NVIDIA GPU (RTX A6000) + CUDA 11.1
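A possible conda setup is sketched below; the environment name `drpo` is an assumption, and the PyTorch wheel should match your CUDA version (cu111 shown to match CUDA 11.1 above). The remaining dependencies are installed via `requirements.txt` in the steps that follow.
```
conda create -n drpo python=3.7 -y
conda activate drpo
pip install torch==1.8.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
```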
- Clone the repo
```
git clone https://github.com/HansenHua/DRPO-INFOCOM24.git
cd DRPO-INFOCOM24
```
- Install dependencies
```
pip install -r requirements.txt
```
Get the usage information of the project:
```
python main.py -h
```
Test the trained models provided with DRPO:
```
python main.py halfcheetah-medium-expert-v2 DRPO test
```
We provide the complete training code for DRPO; you can adapt it to your own needs.
Training:
```
python main.py halfcheetah-medium-expert-v2 DRPO train
```
Testing:
```
python main.py halfcheetah-medium-expert-v2 DRPO test
```
Note: the trained models for testing have not been updated yet.
If you have any questions, please email xingyuanhua@bit.edu.cn.