
DRPO

This work, "Federated Offline Policy Optimization with Dual Regularization", has been submitted to INFOCOM 2024.

📄 Description

We propose doubly regularized federated offline policy optimization (DRPO), which leverages dual regularization: one term based on the local behavioral state-action distribution and the other on the global aggregated policy. The first regularizer incorporates conservatism into the local learning policy to mitigate the effect of extrapolation errors. The second confines the local policy around the global policy, preventing the over-conservatism induced by the first regularizer and improving the utilization of the aggregated information.
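The dual-regularized local objective described above can be sketched roughly as follows. This is an illustrative sketch only, not the repository's actual implementation: the function names, the KL-divergence form of both regularizers, and the coefficients `alpha` and `beta` are assumptions made for clarity.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL divergence between two discrete action distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def dual_regularized_loss(policy_loss, pi, behavior, global_pi,
                          alpha=1.0, beta=1.0):
    """Local DRPO-style objective with two regularizers (illustrative):
    - alpha * KL(pi || behavior): pulls the local policy toward the
      behavioral distribution (conservatism against extrapolation error)
    - beta  * KL(pi || global_pi): keeps the local policy close to the
      aggregated global policy (counteracts over-conservatism)
    """
    return policy_loss + alpha * kl(pi, behavior) + beta * kl(pi, global_pi)
```

When the local policy matches both the behavior distribution and the global policy, both penalty terms vanish and only the base policy loss remains; deviating from either distribution increases the objective.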

🔧 Dependencies

Installation

  1. Clone the repo
    git clone https://github.com/HansenHua/DRPO-INFOCOM24.git
    cd DRPO-INFOCOM24
  2. Install the required packages
    pip install -r requirements.txt
    

⚡ Quick Inference

Get the usage information of the project:

```
python main.py -h
```

Test a trained model with DRPO (doubly regularized federated offline policy optimization):

```
python main.py halfcheetah-medium-expert-v2 DRPO test
```

💻 Training

We provide the complete training code for DRPO, which you can adapt to your own needs.

```
python main.py halfcheetah-medium-expert-v2 DRPO train
```
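On the federated side, the global policy against which local policies are regularized is presumably produced by aggregating client parameters each round. A minimal FedAvg-style sketch, assuming uniform client weighting (the function name and per-layer list representation are illustrative assumptions, not the repository's API):

```python
import numpy as np

def fedavg(client_params):
    """Uniformly average per-layer policy parameters across clients.

    client_params: list of clients, each a list of per-layer numpy arrays.
    Returns the aggregated global parameters (FedAvg-style sketch).
    """
    n_layers = len(client_params[0])
    return [np.mean([client[i] for client in client_params], axis=0)
            for i in range(n_layers)]

# Example: two clients, one layer each
global_params = fedavg([[np.array([1.0, 2.0])],
                        [np.array([3.0, 4.0])]])
```

Each client would then use the returned global parameters as the anchor for the second regularizer in the next local training round.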

🏁 Testing

```
python main.py halfcheetah-medium-expert-v2 DRPO test
```

Open issues:

The trained models for testing have not been updated yet.

📧 Contact

If you have any questions, please email xingyuanhua@bit.edu.cn.