adap/flower

Possible to Dynamically Switch Federated Learning Algorithms Based on Performance?


Question

I am exploring the use of different federated learning algorithms in scenarios where the performance of one might falter due to specific data distributions among clients.

I would like to know if it's currently possible, or if there are any existing strategies within the framework, to dynamically switch from one algorithm (e.g., FedAvg) to another (e.g., FedProx) based on real-time performance metrics.

For instance, starting a training session with FedAvg and then, based on the observed model performance or other metrics, automatically switching to FedProx if the results do not meet certain thresholds.

Specific Queries

  1. Are there existing implementations or tools within the project that facilitate this kind of dynamic algorithm switching?
  2. If not, could you provide insights into the potential challenges or considerations in implementing such a feature?
  3. What are the recommended practices or alternative strategies if dynamic switching is not feasible?

Hi,
Yes, it is possible. To do so, you will need to use the low-level API. As a starting point, I recommend getting familiar with this example: https://github.com/adap/flower/tree/main/examples/app-pytorch. The file server_low_level.py is the crucial part. There is also a migration guide, but it does not go into low-level details: https://flower.ai/docs/framework/how-to-upgrade-to-flower-next.html#upgrade-to-flower-next.

In essence, you can implement whatever server-side logic you need by defining a main() function decorated with @app.main().
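
For illustration, here is a minimal sketch of how such a switching loop could be laid out inside @app.main(). It assumes the Driver-based signature used in the app-pytorch low-level example; run_round and ACCURACY_THRESHOLD are hypothetical placeholders for your own per-round message exchange and switching criterion, not part of the Flower API.

```python
# Minimal sketch, not tied to a specific Flower version: the Driver/ServerApp
# signature follows the linked app-pytorch low-level example and may differ
# in your release. run_round and ACCURACY_THRESHOLD are placeholders.
from flwr.common import Context
from flwr.server import Driver, ServerApp

app = ServerApp()

NUM_ROUNDS = 20
ACCURACY_THRESHOLD = 0.70  # hypothetical switching criterion


def run_round(driver: Driver, server_round: int, algorithm: str) -> float:
    """Run one federated round with the given algorithm and return a metric.

    Placeholder: a real implementation would follow server_low_level.py,
    i.e. use the Driver to send per-round instructions to the clients
    (e.g. a proximal-term coefficient when algorithm == "fedprox"),
    collect their replies, aggregate the updates, and compute a metric.
    """
    raise NotImplementedError


@app.main()
def main(driver: Driver, context: Context) -> None:
    algorithm = "fedavg"  # start with plain FedAvg-style aggregation
    for server_round in range(1, NUM_ROUNDS + 1):
        accuracy = run_round(driver, server_round, algorithm)

        # Dynamic switch: if performance falls below the threshold,
        # run the remaining rounds with FedProx-style training instead.
        if algorithm == "fedavg" and accuracy < ACCURACY_THRESHOLD:
            print(
                f"Round {server_round}: accuracy {accuracy:.3f} below "
                f"{ACCURACY_THRESHOLD}, switching to FedProx"
            )
            algorithm = "fedprox"
```

The same pattern works for any criterion (loss plateau, per-client divergence, etc.): because the low-level API puts the whole training loop in your hands, the "switch" is just a condition on whatever state you track across rounds.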

Specific Queries

  1. Yes, as explained above.
  2. Not applicable (see above).
  3. Not applicable (see above).