FedCampus/FedKit

Cross-platform aggregation demo

SichangHe opened this issue · 11 comments

We have now worked around #18, which allows training the same model on both Android and iOS.

This demo would use MNIST because that is what the iOS example client app uses, and I anticipate that porting it to Android will be easier than the other way around.

  • Use Flutter to handle downloading MNIST training data files, and make the iOS example work.
  • Connect the Android example.

Flutter will download this:

MNIST_data.zip

[Picture: 16881695533621_pic_hd]
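Once the zip is unpacked, the iOS side still has to wrap the samples into an `MLBatchProvider` before Core ML can train on them. Below is a minimal sketch of that step; the feature names `image` and `label` and the 28x28 float input shape are assumptions for illustration, not necessarily what the example model declares.

```swift
import CoreML

// Sketch only: wrap MNIST samples into an MLBatchProvider for on-device
// training. The feature names "image" and "label" are assumed, not taken
// from the actual example model.
func makeTrainingBatch(
    samples: [(pixels: [Float], label: Int64)]
) throws -> MLBatchProvider {
    let providers: [MLFeatureProvider] = try samples.map { sample in
        // One 28x28 grayscale image as a float multi-array.
        let image = try MLMultiArray(shape: [1, 28, 28], dataType: .float32)
        for (i, value) in sample.pixels.enumerated() {
            image[i] = NSNumber(value: value)
        }
        return try MLDictionaryFeatureProvider(dictionary: [
            "image": MLFeatureValue(multiArray: image),
            "label": MLFeatureValue(int64: sample.label),
        ])
    }
    return MLArrayBatchProvider(array: providers)
}
```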

I got an Android phone and an iPhone training together, but the iPhone seems to be outputting bogus loss values.

[Picture: 16901695534208_pic_hd]

When I train the two phones with the same partition ID (i.e., the same training data), they converge in about two rounds.

[Screenshot]

After changing the code to obtain `MLUpdateContext.metrics[.lossValue]` as `Float` instead of `Double`, I still get bogus losses.
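For reference, the loss comes out of the Core ML update context in the progress handlers. The sketch below just prints it and is not the actual FedKit code; `modelUrl` and `trainingData` are placeholders, and the optional cast to `Double` avoids a crash if the underlying number type is not what we expect.

```swift
import CoreML

// Sketch only (not the actual FedKit training code): print the loss that
// Core ML reports at the end of each epoch. `modelUrl` and `trainingData`
// are placeholders.
func runUpdate(modelUrl: URL, trainingData: MLBatchProvider) throws {
    let handlers = MLUpdateProgressHandlers(
        forEvents: [.epochEnd],
        progressHandler: { context in
            // The metric value is exposed as Any; cast it optionally.
            let loss = context.metrics[.lossValue] as? Double
            print("epoch loss: \(loss ?? .nan)")
        },
        completionHandler: { context in
            let loss = context.metrics[.lossValue] as? Double
            print("final loss: \(loss ?? .nan)")
        }
    )
    let task = try MLUpdateTask(
        forModelAt: modelUrl,
        trainingData: trainingData,
        configuration: nil,
        progressHandlers: handlers
    )
    task.resume()
}
```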

@danielnugraha, any ideas 🙏?

We can exclude the possibility of a communication error: I tested this by logging the parameters on both devices, and they are the same.
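In case it helps with similar debugging, a comparison along these lines is enough to rule out the transport: if a short per-layer fingerprint matches on both devices, the problem must be in training or evaluation. This is a hypothetical helper, not the logging code actually used.

```swift
// Hypothetical helper (not the actual logging code): summarize each layer of
// Float parameters so two devices can be compared without huge logs.
func layerFingerprints(_ layers: [[Float]]) -> [String] {
    layers.enumerated().map { index, layer in
        let sum = layer.reduce(0, +)
        return "layer \(index): count=\(layer.count) sum=\(sum) first=\(layer.first ?? 0)"
    }
}
```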

[Screenshot: 0 training loss, large test loss, and 0.1 test accuracy]

[Screenshot of loss/accuracy]

The binary classifier seems to give 100% accuracy out of the box on iOS, and training did nothing.

Branch: https://github.com/SichangHe/FedCampus--FedKit/tree/android-mnist

Edit: The accuracy implementation on iOS uses argmax, so it is wrong for this binary classifier (see the sketch below).
However, training on Android yielded the same problem: the test loss and accuracy stay the same across epochs (the training loss did fluctuate).
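Concretely, for a model whose output is a single logit/score, argmax over a length-one vector is always index 0, so the reported accuracy never depends on the prediction; the binary case needs a threshold instead. A minimal sketch (hypothetical helper, not the actual evaluation code):

```swift
// Hypothetical helper (not the actual evaluation code): turn raw model
// outputs into a predicted class index.
func predictedClass(from outputs: [Float]) -> Int {
    if outputs.count == 1 {
        // Single-output binary head: argmax of a length-1 array is always 0,
        // so threshold the score instead.
        return outputs[0] >= 0.5 ? 1 : 0
    }
    // Multi-class head: argmax is the right choice.
    return outputs.indices.max(by: { outputs[$0] < outputs[$1] }) ?? 0
}
```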

Edit: Assigning random parameters did not work.

[Screenshot]

The data on the iPhone is still crazy after switching to PMData.
The Android result is fine, though.

[Screenshot]

Even worse, I think the iPhone eventually managed to turn all the parameters into NaN.
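If the iPhone can drive the parameters to NaN, it might be worth guarding the upload path so one bad client cannot poison the aggregate. The check below is a sketch under the assumption that parameters are handled as arrays of `Float`; it is not part of the current code.

```swift
// Sketch of a sanity check (not in the current code): refuse to upload a
// parameter update if any value is NaN or infinite.
func containsInvalidValues(_ layers: [[Float]]) -> Bool {
    layers.contains { layer in
        layer.contains { !$0.isFinite }
    }
}
```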