Consideration of network architecture and learning algorithms for unlearning effectiveness
tantrev opened this issue · 2 comments
The unlearning challenge could benefit from accounting for the impact of network architecture and training methods on unlearning performance.
Some neural network architectures and training methods, such as recursive cortical networks and gated linear networks/supermasks, could substantially change how the competition is run and how submissions are evaluated.
It would be nice if future iterations of the challenge could consider:
- Incorporating architectures/learning algorithms like recursive cortical networks and gated linear networks/supermasks in the starter kit
- Evaluating submissions based on both unlearning performance and the network architecture/learning algorithm used
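To illustrate why supermask-style training might matter for unlearning: in supermask approaches the underlying weights stay fixed and random, and only a per-weight score (thresholded into a binary mask) is learned, so forgetting could in principle reduce to revising or discarding mask scores rather than retraining weights. A minimal pure-Python sketch of a supermask forward pass, with all names and numbers hypothetical:

```python
def supermask_forward(x, W, scores, keep_frac=0.5):
    """Forward pass through one linear layer whose fixed weights W
    are gated by a binary mask keeping the top-scoring fraction.

    x      : list of input activations (length = rows of W)
    W      : fixed random weight matrix (rows = inputs, cols = outputs)
    scores : learned per-weight scores, same shape as W
    """
    # Threshold = k-th largest score, keeping keep_frac of all weights.
    flat = sorted((s for row in scores for s in row), reverse=True)
    k = max(1, int(keep_frac * len(flat)))
    threshold = flat[k - 1]

    # Binary mask: 1 where the score makes the cut, 0 elsewhere.
    mask = [[1.0 if s >= threshold else 0.0 for s in row] for row in scores]
    masked = [[w * m for w, m in zip(wr, mr)] for wr, mr in zip(W, mask)]

    # Standard matrix-vector product with the masked weights.
    return [sum(x[i] * masked[i][j] for i in range(len(x)))
            for j in range(len(masked[0]))]

# Toy example: half the weights survive the mask.
W = [[1.0, 2.0], [3.0, 4.0]]
scores = [[0.9, 0.1], [0.5, 0.8]]
out = supermask_forward([1.0, 1.0], W, scores)  # → [1.0, 4.0]
```

This is only a sketch of the masking mechanism; a real supermask model learns the scores by gradient descent while the weights remain frozen.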
This could help identify approaches that balance performance and adaptability, which is crucial for building AI systems that can responsibly adjust to new requirements over time. Studying how architecture and learning-algorithm choices affect how easily a model can unlearn could drive real progress.
Please let me know if you would like me to expand on any part of this feedback or provide more suggestions. I'm happy to discuss ways to improve future iterations of this valuable challenge.
Information about the training parameters of the retrained model, such as the number of epochs and optimizer settings, is still needed: to evaluate effectiveness, we have to compare the unlearning time against the retraining time.
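The comparison described above can be sketched as a simple speedup ratio: how many times faster unlearning is than retraining from scratch. A minimal sketch, where the timing helper and the example numbers are purely hypothetical:

```python
import time

def time_fn(fn, *args, **kwargs):
    """Return (result, elapsed seconds) for a single call to fn."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

def retrain_speedup(unlearn_seconds, retrain_seconds):
    """Speedup of unlearning relative to retraining from scratch.

    Values > 1 mean unlearning is faster than full retraining;
    a method that takes as long as retraining scores exactly 1.
    """
    return retrain_seconds / unlearn_seconds

# Hypothetical timings, for illustration only: 2 minutes of
# unlearning versus 1 hour of retraining.
speedup = retrain_speedup(unlearn_seconds=120.0, retrain_seconds=3600.0)
print(f"{speedup:.1f}x faster than retraining")  # → 30.0x faster than retraining
```

In practice the retraining time would come from the challenge organizers' published training configuration (epochs, optimizer, hardware), which is why that information matters for scoring.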