This repo focuses on issues I noted in DeVries et al., "Deep learning of aftershock patterns following large earthquakes" (also available via Sci-Hub). The article has been widely cited as a motivation for using deep learning, e.g., in the TensorFlow 2.0 release notes.
I raised concerns about target leakage and the suitability of the data science approach with both the authors and Nature. Nature reviewed my concerns and decided not to act. You can view the details of this communication in the correspondence folder.
This repo demonstrates the issues I noted. It is a clone of the original analysis. To understand the issues, work through the Exploratory Analysis notebook. To run it, you will need the data, which is available on Google Drive. You may also want to see how the original test/train splits were constructed in the DeVries processing repo.
The notebook uses Python 3. Before running it, download the data and place it in a folder adjoining the repo.
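
For orientation, a quick check like the one below can confirm the layout before you open the notebook. The folder name used here is an illustrative assumption, not the exact name of the Google Drive archive; adjust it to whatever you unpacked.

```python
from pathlib import Path

# Assumed layout (folder names are hypothetical):
#   parent/
#     aftershock-issues/   <- this repo
#     aftershock-data/     <- the downloaded data, adjoining the repo
DATA_DIR = Path("..") / "aftershock-data"

if not DATA_DIR.exists():
    raise FileNotFoundError(
        f"Expected the downloaded data in {DATA_DIR.resolve()}; "
        "adjust DATA_DIR to match where you unpacked the archive."
    )

# Print a few file names as a sanity check that the data is in place.
print("Found data files:", sorted(p.name for p in DATA_DIR.iterdir())[:5])
```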
The notebook has four sections:
- Replicating the results in the paper
- Replicating the results in the paper, but showing scores on both the test and train sets. Puzzlingly, the scores for the test set are higher than for the train set.
- Replicating similar results using only 1,500 rows of data and 2 epochs (the original paper used 4.7 million rows); a minimal sketch of this experiment appears after this list.
- One source of potential leakage in how the test/train split is constructed (see the grouped-split sketch below).
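
Below is a minimal sketch of the small-sample experiment: train a small fully connected network on a 1,500-row subsample for 2 epochs and score it on both splits. The variable names (`X_train`, `y_train`, `X_test`, `y_test`) and the architecture are assumptions standing in for what the notebook actually loads and builds; the network here is smaller than the one used in the paper, and no specific AUC values are claimed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from tensorflow import keras

rng = np.random.default_rng(0)

def small_net(n_features):
    # A deliberately small network; the point is how little capacity
    # and data are needed to reach a similar AUC on this problem.
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(50, activation="tanh"),
        keras.layers.Dense(50, activation="tanh"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def auc_with_subsample(X_train, y_train, X_test, y_test, n_rows=1500, epochs=2):
    # Subsample the training rows, fit briefly, then score both splits.
    idx = rng.choice(len(X_train), size=n_rows, replace=False)
    model = small_net(X_train.shape[1])
    model.fit(X_train[idx], y_train[idx], epochs=epochs, batch_size=32, verbose=0)
    train_auc = roc_auc_score(y_train, model.predict(X_train, verbose=0).ravel())
    test_auc = roc_auc_score(y_test, model.predict(X_test, verbose=0).ravel())
    return train_auc, test_auc
```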
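
On the split itself, the general concern is that related rows (for example, grid cells derived from the same mainshock) can end up on both sides of the split. One way to rule that out is to group the split by earthquake, as in the sketch below. This assumes NumPy feature/label arrays and a per-row array of mainshock identifiers (here called `quake_ids`, a hypothetical name), and is illustrative rather than a reconstruction of the original procedure.

```python
from sklearn.model_selection import GroupShuffleSplit

def split_by_mainshock(X, y, quake_ids, test_size=0.25, seed=0):
    """Split so that every row from a given mainshock lands entirely in either
    the train or the test set, preventing the same earthquake's stress field
    from appearing on both sides of the split."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=quake_ids))
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]
```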
I want to thank Lukas Innig and Shubham Cheema for their assistance, as well as all the great data scientists at DataRobot who supported me through this process.
Recently, I found papers by Arnaud Mignan and Marco Broccardo that identify issues in the aftershocks paper; see "One neuron is more informative than a deep neural network for aftershock pattern forecasting" (arXiv) and "A Deeper Look into 'Deep Learning of Aftershock Patterns Following Large Earthquakes': Illustrating First Principles in Neural Network Physical Interpretability" (Springer).
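
For context on the "one neuron" finding, the baseline amounts to a plain logistic regression (a single sigmoid unit) fit on the same features. A minimal sketch is below; the array names are assumptions standing in for what the notebook loads, and no specific AUC values are claimed here.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def one_neuron_auc(X_train, y_train, X_test, y_test):
    """Fit a single-sigmoid-unit baseline and return (train AUC, test AUC)."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    train_auc = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
    test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return train_auc, test_auc
```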