Repository for the manuscript. The paper is available open access at https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecm.1422
The analysis repository is at https://github.com/timcdlucas/translucent_interpretable_ML_review_analysis/
Machine learning has become popular in ecology, but its use has remained largely restricted to predicting, rather than understanding, the natural world. Many researchers consider machine learning algorithms to be black boxes. With careful examination, however, these models can be used to inform our understanding of the world: they are translucent boxes. Furthermore, interpreting these models can be an important step in building confidence in a model, or in a specific prediction from a model. Here I review a number of techniques for interpreting machine learning models at the level of the system, the variable, and the individual prediction, as well as methods for handling non-independent data. I also discuss the limits of interpretability for different methods and demonstrate these approaches using a case study of understanding litter sizes in mammals.
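As a flavour of the variable-level interpretation techniques the review covers, the sketch below computes permutation importance: shuffle one predictor at a time and measure the drop in held-out performance. This is a minimal, illustrative example using scikit-learn and simulated data, not the paper's actual mammal litter-size analysis or its code.

```python
# Minimal sketch of variable-level interpretation via permutation
# importance. The data and model here are illustrative stand-ins.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Simulated regression problem: 5 predictors, only 2 truly informative.
X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permute each feature in the held-out set and record the mean drop in
# R^2; large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Because importance is measured on held-out data, this asks how much the model *uses* each variable for prediction, which is one of several distinct notions of "importance" the paper distinguishes.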