This tutorial presents an "agile modeling" approach that lets users efficiently build custom classifiers for species of interest using transfer learning, audio search, and human-in-the-loop active learning.
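To make the workflow concrete, here is a minimal, self-contained sketch of the agile-modeling loop described above: embed a corpus, surface candidate clips by similarity search, label a handful, train a lightweight classifier on the embeddings (transfer learning), then use uncertainty-based active learning to pick the next clips to label. All data below is synthetic and every name is illustrative; in the tutorial, embeddings come from a pretrained bioacoustic model and labels come from a human listener.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# 1) "Embed" an unlabeled audio corpus: one fixed-size vector per clip.
#    Synthetic stand-in: target-species clips cluster in embedding space,
#    background clips are noise. `truth` plays the role of the human listener.
target = rng.normal(size=(50, 8)) + 3.0
background = rng.normal(size=(450, 8))
corpus = np.vstack([target, background])
truth = np.array([1] * 50 + [0] * 450)

# 2) Audio search: rank the corpus by cosine similarity to a query example.
query = corpus[0]

def cosine_sim(x, q):
    return (x @ q) / (np.linalg.norm(x, axis=1) * np.linalg.norm(q))

ranked = np.argsort(-cosine_sim(corpus, query))

# 3) Human-in-the-loop: label the top hits (plus a few likely negatives from
#    the bottom of the ranking) and train a small classifier on embeddings.
labeled = list(ranked[:15]) + list(ranked[-5:])
clf = LogisticRegression(max_iter=1000).fit(corpus[labeled], truth[labeled])

# 4) Active learning: repeatedly label the clips the classifier is least
#    sure about (probability closest to 0.5), then retrain.
for _ in range(3):
    probs = clf.predict_proba(corpus)[:, 1]
    uncertainty = -np.abs(probs - 0.5)
    candidates = [i for i in np.argsort(-uncertainty) if i not in labeled]
    labeled.extend(candidates[:10])
    clf = LogisticRegression(max_iter=1000).fit(corpus[labeled], truth[labeled])

accuracy = clf.score(corpus, truth)
```

The notebook's real pipeline follows the same shape, with pretrained embeddings in place of the synthetic vectors and an annotation interface in place of the `truth` oracle.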
Authors (Equal Contribution):
- Jenny Hamer, Google DeepMind, hamer@google.com
- Rob Laber, Google Cloud, roblaber@google.com
- Tom Denton, Google Research, tomdenton@google.com
Originally presented at NeurIPS 2023
View the poster here.
We recommend executing this notebook in a Colab environment to gain access to GPUs and to manage all necessary dependencies.
Estimated time to execute end-to-end: 45 minutes
Please refer to these GitHub instructions to open a pull request via the "fork and pull request" workflow.
Pull requests will be reviewed by members of the Climate Change AI Tutorials team for relevance, accuracy, and conciseness.
Check out the tutorials page on our website for a full list of tutorials demonstrating how AI can be used to tackle problems related to climate change.
Usage of this tutorial is subject to the MIT License.
Hamer, J., Laber, R., & Denton, T. (2023). Agile Modeling for Bioacoustic Monitoring [Tutorial]. In Conference on Neural Information Processing Systems. Climate Change AI. https://doi.org/10.5281/zenodo.11585179
@misc{hamer2023agile,
  title={Agile Modeling for Bioacoustic Monitoring},
  author={Hamer, Jenny and Laber, Rob and Denton, Tom},
  year={2023},
  organization={Climate Change AI},
  type={Tutorial},
  doi={10.5281/zenodo.11585179},
  booktitle={Conference on Neural Information Processing Systems},
  howpublished={\url{https://github.com/climatechange-ai-tutorials/bioacoustic-monitoring}}
}