Below is our solution for the BirdCLEF 2021 - Birdcall Identification.
If you want to reproduce our results, please check the share_solution/working directory.
For an overview of our solution, please check here.
To put it simply, our solution is composed of three training stages:
1st stage: Building a melspectrogram classifier (0: nocall, 1: some bird singing) from the freefield1010 data (hereinafter referred to as the "nocall detector").
2nd stage: Building a melspectrogram multilabel (397-dim) classifier to identify which birds are singing in a 7-sec clip. Before building it, we weight the 2nd-stage input labels with the call probability from the nocall detector.
- training: train_short_audio data
- validation: train_soundscapes data
3rd stage: Candidate extraction from the 2nd-stage output (five birds extracted per 7-sec clip). The train_metadata and forward/backward frame information are added as features, and then classification of each candidate (0: unlikely, 1: likely) is performed by LightGBM (see the sketch after this list).
- training: train_short_audio data
- validation: train_soundscapes data
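The 3rd stage boils down to a tabular classification problem. Below is a minimal sketch of that step; the column names (clip_id, prob, prob_prev, prob_next, label) and the LightGBM parameters are placeholders, not the exact ones we used.

```python
import lightgbm as lgb
import pandas as pd

def make_candidates(prob_df: pd.DataFrame, topk: int = 5) -> pd.DataFrame:
    """prob_df: one row per 7-sec clip, one column per bird (397 call probabilities)."""
    rows = []
    for clip_id, probs in prob_df.iterrows():
        # keep the five most probable birds per clip as candidates
        for bird, p in probs.sort_values(ascending=False).head(topk).items():
            rows.append({"clip_id": clip_id, "bird": bird, "prob": p})
    return pd.DataFrame(rows)

# candidates = make_candidates(prob_df)
# candidates["prob_prev"] = ...  # same bird's probability in the previous 7-sec clip
# candidates["prob_next"] = ...  # same bird's probability in the next 7-sec clip
# candidates = candidates.merge(metadata_features, on="bird", how="left")
# clf = lgb.LGBMClassifier(n_estimators=1000, learning_rate=0.05)
# clf.fit(candidates[feature_cols], candidates["label"])  # 0: unlikely, 1: likely
```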
Make sure you put the datasets shown below in the right directories. All of the ipynb files have been confirmed to work in the Kaggle notebook environment. (You can just imitate the same directory structure as Kaggle, e.g., input and working.)
We use the nocall detector for the following two purposes (a short sketch follows the list).
- A. To modify the 2nd-stage input data labels (soft weighting by the call probability).
- B. To attach labels to the 3rd-stage input data. Here the threshold is 0.5 (hard labeling).
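Concretely, assuming the soft weighting is a simple multiplication by the call probability (the exact formula may differ), the two usages look like this:

```python
import numpy as np

# (A) 2nd stage: soft labels -- weight each clip's 397-dim 0/1 target by its call probability.
def weight_labels(onehot: np.ndarray, call_prob: float) -> np.ndarray:
    return onehot * call_prob

# (B) 3rd stage: hard labels -- threshold the nocall detector output at 0.5.
def has_call(call_prob: float, threshold: float = 0.5) -> int:
    return int(call_prob >= threshold)
```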
Check the code below.
/code
- ./share_solution/working/build_nocall_detector.ipynb
- This notebook is based on the following notebook by yasufuminakama@Kaggle. Please vote for his notebook as well.
- Cassava / resnext50_32x4d starter (training)
/input
/output
- Nocall detector models (Ⅰ) are output here.
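As a rough illustration, the nocall detector can be thought of as a single-logit image classifier on top of a timm backbone, following the Cassava resnext50_32x4d starter mentioned above; this is only a sketch, and the actual training details live in build_nocall_detector.ipynb.

```python
import timm
import torch.nn as nn

class NocallDetector(nn.Module):
    """Binary melspectrogram classifier: 0 = nocall, 1 = some bird singing."""
    def __init__(self, model_name: str = "resnext50_32x4d", pretrained: bool = True):
        super().__init__()
        # one output logit; sigmoid gives the call probability
        self.backbone = timm.create_model(model_name, pretrained=pretrained, num_classes=1)

    def forward(self, x):          # x: (batch, 3, H, W) melspectrogram image
        return self.backbone(x)
```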
In this stage, we train the 397-dim multilabel classifier on per-bird call probabilities, with the input labels weighted using the nocall detector output from the 1st stage.
Check the code below.
/code
/input
- 7sec clip melspectrogram images of train_short_audio
  - generated by kkiller's notebook
- BirdCLEF2020 data
- freefield1010
- nocall detector output for train_short_audio (See Appendix 2.)
- sklearn library (To use StratifiedGroupKFold, we have to install scikit-learn 1.0.dev0)
- 7sec clip melspectrogram images of
/output
- Melspectrogram multilabel classifier models (Ⅱ) are output here.
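A minimal sketch of the 2nd-stage model and loss is shown below. The backbone name (resnest50d) is an assumption based on the resnest library appearing in the inputs; the weighted soft labels from the 1st stage can be fed straight into BCE.

```python
import timm
import torch.nn as nn

NUM_BIRDS = 397

class BirdSongClassifier(nn.Module):
    """397-dim multilabel melspectrogram classifier."""
    def __init__(self, model_name: str = "resnest50d", pretrained: bool = True):
        super().__init__()
        self.backbone = timm.create_model(model_name, pretrained=pretrained,
                                          num_classes=NUM_BIRDS)

    def forward(self, x):
        return self.backbone(x)    # logits; sigmoid -> per-bird call probabilities

criterion = nn.BCEWithLogitsLoss()
# loss = criterion(model(images), weighted_labels)  # weighted_labels in [0, 1], shape (batch, 397)
```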
Check the code below.
/code
/input
- birdclef-2021 (original data)
- melspectrogram multilabel classifier models (Ⅱ)
- models for BirdCLEF 2021
- train_short_audio 397-dim birdcall probabilities calculated by the melspectrogram multilabel classifier models (Ⅱ) (See Appendix 3.)
- resnest library
- sklearn library (To use StratifiedGroupKFold, we have to install scikit-learn 1.0.dev0)
/output
- submission.csv
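For reference, here is a minimal sketch of building submission.csv from the 3rd-stage decisions, assuming the competition's row_id / birds format with "nocall" for clips where no candidate survives:

```python
import pandas as pd

def to_submission(preds: dict) -> pd.DataFrame:
    """preds: {row_id: list of bird labels judged 'likely' by the 3rd-stage LightGBM}."""
    rows = [{"row_id": rid, "birds": " ".join(birds) if birds else "nocall"}
            for rid, birds in preds.items()]
    return pd.DataFrame(rows)

# to_submission(predictions).to_csv("submission.csv", index=False)
```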
Here is a useful notebook by kneroma@Kaggle (also known as kkiller) to generate the melspectrograms:
(https://www.kaggle.com/kneroma/birdclef-mels-computer-public)
/code
/input
- birdclef-2021 (original data)
- 7sec clip melspectrogram images of train_short_audio
  - generated by kkiller's notebook
- nocall detector models (Ⅰ)
/output
- Nocall detector inference results for train_short_audio are output here.
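A minimal inference sketch, assuming a data loader that yields batches of melspectrogram images and the single-logit nocall detector (Ⅰ) described above:

```python
import torch

@torch.no_grad()
def predict_call_probs(model, loader, device="cuda"):
    """Run the nocall detector (I) over 7-sec clip melspectrograms;
    returns one call probability per clip."""
    model.eval().to(device)
    probs = []
    for images in loader:
        logits = model(images.to(device))          # shape (batch, 1)
        probs.append(torch.sigmoid(logits).squeeze(1).cpu())
    return torch.cat(probs).numpy()
```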
Check the code below.
/code
/input
- birdclef-2021 (original data)
- melspectrogram multilabel classifier models (Ⅱ)
- 7sec clip melspectrogram images of train_short_audio
  - generated by kkiller's notebook
- nocall detector output for train_short_audio
- sklearn library (To use StratifiedGroupKFold, we have to install scikit-learn 1.0.dev0)
/output
- 397-dim birdcall probabilities for train_short_audio (with some additional features)
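A minimal sketch of how the forward/backward frame information can be attached as features for the 3rd stage; the column names (audio_id, clip_index, bird, prob) are placeholders:

```python
import pandas as pd

def add_neighbor_probs(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per (audio_id, clip_index, bird) with the 2nd-stage probability `prob`.
    Adds the same bird's probability in the previous / next 7-sec clip."""
    df = df.sort_values(["audio_id", "bird", "clip_index"]).copy()
    grp = df.groupby(["audio_id", "bird"])["prob"]
    df["prob_prev"] = grp.shift(1).fillna(0.0)   # probability in the previous clip (forward frame)
    df["prob_next"] = grp.shift(-1).fillna(0.0)  # probability in the next clip (backward frame)
    return df
```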
- Kaggle notebook
- Google Colab Pro
- Personally-owned PC
- OS : Ubuntu 18.04.3 LTS
- CPU : Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
- Graphics : GeForce RTX 2080 Ti
- Memory : 64GB
Check the Dockerfile below. It is the same as the Kaggle notebook environment as of 2021/6/13.