UCSD-E4E/PyHa

TweetyNET test set first pass

Closed this issue · 2 comments

Before committing to integrating TweetyNET into this pipeline, we want to evaluate it on the same test set we used for Microfaune and BirdNET-Lite.

Tests: https://github.com/UCSD-E4E/passive-acoustic-biodiversity/blob/master/BinaryBirdDet/xeno-canto_automated_segmentation.ipynb

Relevant dataset: https://drive.google.com/drive/u/0/folders/1Z28g9-iik1LkXXdDbIzq9-d2B0-rCbUj

Mixed_Bird is the correct folder, and the uniform 3s binary labels are the ones I used.

Tested on a random sample of 25% of the Mixed_Bird data with the following parameters:


Notebook used: https://github.com/UCSD-E4E/passive-acoustic-biodiversity/blob/master/BinaryBirdDet/PyHa_Model_Comparison.ipynb

Output labels and statistics .csvs: https://drive.google.com/drive/u/1/folders/1cf55zV-gL51l4w4Npo5WymvpiMs7TBdF
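For reference, the 25% random sample can be drawn reproducibly in a few lines of plain Python. The clip names below are hypothetical stand-ins for the Mixed_Bird .wav files; only the sampling logic is the point:

```python
import random

def sample_fraction(filenames, fraction=0.25, seed=0):
    """Return a reproducible random subset containing `fraction` of the files."""
    rng = random.Random(seed)
    k = max(1, round(len(filenames) * fraction))
    return rng.sample(filenames, k)

# Hypothetical clip names standing in for the Mixed_Bird .wav files
clips = [f"mixed_bird_{i:03d}.wav" for i in range(100)]
subset = sample_fraction(clips, fraction=0.25)
print(len(subset))  # → 25
```

Fixing the seed means the same subset can be fed to all three models, so the comparison is apples-to-apples.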

# Parameters to define isolation behavior
isolation_parameters_micro = {
    "model": "microfaune",
    "technique": "chunk",
    "threshold_type": "median",
    "threshold_const": 4.0,
    "threshold_min": 0.25,
    "window_size": 2.0,
    "chunk_size": 3.0
}

isolation_parameters_birdnet = {
    "model": "birdnet",
    "output_path": "outputs",
    "min_conf": 0.25,
    "filetype": "wav",
    "num_predictions": 1,
    "write_to_csv": True
}

isolation_parameters_tweety = {
    "model": "tweetynet",
    "tweety_output": True,
    "chunk_size": 3.0
}
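For intuition about what the Microfaune parameters mean, here is a minimal sketch of a "chunk" technique with a median threshold. This is not PyHa's actual implementation; the `scores_per_second` frame rate and the exact thresholding rule (median scaled by `threshold_const`, floored at `threshold_min`) are my assumptions based on the parameter names above:

```python
import statistics

def chunk_isolation(scores, scores_per_second, chunk_size=3.0,
                    threshold_const=4.0, threshold_min=0.25):
    # Assumed rule: threshold is the median score scaled by threshold_const,
    # floored at threshold_min (mirrors the parameter names, not PyHa's code).
    threshold = max(statistics.median(scores) * threshold_const, threshold_min)
    frames_per_chunk = int(chunk_size * scores_per_second)
    # Emit one binary label per chunk_size-second chunk of the score array.
    return [
        1 if max(scores[i:i + frames_per_chunk]) >= threshold else 0
        for i in range(0, len(scores), frames_per_chunk)
    ]

# Mostly-quiet scores with one burst of activity in the second 3 s chunk
labels = chunk_isolation([0.05] * 50 + [0.9] * 10, scores_per_second=10)
print(labels)  # → [0, 1]
```

The output format matches the uniform 3 s binary labels used for the ground truth, which is why `chunk_size` is 3.0 in every run here.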

global_statistics results on the full Mixed_Bird dataset:

[three screenshots of the global_statistics output]

isolation_parameters_birdnet = {
    "model": "birdnet",
    "output_path": "outputs",
    "min_conf": 0.5,
    "filetype": "wav",
    "num_predictions": 1,
    "write_to_csv": True
}

isolation_parameters_micro = {
    "model": "microfaune",
    "technique": "chunk",
    "threshold_type": "median",
    "threshold_const": 4.0,
    "threshold_min": 0.5,
    "window_size": 2.0,
    "chunk_size": 3.0
}

isolation_parameters_tweety = {
    "model": "tweetynet",
    "tweety_output": True,
    "chunk_size": 3.0
}
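The only change between the two runs is the confidence threshold, raised from 0.25 to 0.5 for both BirdNET's `min_conf` and Microfaune's `threshold_min`. A tiny helper (generic Python, not part of PyHa) makes such diffs explicit when comparing parameter sets across runs:

```python
def diff_parameters(old, new):
    """Report keys whose values changed between two parameter dicts."""
    return {k: (old[k], new[k]) for k in old if k in new and old[k] != new[k]}

first_run = {"model": "birdnet", "min_conf": 0.25, "filetype": "wav"}
second_run = {"model": "birdnet", "min_conf": 0.5, "filetype": "wav"}
print(diff_parameters(first_run, second_run))  # → {'min_conf': (0.25, 0.5)}
```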

I adapted PyHa to run in a Google Colab notebook, partly for fun and partly to keep my laptop from freezing; I'll clean it up later so the output is nicer.