The FathomNet Model Zoo (FMZ) is a collection of machine learning models trained on FathomNet data and made available for use by the community.
FathomNet is an open-source image database that can be used to train, test, and validate state-of-the-art artificial intelligence algorithms to help us understand our ocean and its inhabitants. In addition to abiding by the FathomNet Data Use Policy, users agree to the following terms:
- Acknowledgements - Anyone using FathomNet data for a publication or project must acknowledge and reference this [forthcoming] publication. If you share your work via a presentation or poster, please include a FathomNet logo on your materials.
- Enrichments - The user shares back with the community by creating how-to videos or workflows that are posted on FathomNet’s Medium or YouTube channels, posting trained models on the FathomNet Model Zoo, contributing training data, and/or providing subject-matter expertise to validate submitted data for the purpose of growing the ecosystem.
- Benevolent Use - The data will only be used in ways that are consistent with the United Nations Sustainable Development Goals.
The FathomNet terms of use extend to the FathomNet Model Zoo unless otherwise indicated.
Object detection models identify and locate objects within an image or video. Minimal loading sketches for the YOLOv5 and RetinaNet model classes in the table follow below.
Model Name | DOI | Model Class | Habitat | Description | Hugging Face |
---|---|---|---|---|---|
MBARI Monterey Bay Benthic | 10.5281/zenodo.5539915 | YOLOv5 | Benthic | This model was trained on 691 classes using 33,667 localized images from MBARI’s Video Annotation and Reference System (VARS). Note: only a subset of the VARS database is uploaded to FathomNet because of institutional concept embargoes. For training, images were split 80/20 train/test. Classes were selected because they are commonly observed concepts (primarily benthic organisms, along with equipment and marine litter or trash) within the Monterey Bay and Submarine Canyon system from 500 to 4000 m deep. Many of these organisms are found throughout the NE Pacific along the continental slope, shelf, and abyssal regions. We used the PyTorch framework and the yolov5 ‘YOLOv5x’ pretrained checkpoint to train for 28 epochs with a batch size of 18 and an image size of 640 pixels. | |
MBARI Monterey Bay Benthic Supercategory | 10.5281/zenodo.5571043 | RetinaNet | Benthic | A RetinaNet model with a ResNet backbone, fine-tuned using the Detectron2 object detection platform to identify 20 benthic supercategories drawn from MBARI's remotely operated vehicle image data collected in Monterey Bay off the coast of Central California. The data are drawn from FathomNet and consist of 32,779 images containing a total of 80,683 localizations. The model was trained on an 85/15 train/validation split at the image level. | |
MBARI Midwater Object Detector | 10.5281/zenodo.5942597 | RetinaNet | Midwater | A fine-tuned RetinaNet model with a ResNet-50 backbone trained to identify 16 midwater classes. The 29,327 training images were collected in Monterey Bay by two imaging systems developed at the Monterey Bay Aquarium Research Institute. The monochrome and 3-channel color images contain a total of 34,071 localizations that were split into 90/10 train/validation sets. The full set of images will be loaded into FathomNet, and a list of persistent URLs will be added to a future version of this repository. | |
AI for the Ocean Fish and Squid Detector | 10.5281/zenodo.7430330 | YOLOv5 | Midwater | A set of nine fine-tuned YOLOv5 models that identify 6 midwater classes. The 5,600 training images were collected in Monterey Bay and the surrounding regions of the coastal eastern Pacific. Training and test data are partitioned into domains to examine the effects of distribution shifts on model performance. Partitions were designed to yield similar numbers of annotations for each focal class in each partition. Detailed information and code can be found in the project repo. | |
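The YOLOv5 entries above were trained with the Ultralytics YOLOv5 codebase, so a released checkpoint can generally be loaded through `torch.hub`. Below is a minimal inference sketch; the weights filename and image path are placeholders, and you would substitute the files from the model's Zenodo record.

```python
import torch

# Load a custom YOLOv5 checkpoint through the Ultralytics torch.hub entry point.
# "mbari-benthic.pt" is a placeholder; use the weights file from the Zenodo record.
model = torch.hub.load("ultralytics/yolov5", "custom", path="mbari-benthic.pt")

model.conf = 0.25  # confidence threshold for reported detections (illustrative)

# Run inference on a single image (a file path, URL, PIL image, or numpy array all work).
results = model("dive_frame.png")

# One row per detection: xmin, ymin, xmax, ymax, confidence, class index, class name.
print(results.pandas().xyxy[0])
```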
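The RetinaNet entries were built with Detectron2, so inference typically follows Detectron2's `DefaultPredictor` pattern. A hedged sketch, assuming the released weights are compatible with the stock `retinanet_R_50_FPN_3x` config; the weights filename, class-count override, and score threshold shown here are illustrative, not values confirmed by the model repositories.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Start from the stock RetinaNet R-50 FPN config and point it at the released weights.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/retinanet_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "model_final.pth"        # placeholder: weights from the Zenodo record
cfg.MODEL.RETINANET.NUM_CLASSES = 20         # match the model: 20 supercategories, or 16 midwater classes
cfg.MODEL.RETINANET.SCORE_THRESH_TEST = 0.3  # illustrative detection threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("dive_frame.png")         # Detectron2's predictor expects a BGR (OpenCV) image
outputs = predictor(image)

# Predicted boxes, scores, and class indices for the frame.
instances = outputs["instances"].to("cpu")
print(instances.pred_boxes, instances.scores, instances.pred_classes)
```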