Unfortunately, we can no longer provide support for this repo. Hopefully, it should still work, but if it doesn't, we cannot really help.
Check our collection of public projects 🎁, where you can find multiple Kaggle competitions with code, experiments, and outputs.
A poster that summarizes our project is available here.
Open solution to the CrowdAI Mapping Challenge competition.
- Check live preview of our work on public projects page: Mapping Challenge 📈.
- Source code and issues are publicly available.
- Average Precision 🚀: 0.943
- Average Recall 🚀: 0.954
No cherry-picking here, I promise 😉. The results exceeded our expectations. The output from the network is so good that not much morphological post-processing is needed. Happy days :)
Average Precision and Average Recall were calculated on stage 1 data using pycocotools. Check this blog post for an explanation of average precision.
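If you want to compute these metrics yourself, pycocotools does the heavy lifting. A minimal sketch (the file names below are placeholders, not the competition files):

```python
# Minimal sketch of computing AP / AR with pycocotools (hypothetical file names).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotation.json")              # ground truth in COCO format
coco_dt = coco_gt.loadRes("predictions.json")  # predicted segmentations

coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP / AR at several IoU thresholds and object sizes
```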
In this open source solution you will find references to neptune.ai. It is a platform, free for community users, which we use daily to keep track of our experiments. Please note that using neptune.ai is not necessary to proceed with this solution. You may run it as a plain Python script 😉.
Check REPRODUCE_RESULTS
- Overlay binary masks for each image are produced (code 💻).
- Distances to the two closest objects are calculated, creating the distance map that is used for weighting (code 💻; a sketch of such a weight map follows this list).
- Size masks for each image are produced (code 💻).
- Small masks on the edges are dropped (code 💻).
- We load training and validation data in batches: using torch.utils.data.Dataset and torch.utils.data.DataLoader makes it easy and clean (code 💻; a minimal example also follows this list).
- Only some basic augmentations (due to speed constraints) from the imgaug package are applied to images (code 💻).
- Image is resized before feeding it to the network. Surprisingly this worked better than cropping (code 💻 and config 📑).
- Ground truth masks are prepared by first eroding each mask to create non-overlapping masks, and only after that calculating the distances (code 💻).
- Dilated small objects to increase the signal (code 💻).
- Network is fed with random crops (code 💻 and config 📑).
- Ground truth masks for overlapping contours (DSB-2018 winners approach).
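To illustrate the distance-based weighting mentioned above, here is a minimal sketch in the spirit of the U-Net paper's weight map. It is not the repository's exact implementation; w0 and sigma are illustrative hyperparameters:

```python
# Sketch of a distance weight map: for every pixel, look at the distances to the two
# nearest objects and boost pixels squeezed between buildings. Illustrative only.
import numpy as np
from scipy.ndimage import distance_transform_edt


def build_distance_weights(instance_masks, w0=10.0, sigma=5.0):
    """instance_masks: list (len >= 2) of binary HxW numpy arrays, one mask per building."""
    # Distance from every pixel to each object (zero inside that object).
    distances = np.stack([distance_transform_edt(1 - mask) for mask in instance_masks])
    distances.sort(axis=0)
    d1, d2 = distances[0], distances[1]  # distances to the two closest objects
    # Pixels lying between two nearby buildings get the largest weights.
    return 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
```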
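And a minimal, self-contained example of the torch.utils.data.Dataset / DataLoader pattern mentioned above; ImageDataset and the dummy data are illustrative, not classes from this repository:

```python
# Illustrative Dataset/DataLoader pattern, not the repository's loader.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class ImageDataset(Dataset):
    def __init__(self, images, masks):
        self.images = images  # list of HxWx3 uint8 arrays
        self.masks = masks    # list of HxW integer masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        image = torch.from_numpy(self.images[index]).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(self.masks[index]).long()
        return image, mask


# Dummy data just to make the example runnable.
images = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(16)]
masks = [np.zeros((256, 256), dtype=np.int64) for _ in range(16)]
loader = DataLoader(ImageDataset(images, masks), batch_size=8, shuffle=True)
```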
- Unet with Resnet34, Resnet101 and Resnet152 as an encoder where Resnet101 gave us the best results. This approach is explained in the TernausNetV2 paper (our code 💻 and config 📑). Also take a look at our parametrizable implementation of the U-Net.
- Network architecture based on dilated convolutions described in this paper.
- Unet with contextual blocks explained in this paper.
- Distance weighted cross entropy explained in the famous U-Net paper (our code 💻 and config 📑).
- Using a linear combination of soft dice and distance weighted cross entropy (code 💻 and config 📑); a sketch of such a combined loss follows the weight description below.
- Adding a component weighted by building size (smaller buildings have greater weight) to the weighted cross entropy, which penalizes misclassification of pixels belonging to small objects (code 💻).
For both weight maps: the darker the color, the higher the value.
- distance weights: high values correspond to pixels between buildings.
- size weights: high values denote small buildings (the smaller the building, the darker the color). Note that no-building is fixed to black.
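As a rough sketch of the loss combination described above (not the repository's exact implementation), a pixel-weighted cross entropy, where the weights hold the distance/size maps, can be mixed with soft dice like this:

```python
# Illustrative combined loss: weighted cross entropy + dice_weight * soft dice.
import torch
import torch.nn.functional as F


def soft_dice_loss(probs, targets, eps=1e-7):
    # probs, targets: (N, H, W) tensors in [0, 1] for the building class
    intersection = (probs * targets).sum(dim=(1, 2))
    union = probs.sum(dim=(1, 2)) + targets.sum(dim=(1, 2))
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()


def weighted_cross_entropy(logits, targets, weights):
    # logits: (N, C, H, W); targets: (N, H, W) class indices; weights: (N, H, W) per-pixel weights
    per_pixel = F.cross_entropy(logits, targets, reduction="none")
    return (per_pixel * weights).mean()


def combined_loss(logits, targets, weights, dice_weight=0.5):
    building_probs = torch.softmax(logits, dim=1)[:, 1]  # probability of the building class
    return weighted_cross_entropy(logits, targets, weights) + \
        dice_weight * soft_dice_loss(building_probs, targets.float())
```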
- Use pretrained models!
- Our multistage training procedure:
  - train on a 50000 examples subset of the dataset with lr=0.0001 and dice_weight=0.5
  - train on a full dataset with lr=0.0001 and dice_weight=0.5
  - train with smaller lr=0.00001 and dice_weight=0.5
  - increase dice weight to dice_weight=5.0 to make results smoother
- Multi-GPU training
- Use very simple augmentations
The entire configuration can be tweaked from the config file 📑.
- Set different learning rates to different layers (a sketch using optimizer parameter groups follows this list).
- Use cyclic optimizers.
- Use warm start optimizers.
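For the per-layer learning rates idea, PyTorch optimizer parameter groups would be the natural tool. A toy sketch (the encoder/decoder split below is illustrative, not this repo's API):

```python
import torch
import torch.nn as nn

# Toy stand-in for a U-Net with a pretrained encoder (illustrative module names only).
model = nn.ModuleDict({
    "encoder": nn.Conv2d(3, 16, kernel_size=3, padding=1),
    "decoder": nn.Conv2d(16, 1, kernel_size=3, padding=1),
})

# Parameter groups: the pretrained encoder gets a smaller learning rate
# than the freshly initialized decoder.
optimizer = torch.optim.Adam([
    {"params": model["encoder"].parameters(), "lr": 1e-5},
    {"params": model["decoder"].parameters(), "lr": 1e-4},
])
```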
- Test time augmentation (TTA). Make predictions on image rotations (90, 180, 270 degrees) and flips (up-down, left-right) and take the geometric mean of the predictions (code 💻 and config 📑; a sketch follows this list).
- Simple morphological operations. At the beginning we used erosion followed by labeling and per-label dilation with structuring elements chosen by cross-validation. As the models got better, erosion was removed and a very small dilation was the only operation still showing improvements (code 💻; a sketch also follows this list).
- Scoring objects. In the beginning we simply used a score of 1.0 for every object, which was a huge mistake. Changing that to the average probability over the object region improved results. What improved scores even more was weighting those probabilities by the object size (code 💻).
- Second level model. We tried LightGBM and Random Forest trained on U-Net outputs and features calculated during postprocessing.
- Test time augmentations by using colors (config 📑).
- Inference on reflection-padded images was not the way to go. What worked better (but not for the very best models) was replication padding, where the border pixel value was replicated for all the padded regions (code 💻).
- Conditional Random Fields. It was so slow that we didn't check it for the best models (code 💻).
- Ensembling
- Recurrent neural networks for postprocessing (instead of our current approach)
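A minimal sketch of the rotation/flip test time augmentation with a geometric mean; the predict callable is a placeholder for a single forward pass, not a function from this repository:

```python
# Illustrative TTA: predict on rotations and flips, undo the transform, take the geometric mean.
import numpy as np


def tta_predict(image, predict):
    """image: HxWxC array; predict: callable returning an HxW probability map."""
    predictions = []
    for k in range(4):  # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k, axes=(0, 1))
        predictions.append(np.rot90(predict(rotated), -k, axes=(0, 1)))  # rotate prediction back
    for axis in (0, 1):  # up-down and left-right flips
        flipped = np.flip(image, axis=axis)
        predictions.append(np.flip(predict(flipped), axis=axis))  # flip prediction back
    stacked = np.clip(np.stack(predictions), 1e-7, 1.0)
    return np.exp(np.log(stacked).mean(axis=0))  # geometric mean over all augmentations
```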
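And a sketch of the simple morphological post-processing, i.e. labeling connected components and applying a very small per-label dilation (parameters are illustrative, not the cross-validated ones):

```python
# Illustrative morphological post-processing: label objects, dilate each one slightly.
import numpy as np
from scipy.ndimage import label, binary_dilation


def postprocess(binary_prediction, dilation_iterations=1):
    """binary_prediction: HxW binary array; returns an HxW labeled instance map."""
    labeled, num_objects = label(binary_prediction)
    output = np.zeros_like(labeled)
    for object_id in range(1, num_objects + 1):
        grown = binary_dilation(labeled == object_id, iterations=dilation_iterations)
        output[grown] = object_id  # later objects may overwrite overlapping dilated pixels
    return output
```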
Model weights for the winning solution are available here
You can use those weights and run the pipeline as explained in REPRODUCE_RESULTS.
There are several ways to seek help:
- crowdai discussion.
- You can submit an issue directly in this repo.
- Join us on Gitter.
- Check CONTRIBUTING for more information.
- Check the issues to see if there is something you would like to contribute to.