- This repo is about implementing the standard UNet for the task of semantic segmentation!
Okay, let's start with:

```bash
git clone https://github.com/manhph2211/Semantic-Segmentation-Pytorch.git
cd Semantic-Segmentation-Pytorch
```

You'll make your own dataset in this task. But first, `cd data`.
- First of all, I used `google_images_download`, which is a tool for downloading images from Google Images. One way to do this is copying the folder `./google_images_download` in this amazing repo to your folder `./data`.
- Then open `create_data.py`; `keywords` and `limit` are up to you! Save and run it to get images in `./download/keywords`. Oh, note that if you want to get more than 100 images, you might need to refer to this.
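For illustration, a `create_data.py` along these lines would be a minimal sketch. It assumes the `google_images_download` package API (`googleimagesdownload().download(...)`); the keyword and limit values below are placeholders, not the repo's actual settings:

```python
# Hypothetical sketch of create_data.py; the keywords here are
# placeholders — pick the classes you want to segment.
arguments = {
    "keywords": "cat,dog",       # comma-separated search terms
    "limit": 50,                 # images per keyword (>100 needs chromedriver)
    "output_directory": "./download",
}

def main():
    # Imported lazily so the config above can be read without the package.
    from google_images_download import google_images_download
    response = google_images_download.googleimagesdownload()
    response.download(arguments)  # saves into ./download/<keyword>/

if __name__ == "__main__":
    main()
```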
- In this task, I used this website to label the downloaded images above and then dump them as annotations. Note that annotations should be saved in `./data`. Then just follow:

```bash
mkdir mask
cd ..
python3 utils.py
```
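`utils.py` itself isn't reproduced here, but the core of turning labelled polygons into mask images is a polygon fill. A dependency-free sketch of that step (the function name and polygon format are my assumptions, not the repo's):

```python
def polygon_to_mask(height, width, polygon):
    """Rasterize one polygon [(x, y), ...] into a binary mask
    (list of rows) with an even-odd scanline fill."""
    mask = [[0] * width for _ in range(height)]
    n = len(polygon)
    for y in range(height):
        cy = y + 0.5  # sample at pixel centres
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            # does this edge cross the current scanline?
            if (y0 <= cy < y1) or (y1 <= cy < y0):
                xs.append(x0 + (cy - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # fill between alternating pairs of crossings
        for j in range(0, len(xs) - 1, 2):
            for x in range(max(0, int(xs[j] + 0.5)),
                           min(width, int(xs[j + 1] + 0.5))):
                mask[y][x] = 1
    return mask

# e.g. a 4x4 square inside an 8x8 image
mask = polygon_to_mask(8, 8, [(2, 2), (6, 2), (6, 6), (2, 6)])
```

In practice you would load the polygons from the exported annotation files and write each mask out as an image into `./data/mask`.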
- One other way to get annotations for our images that I find quite interesting: refer to this.
- torch
- torchvision
- opencv-python
- sklearn
- pycocotools
- matplotlib
- numpy
- pandas
- tqdm
- Just run:

```bash
python3 train.py
python3 test.py
```
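The real architecture lives in the repo's model code; as a rough illustration of the UNet idea it trains (my own minimal one-level sketch, not the repo's implementation), the encoder downsamples, the decoder upsamples, and a skip connection concatenates encoder features back in:

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 conv + BN + ReLU layers — the basic UNet building block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    """One-level UNet: encode, bottleneck, decode with a skip connection."""
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = DoubleConv(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = DoubleConv(32, 16)   # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 3, 64, 64))
```

The full UNet simply stacks more of these encoder/decoder levels, one skip connection per level.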