An end-to-end Computer Vision project focused on Image Segmentation, specifically Semantic Segmentation. Although the project has primarily been built around the LandCover.ai dataset, the template can be used to train a model on any semantic segmentation dataset and to extract inference outputs from the model in a promptable fashion. This is not actual promptable AI; the term is used here because of a specific piece of functionality integrated into the project.
The model can be trained on any or all of the classes present in the semantic segmentation dataset, and the model architecture, optimizer, learning rate, and many other parameters can be customized directly from the config file, giving the project an AutoML-like aspect. At test time, the user can then pass a prompt (the config variable 'test_classes') listing the classes that should be present in the masks predicted by the trained model.
For example, suppose the model has been trained on all 30 classes of the Cityscapes dataset, but at inference time the user only wants the class 'parking' in the predicted mask for a specific use case. The user can set the prompt 'test_classes = ['parking']' in the config file and get the desired output.
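The snippet below illustrates how these knobs and the class prompt might sit together. It is a hedged sketch only; the key names, value formats, and defaults are assumptions rather than the project's actual config schema.

# Illustrative sketch of config values -- names and structure are assumptions,
# not the project's actual config file.
config = {
    "architecture": "Unet",        # model architecture to build
    "encoder": "resnet34",         # backbone used by the architecture
    "optimizer": "Adam",           # optimizer used for training
    "learning_rate": 1e-4,         # initial learning rate
    "train_classes": ["background", "building", "woodland", "water"],
    # the test-time "prompt": only these classes appear in the predicted masks
    "test_classes": ["background", "building", "water"],
}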
1. Training the model on the LandCover.ai dataset with 'train_classes': ['background', 'building', 'woodland', 'water']...
2. Testing the trained model on all the classes it was trained on, i.e. 'test_classes': ['background', 'building', 'woodland', 'water']...
3. Testing the trained model on a selected subset of classes, as per user input, i.e. 'test_classes': ['background', 'building', 'water']... (see the sketch below for the idea behind this selective behaviour)
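Conceptually, the selective behaviour in scenario 3 boils down to keeping only the requested classes in the predicted mask and letting everything else fall back to background. The sketch below is a minimal illustration of that idea; the class-ID mapping and function name are assumptions, not the project's actual test.py logic.

import numpy as np

# Hypothetical mapping from class name to the integer label used in the masks.
CLASS_IDS = {"background": 0, "building": 1, "woodland": 2, "water": 3}

def filter_mask(pred_mask: np.ndarray, test_classes: list[str]) -> np.ndarray:
    """Keep only the requested classes; all other pixels become background."""
    keep = {CLASS_IDS[name] for name in test_classes}
    out = np.zeros_like(pred_mask)             # background everywhere by default
    for class_id in keep:
        out[pred_mask == class_id] = class_id  # copy through only the kept classes
    return out

# e.g. drop 'woodland' from a predicted mask:
# filtered = filter_mask(pred_mask, ["background", "building", "water"])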
- Dataset prerequisite for training:
Before starting to train a model, make sure to download the dataset from LandCover.ai or from Kaggle (LandCover.ai), and copy or move the downloaded 'images' and 'masks' directories into the project's 'train' directory.
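Assuming the 'train' directory sits at the project root (check the repository layout if it differs), the tree should look roughly like this after copying:

Land-Cover-Semantic-Segmentation-PyTorch/
└── train/
    ├── images/   (images downloaded from LandCover.ai)
    └── masks/    (the corresponding segmentation masks)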
First and foremost, make sure that Docker is installed and working properly on the system.
💡 Check the Dockerfile included in the repository. Following the instructions provided in the file, comment and uncomment the indicated lines to set up the Docker image and container for either training or testing the model (one at a time).
- Clone the repository:
git clone https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch.git
- Change to the project directory:
cd Land-Cover-Semantic-Segmentation-PyTorch
- Build the image from the Dockerfile:
docker build -t segment_project_image .
- Run the image in a Docker container:
docker run --name segment_container segment_project_image
- Copy the output files from the container to the local project directory after execution completes:
docker cp segment_container:/segment_project/models .
docker cp segment_container:/segment_project/logs .
docker cp segment_container:/segment_project/output .
- Tidy up:
docker rm segment_container
docker rmi segment_project_image
If Docker is not installed on the system, follow the steps below to set up and run the project without Docker.
- Clone the repository:
git clone https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch.git
- Change to the project directory:
cd Land-Cover-Semantic-Segmentation-PyTorch
- Set up the programming environment to run the project:
- If using the conda package manager (either Anaconda or Miniconda), create a conda environment as follows:
conda create --name <environment-name> python=3.9
conda activate <environment-name>
- If using a plain Python installation, create a virtual environment as follows:
python -m venv <environment-name>
On Windows:
<environment-name>\Scripts\activate
On Linux/macOS:
source <environment-name>/bin/activate
- Install the dependencies:
pip install -r requirements.txt
Run the model training and testing/inferencing scripts from the project directory. Training the model first is not mandatory: a simple trained model is already provided, so the test can be run and the outputs checked before trying to fine-tune the model.
- Run the model training script:
cd src
python train.py
- Run the model testing/inferencing script:
cd src
python test.py
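After testing completes, the trained weights, run logs, and predicted masks should appear under the project's 'models', 'logs', and 'output' directories respectively, mirroring the directories copied out of the container in the Docker-based workflow above.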
@misc{Souvik2023,
Author = {Souvik Majumder},
Title = {Land Cover Semantic Segmentation PyTorch},
Year = {2023},
Publisher = {GitHub},
Journal = {GitHub repository},
Howpublished = {\url{https://github.com/souvikmajumder26/Land-Cover-Semantic-Segmentation-PyTorch}}
}
The project is distributed under the MIT License.
@misc{Iakubovskii:2019,
Author = {Pavel Iakubovskii},
Title = {Segmentation Models Pytorch},
Year = {2019},
Publisher = {GitHub},
Journal = {GitHub repository},
Howpublished = {\url{https://github.com/qubvel/segmentation_models.pytorch}}
}