
Project: Compensated Foreground Object Removal using Multiview Images

Overview

This project manages and processes the image data for Compensated Foreground Object Removal using Multiview Images. It creates the directories needed to store original images, inference data, model outputs, and stitched images.

Setup

To set up the project, run the migrate.py script. This script will ensure that all necessary directories are created.
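From the repository root (assuming a standard Python 3 environment):

    python migrate.py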

Directory Structure

The project automatically creates the following directories if they do not exist:

  • DATA/original_images: Stores the original images.
  • INFERENCE_DATA: Used to store data needed for inference.
  • LABELS/masks_imgs: Contains mask images generated by the model.
  • LABELS/model_outputs: Stores outputs from the model.
  • MODEL_CHECKPOINTS: Used to store model checkpoints during training.
  • STITCHED_IMAGES: Stores images that are stitched together post-processing.
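For reference, the creation logic amounts to a few lines of Python. This is a minimal sketch of what migrate.py does; the actual script may differ in detail:

    import os

    # Directories required by the project (listed above).
    DIRECTORIES = [
        "DATA/original_images",
        "INFERENCE_DATA",
        "LABELS/masks_imgs",
        "LABELS/model_outputs",
        "MODEL_CHECKPOINTS",
        "STITCHED_IMAGES",
    ]

    for path in DIRECTORIES:
        # exist_ok=True makes this a no-op if the directory already exists.
        os.makedirs(path, exist_ok=True)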

Configuration

The paths for these directories are configured in config.py. Ensure that this file is updated if there are any changes to the directory paths.
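For illustration, config.py could expose these paths as module-level constants that the other scripts import. The constant names below are assumptions, not the actual identifiers in the file:

    # config.py -- central place for directory paths (constant names assumed)
    ORIGINAL_IMAGES_DIR = "DATA/original_images"
    INFERENCE_DATA_DIR  = "INFERENCE_DATA"
    MASKS_DIR           = "LABELS/masks_imgs"
    MODEL_OUTPUTS_DIR   = "LABELS/model_outputs"
    CHECKPOINTS_DIR     = "MODEL_CHECKPOINTS"
    STITCHED_IMAGES_DIR = "STITCHED_IMAGES"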

Usage

After setting up the directory structure, place your original images in the DATA/original_images directory, then run the model training and inference steps below.

Here are the key steps (a minimal end-to-end sketch follows the list):

  1. Image Segmentation: open segmentation.ipynb and run the cells to generate segmentation masks for the foreground objects.
  2. Perspective Projection: open perspective_projection.ipynb and run the cells to fill the segmented regions with content from the other views.
  3. Image Regeneration: open regeneration.ipynb and run the cells to inpaint the remaining holes using a deep learning model.
  4. Finalize: open finalizing_images.ipynb and run the cells to upsample the inpainted regions and composite them back into the original images.
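For orientation, here is a minimal sketch of the four stages in plain Python using OpenCV. It is illustrative only: the file names, the ORB/RANSAC homography estimation, and the use of OpenCV's classical Telea inpainting as a stand-in for the project's deep learning model are all assumptions; the notebooks are the authoritative implementation.

    import cv2
    import numpy as np

    # -- 1. Image Segmentation (done in segmentation.ipynb) ----------------
    # Load a precomputed binary mask (255 = foreground object to remove).
    # File names here are hypothetical.
    target = cv2.imread("DATA/original_images/view_0.png")
    other  = cv2.imread("DATA/original_images/view_1.png")
    mask   = cv2.imread("LABELS/masks_imgs/view_0_mask.png",
                        cv2.IMREAD_GRAYSCALE)

    # -- 2. Perspective Projection (perspective_projection.ipynb) ----------
    # Estimate a homography from the other view to the target view with
    # ORB features + RANSAC, then warp and copy pixels into the masked area.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(other, None)
    k2, d2 = orb.detectAndCompute(target, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(other, H, (target.shape[1], target.shape[0]))
    filled = target.copy()
    filled[mask > 0] = warped[mask > 0]

    # -- 3. Image Regeneration (regeneration.ipynb) ------------------------
    # The project inpaints with a deep learning model; as a stand-in, this
    # sketch applies Telea inpainting to masked pixels the warp left black.
    holes = ((mask > 0) & (warped.sum(axis=2) == 0)).astype(np.uint8) * 255
    inpainted = cv2.inpaint(filled, holes, 3, cv2.INPAINT_TELEA)

    # -- 4. Finalize (finalizing_images.ipynb) -----------------------------
    # Composite the regenerated region back into the original image. The
    # notebook also upsamples the inpainted patch; this sketch works at
    # full resolution, so no resize is needed.
    result = target.copy()
    result[mask > 0] = inpainted[mask > 0]
    cv2.imwrite("STITCHED_IMAGES/view_0_result.png", result)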

Contribution

Contributors are welcome to improve the project by submitting pull requests or opening issues for bugs and feature requests.

License

This project is licensed under the GNU General Public License v3.0 (GPL-3.0).