For this project we collected our own dataset by scraping images from the internet and creating the segmentation masks required for the model using an unsupervised segmentation model. The dataset is available at: https://drive.google.com/drive/folders/14wCyJRV5z9f4AVwjEmQYUqAJofcP19qQ?usp=sharing
The structure of the repository is as follows:
- data_utils/web_scrape.py: contains code for web scraping given a link
- data_utils/segmentation: contains the segmentation code for generating segmentation masks for the scraped images
- data_demo.ipynb: Python notebook that shows how data is extracted, segmented and saved
- demo_notebook.ipynb: Python notebook with a sample implementation
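To illustrate the kind of image extraction that data_utils/web_scrape.py performs, here is a minimal, hedged sketch using only the Python standard library. The actual script uses BeautifulSoup and Selenium; this stand-in only shows the core idea of collecting image URLs from a page's HTML (the function and class names here are illustrative, not from the repository).

```python
from html.parser import HTMLParser


class ImageSrcParser(HTMLParser):
    """Collects the src attribute of every <img> tag encountered."""

    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)


def extract_image_urls(html):
    """Return a list of image URLs found in the given HTML string."""
    parser = ImageSrcParser()
    parser.feed(html)
    return parser.srcs


page = '<html><body><img src="a.jpg"><p>text</p><img src="b.png"></body></html>'
print(extract_image_urls(page))  # ['a.jpg', 'b.png']
```

In the real pipeline, Selenium would first render the page (so JavaScript-loaded images appear in the DOM) before the HTML is parsed and each image is downloaded.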
- BeautifulSoup
- PyTorch
- Selenium
- TensorFlow
- NumPy
- Pillow
- SciPy
- PyCUDA (used in smooth local affine, tested on CUDA 8.0)
Download the data.zip file from the Google Drive folder and unzip its contents. This contains the image data and some other files needed to execute the code.
Create a conda environment with Python 3.8.15 and install the dependencies using the requirements.txt file. Then run the script with the following command:
python edit_image.py
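The setup steps above can be sketched as the following shell commands; the environment name is illustrative, and this assumes requirements.txt sits at the repository root.

```shell
# Create and activate a conda environment with the required Python version
# (the environment name "image_edit" is illustrative)
conda create -n image_edit python=3.8.15 -y
conda activate image_edit

# Install the project dependencies
pip install -r requirements.txt

# Run the editing script
python edit_image.py
```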