/test_repo

For testing purposes


Exploring Blind Preprocessing to Improve Viola-Jones

We study the impact of various pre-processing methods on the detection performance of the Viola-Jones detector. The pre-processing must be blind, i.e. applicable in every scenario without scene-specific tuning, while still yielding better detection performance (a minimal sketch follows below).
Explore the docs »

View Demo · Report Bug · Request Feature
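The core idea, as a minimal sketch: apply a blind enhancement step (here CLAHE, one of the methods studied below) and then run OpenCV's stock Viola-Jones cascade. The image path is a placeholder and the parameters are illustrative defaults, not the settings used in the experiments.

    import cv2

    # Any greyscale test image will do; the path is hypothetical.
    img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)

    # Blind pre-processing: CLAHE needs no scene-specific tuning,
    # so it can be applied in every scenario before detection.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)

    # Standard Viola-Jones detector shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(enhanced, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) detected after blind pre-processing")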

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments

About The Project


This project studies how blind pre-processing affects the detection performance of the Viola-Jones face detector across several face and non-face datasets. See the DIP section below for the datasets, the pre-processing methods studied, and the full experiment pipeline.

(back to top)

Built With

(back to top)

Getting Started

This is an example of how you may give instructions on setting up your project locally. To get a local copy up and running, follow these simple example steps.

Prerequisites

This is an example of how to list things you need to use the software and how to install them.

  • npm
    npm install npm@latest -g

Installation

  1. Get a free API Key at https://example.com
  2. Clone the repo
    git clone https://github.com/Bhartendu-Kumar/DIP.git
  3. Install NPM packages
    npm install
  4. Enter your API in config.js
    const API_KEY = 'ENTER YOUR API';

(back to top)

Usage

Use this space to show useful examples of how a project can be used. Additional screenshots, code examples and demos work well in this space. You may also link to more resources.

For more examples, please refer to the Documentation

(back to top)

Roadmap

  • [ ] Feature 1
  • [ ] Feature 2
  • [ ] Feature 3
    • [ ] Nested Feature

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Your Name - @twitter_handle - email@email_client.com

Project Link: https://github.com/Bhartendu-Kumar/DIP

(back to top)

Acknowledgments

(back to top)

DIP

The DIP project itself: datasets, pre-processing methods studied, and the experiment pipeline.

DataSets:

  1. MIT CBCL
  2. BioID
  3. Yale
  4. Caltech
  5. SoF
  6. ORL
  7. Non-face dataset (collected heterogeneously)

Pre-Processing Methods Studied (a sketch of a few of these follows the list):

  • Low-pass filters:
    1. Gaussian blur
    2. Bilateral filter
  • Deconvolutions:
    1. Blind deconvolution (Richardson-Lucy)
    2. TV-Chambolle (total-variation denoising)
    3. BMD (partially implemented, not converging yet)
  • Retinex:
    1. SSR (single-scale Retinex)
    2. MSR (multi-scale Retinex)
    3. Retinex FM (Frankle-McCann)
    4. NMBM
  • Normalization:
    1. HOMO (homomorphic filtering)
  • Histogram manipulation:
    1. CLAHE (contrast-limited adaptive histogram equalization)
    2. HE (histogram equalization)
    3. Log intensity stretch
    4. Full-scale intensity stretch
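As a rough sketch of a few of the methods above, assuming a recent scikit-image (where richardson_lucy takes num_iter) and a float greyscale image in [0, 1]. skimage's Richardson-Lucy is non-blind, so a small Gaussian PSF stands in for the unknown blur kernel; all parameter values here are illustrative, not the experimental settings.

    import numpy as np
    from skimage import exposure, filters, restoration

    def preprocess_variants(img):
        out = {}
        # Low-pass filter: Gaussian blur.
        out["gaussian"] = filters.gaussian(img, sigma=1.0)
        # Richardson-Lucy deconvolution with an assumed Gaussian PSF.
        g = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
        psf = np.outer(g, g)
        psf /= psf.sum()
        out["richardson_lucy"] = restoration.richardson_lucy(img, psf, num_iter=10)
        # Total-variation (Chambolle) denoising.
        out["tv_chambolle"] = restoration.denoise_tv_chambolle(img, weight=0.1)
        # Single-scale Retinex: log image minus log of a heavily smoothed copy.
        out["ssr"] = np.log1p(img) - np.log1p(filters.gaussian(img, sigma=15))
        # Histogram manipulation: CLAHE and plain histogram equalization.
        out["clahe"] = exposure.equalize_adapthist(img, clip_limit=0.02)
        out["he"] = exposure.equalize_hist(img)
        return out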

The pipeline is as follows:

Step 1: Download the datasets (from their owners' sites) and use the scripts in the dataset_helper_scripts directory to convert all images to greyscale and to change uncommon extensions (such as .gif and .pgm) to common ones. Also use these scripts to arrange all images of a dataset in a single folder.
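A hedged stand-in for such a helper script (the repo's actual scripts live in dataset_helper_scripts; the paths and the output extension here are placeholders):

    from pathlib import Path
    from PIL import Image

    def flatten_to_greyscale(src_dir, dst_dir, ext=".png"):
        # Convert every image under src_dir to greyscale and save it,
        # with a common extension, into the single flat folder dst_dir.
        dst = Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        for i, path in enumerate(sorted(Path(src_dir).rglob("*"))):
            if path.suffix.lower() in {".gif", ".pgm", ".jpg", ".png", ".bmp"}:
                Image.open(path).convert("L").save(dst / f"{path.stem}_{i}{ext}")

    flatten_to_greyscale("downloads/BioID", "data/faces/BioID")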

Step 2: Arrange the data directory as shown: it has two subdirectories, faces and non-faces, and each of these contains one sub-directory per dataset, named after the DATASET (see the layout sketch below).

----! Important: for illustration, the "data" folder in this repository already has the desired structure; it just contains only a few images.
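For reference, the intended layout looks like this (the dataset folder names are examples; use the names of the datasets you downloaded):

    data
    ├── faces
    │   ├── MIT_CBCL
    │   ├── BioID
    │   ├── Yale
    │   └── ...
    └── non-faces
        └── heterogeneous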

Step 3: Run all the "driver scripts" in the root directory of this repository. This will create an "output" directory with the results of all pre-processing methods.

Step 4: Use the "csv_scripts" directory to compute the performance metrics from the "output" directory (a hedged sketch of such a computation follows).
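A hypothetical sketch of what this stage computes, assuming the "output" directory holds one folder per pre-processing method, each with faces and non-faces subfolders (that layout, and the file names, are assumptions, not the repo's actual scripts):

    import csv
    from pathlib import Path
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detection_rate(folder):
        # Fraction of images in `folder` with at least one detection.
        hits, total = 0, 0
        for path in Path(folder).glob("*.png"):
            img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
            total += 1
            hits += len(cascade.detectMultiScale(img, 1.1, 5)) > 0
        return hits / max(total, 1)

    with open("analysis/metrics.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["method", "face_detection_rate", "nonface_false_rate"])
        for method_dir in sorted(Path("output").iterdir()):
            writer.writerow([method_dir.name,
                             detection_rate(method_dir / "faces"),
                             detection_rate(method_dir / "non-faces")])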

Step 5: The "analysis" directory has the final performance csv files for each pre-processing method.

Step 6: The "Published report" directory has the final findings in the form of a csv file.