
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Name\n",
    "--------------------------------------------------------------------------------\n",
    "KNN CPU/GPU Component using FAISS"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Labels\n",
    "--------------------------------------------------------------------------------\n",
    "KNN, FAISS, GPU, Python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Summary\n",
    "---------------------------------------------------------------------------\n",
    "\n",
    "### KNN Component using FAISS library\n",
    "\n",
    "Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy\n",
    "\n",
    "This KNN Component is developed with Wrapper using FAISS Python library and uses this [python wrapper API](https://github.com/shankarpm/faiss_knn/blob/master/faiss_knn.py) to train and predict models using FAISS Library. This FAISS Implementation can work in CPU and GPU Instances.\n",
    "\n",
    "Base Image of this FAISS Component uses [plippe/faiss-docker:1.4.0-gpu](https://hub.docker.com/r/plippe/faiss-docker/)\n",
    "\n",
    "More Information on FAISS can be found [here](https://github.com/facebookresearch/faiss)"
   ]
  },
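  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a minimal sketch (not the component's exact code) of the kind of FAISS calls such a wrapper builds on: index the training vectors, then query the k nearest neighbors. The array shapes and k value are illustrative.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import faiss  # pip install faiss-cpu (or faiss-gpu)\n",
    "\n",
    "d = 54  # feature dimension, e.g. the covtype dataset\n",
    "xb = np.random.rand(10000, d).astype('float32')  # vectors to index\n",
    "xq = np.random.rand(5, d).astype('float32')      # query vectors\n",
    "\n",
    "index = faiss.IndexFlatL2(d)   # exact L2 search\n",
    "index.add(xb)                  # add the training vectors\n",
    "distances, neighbors = index.search(xq, 5)  # 5 nearest neighbors per query\n",
    "\n",
    "# On a GPU instance the same index can be moved to device 0:\n",
    "# res = faiss.StandardGpuResources()\n",
    "# gpu_index = faiss.index_cpu_to_gpu(res, 0, index)\n",
    "```"
   ]
  },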
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Intended Use\n",
    "---------------------------------------------------------------------------\n",
    "Anyone who wants to train KNN classification model using FAISS on GPU instance. This model performs relatively better in accuracy and performance on GPU instance than other KNN ML Models like sklearn , sagemaker.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " | Name| Argument | Description |\n",
    "| :---   | :---:    | :---   |\n",
    "\t|**Data**|--data\t|GCS url path of input data in csv format. Optional: False , Default: None , Type: String|\n",
    "    |**Output Data Dir**|--output-data-dir\t|GCS url path for writing logs and output KPI measurements.Optional: False , Default: None , Type: String|\n",
    "    |Training Test Split Ratio|--training-test-split-ratio\t|Split the training and test data sample Optional: True , Default: 0.9 , Type: float|\n",
    "    |Test Mode|--test-mode\t|Get only accuracy and other kpi value during test mode for unit tests mode only - values - {accuracy} Optional: True  , Type: String|\n",
    "    |**K Value**|--k-value\t|postive value for K in computing K nearest neighbors. Optional: False , Default: None , Type: Integer| \n"
   ]
  },
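  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hypothetical argparse setup consistent with the table above; the wrapper's actual argument-parsing code may differ.\n",
    "\n",
    "```python\n",
    "import argparse\n",
    "\n",
    "parser = argparse.ArgumentParser(description='KNN component using FAISS')\n",
    "parser.add_argument('--data', type=str, required=True,\n",
    "                    help='GCS URL of the input CSV data')\n",
    "parser.add_argument('--output-data-dir', type=str, required=True,\n",
    "                    help='GCS URL for logs and KPI output')\n",
    "parser.add_argument('--training-test-split-ratio', type=float, default=0.9,\n",
    "                    help='training/test split ratio')\n",
    "parser.add_argument('--test-mode', type=str, default='',\n",
    "                    help='return only accuracy and other KPI values (unit tests)')\n",
    "parser.add_argument('--k-value', type=int, required=True,\n",
    "                    help='positive K for K nearest neighbors')\n",
    "args = parser.parse_args()\n",
    "```"
   ]
  },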
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Input\n",
    "\n",
    "The component expects an input data location from public Google Storage bucket with a csv format dataset containing feature columns and last column as feature label output.\n",
    "\n",
    "Sample Input is available in Google Storage Bucket(gs://gs-public-test-data/covtype.data_1.gz)\n",
    "\n",
    "Comonent also expects a optional split ratio number to split the input data into training and test data set. The default is 0.9\n"
   ]
  },
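  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of loading and splitting the data as described above, assuming a local copy of the sample file (reading gs:// paths directly would require gcsfs or a prior gsutil cp); the component's actual loading code may differ.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# pandas infers gzip compression from the .gz extension\n",
    "df = pd.read_csv('covtype.data_1.gz', header=None)\n",
    "\n",
    "split_ratio = 0.9  # default --training-test-split-ratio\n",
    "n_train = int(len(df) * split_ratio)\n",
    "\n",
    "# feature columns first, last column is the label\n",
    "X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values\n",
    "X_train, y_train = X[:n_train], y[:n_train]\n",
    "X_test, y_test = X[n_train:], y[n_train:]\n",
    "```"
   ]
  },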
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Output\n",
    "\n",
    "The Component also expects a public Google Storage bucket to save all the checkpoints , logs and KPI metrics of KNN-FAISS component. The Checkpoints and logs are saved in a separate file format (knn-faiss-output-log-20190305-050509.txt) and KPI Metrics are saved in a separate CSV File (kpi_metrics_knn-faiss-log-20190305-050509.csv).\n"
   ]
  },
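  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the timestamped naming convention visible in the filenames above; the component's actual writer code is not shown in this README.\n",
    "\n",
    "```python\n",
    "from datetime import datetime\n",
    "\n",
    "ts = datetime.now().strftime('%Y%m%d-%H%M%S')  # e.g. 20190305-050509\n",
    "log_file = f'knn-faiss-output-log-{ts}.txt'\n",
    "kpi_file = f'kpi_metrics_knn-faiss-log-{ts}.csv'\n",
    "```"
   ]
  },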
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Use Examples\n",
    "\n",
    "Below are minimalistic examples for a data use of the KNN component.\n",
    "\n",
    "### Docker data pipeline Example\n",
    "\n",
    "All pipeline components are Docker containers and can be tested in a local environment.\n",
    "\n",
    "```bash \n",
    "docker run -it gcr.io/ml-workflow1/knn \\\n",
    "    --data=gs://gs-public-test-data/covtype.data_1.gz \\\n",
    "    --k-value=5 \\\n",
    "    --training-test-split-ratio=0.8 \\\n",
    "    --output-data-dir=gs://gs-public-test-data/Logs\n",
    "```\n",
    "\n",
    "### Kubeflow Pipelines Example\n",
    "\n",
    "This code snippet creates a pipeline file. To run the pipeline, you have to upload the file to an instance of Kubeflow Pipelines using its user interface.\n",
    "\n",
    "```python\n",
    "# !pip3 install https://storage.googleapis.com/ml-pipeline/release/0.1.10/kfp.tar.gz --upgrade\n",
    "import kfp\n",
    "import kfp.dsl as dsl\n",
    "import kfp.gcp as gcp\n",
    "from kfp.components import load_component_from_file\n",
    "\n",
    "KnnOp = load_component_from_file('component.yaml')\n",
    "\n",
    "@dsl.pipeline(\n",
    "    name='Knn',\n",
    "    description='Generated')\n",
    "def knn_pipeline(\n",
    "    data=dsl.PipelineParam(name=\"Data\"),\n",
    "    output_data_dir=dsl.PipelineParam(name=\"Output Data Dir\"),\n",
    "    k_value=dsl.PipelineParam(name=\"K Value\"),\n",
    "    training_test_split_ratio=dsl.PipelineParam(name=\"Training Test Split Ratio\", value=\"0.9\"),\n",
    "    test_mode=dsl.PipelineParam(name=\"Test Mode\", value=\"\")):\n",
    "\n",
    "    knn_op = KnnOp(\n",
    "                data=data,\n",
    "        output_data_dir=output_data_dir,\n",
    "        k_value=k_value,\n",
    "        training_test_split_ratio=training_test_split_ratio,\n",
    "        test_mode=test_mode\n",
    "    ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    import kfp.compiler as compiler\n",
    "    compiler.Compiler().compile(knn_pipeline, 'pipeline.tar.gz')\n",
    "```\n",
    "\n",
    "### Example assets:\n",
    "\n",
    "- Full featured [pipeline.py](pipeline.py)\n",
    "- Full featured [pipeline.tar.gz](pipeline.tar.gz)\n",
    "- [component.yaml](component.yaml)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Benchmarks\n",
    "Datasets used in the benchmarks are covtype dataset containing data for a multi-class problem with 54 features and 581k datapoints.It’s a labeled dataset where each entry describes a geographic area, and the label is a type of forest cover. There are seven possible labels, and we aim to solve the multi-class classification problem using FAISS-kNN.\n",
    " \n",
    "The below table shows the benchmarks results for KNN implementation using FAISS , SageMaker and SKLearn."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KNN Comparison with 581k datapoints on Mac PC (8 Core CPU)\n",
    "\n",
    "|. | SkLearn ( K - 5 )| FAISS ( K - 5 ) | SageMaker ( K - 5 ) |\n",
    "|------| ------| --- | --- |\n",
    "|Test DataPoints\t|58k\t|58k\t|58k|\n",
    "|Features\t|54\t|54\t|54|\n",
    "|Accuracy\t|96.90%\t|97.00%\t|93.80%|\n",
    "|Prediction Time (secs)\t|5.73\t|42|\t26|\n",
    "|Training Model Time (in secs)\t|4|\t0.08\t|186|\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KNN Comparison with 2.1million datapoints on Google Compute Eng. ( 1 CPU / 1 GPU Nvidia Tesla K 80)\n",
    "\n",
    "|. | SkLearn ( K - 5 )| FAISS ( K - 5 ) | SageMaker ( K - 5 ) |\n",
    "|------| ------| --- | --- |\n",
    "|Test DataPoints\t|200k\t|200k\t|200k|\n",
    "|Features\t|54\t|54\t|54|\n",
    "|Accuracy\t|99.20%\t|99.00%\t|94.40%|\n",
    "|Prediction Time (secs)\t|28\t|64|\t17.68|\n",
    "|Training Model Time (in secs)\t|30|\t53.99\t|217|\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KNN Comparison with 2.1million datapoints on Google Compute Eng. ( 4 CPU / 4 GPU NVIDIA Tesla P100))\n",
    "\n",
    "|. | SkLearn ( K - 5 )| FAISS ( K - 5 ) | SageMaker ( K - 5 ) |\n",
    "|------| ------| --- | --- |\n",
    "|Test DataPoints\t|200k\t|200k\t|200k|\n",
    "|Features\t|54\t|54\t|54|\n",
    "|Accuracy\t|99.20%\t|99.00%\t|90.80%|\n",
    "|Prediction Time (secs)\t|22\t|7|\t17|\n",
    "|Training Model Time (in secs)\t|30|\t51\t|170|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KNN Comparison with 2.1million datapoints on Google Compute Eng. ( 8 CPU / 8 GPU NVIDIA Tesla K80)\n",
    "\n",
    "|. | SkLearn ( K - 5 )| FAISS ( K - 5 ) | \n",
    "|------| ------| --- | \n",
    "|Test DataPoints\t|200k\t|200k\t|\n",
    "|Features\t|54\t|54\t|\n",
    "|Accuracy\t|99.20%\t|99.00%\t|\n",
    "|Prediction Time (secs)\t|26\t|13|\t\n",
    "|Training Model Time (in secs)\t|28|\t64\t|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### KNN Comparison with 2.1million datapoints on Google Compute Eng. ( 8 CPU / 8 GPU NVIDIA Tesla V100)\n",
    "\n",
    "|. | SkLearn ( K - 5 )| FAISS ( K - 5 ) | SageMaker ( K - 5 ) |\n",
    "|------| ------| --- | --- |\n",
    "|Test DataPoints\t|200k\t|200k\t|200k|\n",
    "|Features\t|54\t|54\t|54|\n",
    "|Accuracy\t|99.20%\t|99.00%\t|92.60%|\n",
    "|Prediction Time (secs)\t|23\t|5|\t12|\n",
    "|Training Model Time (in secs)\t|27|163\t|81|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Notebooks and DataSets:**\n",
    "- Notebook for producing the results [Sagemaker](notebooks/KNN-SageMaker.ipynb),      [https://github.com/shankarpm/faiss_knn/blob/master/KNN-SageMaker.ipynb]\n",
    "- CovType Datasets can be downloaded here ['https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz']\n",
    "\n",
    "**Resources Used:**\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n",
    "1 CPU / 1 GPU Nvidia Tesla K 80\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n",
    "4 CPU / 4 GPU NVIDIA Tesla P100\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n",
    "8 CPU / 8 GPU NVIDIA Tesla K80\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n",
    "8 CPU / 8 GPU NVIDIA Tesla V100\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n",
    "Mac PC - CPU - Intel 8-core - i7\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Analysis and Conclusion: \n",
    "**Accuracy:**  \n",
    "* Accuracy was tested with the same toy dataset with 54 features for all 3 models.With Respect to Accuracy using Inference Time, FAISS and Sklearn always gives the best with very close to each other with more than 99% .For 200k and 2.1 million data points.  \n",
    "    For both the volumes , Accuracy almost remains the same for FAISS and sklearn. FAISS and SKLearn accuracy was around 5-10% better compared to Sagemaker in low and high volumes of data with the same value of KNN parameter ‘K’.  \n",
    "    It is interesting that all these 3 models use different default distance metric to calculate nearest neighbors like sklearn uses Minkowski distance , Not sure If Sagemaker uses cosine distance(although FAISS index can be used) , and FAISS using IndexIVFFlat index.Accuracy remains the same independent of multi-core computing(CPU  or GPU) for all 3 models. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Model Training Time :**  \n",
    "* CPU:  \n",
    "    Based on the benchmark results from 3 Models , We find training time is proportional to the datapoints size. Sklearn is exceptionally fast when tested on CPU compared to FAISS and Sagemaker.\n",
    "For 500k datapoints on CPU, SKlearn takes 4 secs , with FAISS 40 secs and sagemaker 186 seconds.  \n",
    "* GPU:  \n",
    "    Sklearn doesn't utilitze the GPU model with any number of instances unlike FAISS and Sagemaker.\n",
    "FAISS performed 3-4 times faster than Sagemaker on 1 GPU and 4 GPU instances and performed 20% faster on 8 GPU instances with 2.1 million datapoints.\n",
    "P100 Model( 4 GPU) performed the best among all Tesla Models with respect to FAISS model training time of 51 secs."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Inference Time:**\n",
    "* Based on the benchmark results from 3 Models , We find Inference time for FAISS improves significantly from CPU to GPU like 7 minutes to 1 minute for 200k test data points.Similar to Training time, Sklearn doesn't utilize the GPU model with any number of instances unlike FAISS and Sagemaker.  \n",
    "FAISS showed good response from 64 seconds to 7 seconds with 1 GPU to 4 GPU respectively with 200k test data-points.Sagemaker didn't show much significance change with test on multiple GPUS(1,4,8).  \n",
    "Looks FAISS is the clear winner here too.For FAISS , P100 model with 4 GPU gave better results(from 13 seconds to 7 seconds) than K80 with 8 GPU.And V100 model performed the best compared to K80 in 8 GPU model from 13 seconds to 5 seconds."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Conclusion:** \n",
    "* Based on the above benchmark results ,Looks like FAISS is clear winner in all KPI's.  SKlearn is better compared to Sagemaker in accuracy terms but doesn't work in GPU models. So sklearn may not a good candidate for big data sets even though the accuracy is good.SKlearn may be better model for small datasets in CPU.FAISS beats Sagemaker in all areas very significantly. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Interesting Find:**\n",
    "* Training on FAISS model on the first time for a new dataset takes longer time but running on the same dataset subsequently becomes very fast. Not sure if it caches the index somewhere.  For example with 200k test datapoints on 8 core GPU,First time it takes 72 seconds to train the model. Ran the code again with the same parameters , it took almost 18 seconds to train the model.Its almost 4 times faster on the 2nd time.       "
   ]
  }
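  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch, using plain `time.perf_counter` around an IndexIVFFlat train/add (the index type named above), of how the first-run vs second-run difference could be measured; the dataset and parameters are illustrative, not the component's actual benchmark code.\n",
    "\n",
    "```python\n",
    "import time\n",
    "import numpy as np\n",
    "import faiss\n",
    "\n",
    "d, n, nlist = 54, 200000, 100\n",
    "xb = np.random.rand(n, d).astype('float32')\n",
    "\n",
    "for run in (1, 2):\n",
    "    quantizer = faiss.IndexFlatL2(d)\n",
    "    index = faiss.IndexIVFFlat(quantizer, d, nlist)\n",
    "    t0 = time.perf_counter()\n",
    "    index.train(xb)  # IVF indexes must be trained before adding vectors\n",
    "    index.add(xb)\n",
    "    print(f'run {run}: trained in {time.perf_counter() - t0:.2f}s')\n",
    "```"
   ]
  }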
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "toc-autonumbering": false,
  "toc-showcode": true,
  "toc-showmarkdowntxt": true
 },
 "nbformat": 4,
 "nbformat_minor": 2
}