
Feature Selection for Apache Spark

Different feature selection methods (three filters and two selectors based on scores from embedded methods) are provided as Spark MLlib PipelineStages. These are:

Filters:

  1. CorrelationSelector: calculates the correlation ("spearman" or "pearson", adjustable via .setCorrelationType) between each feature and the label (see the sketch after this list).
  2. GiniSelector: measures the Gini impurity difference between before and after the value of a feature is known.
  3. InfoGainSelector: measures the information gain of a feature with respect to the class.
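
As a rough illustration of the correlation score (computed, per the formulas section below, through org.apache.spark.mllib.stat), the snippet below calls Statistics.corr directly on one feature column and the label. The values are made up and this is not the selector's internal code.

import org.apache.spark.mllib.stat.Statistics

// One feature column and the label as RDD[Double] (toy values).
val featureValues = spark.sparkContext.parallelize(Seq(0.0, 0.0, 1.0))
val labelValues   = spark.sparkContext.parallelize(Seq(1.0, 0.0, 0.0))

// "spearman" or "pearson", as selectable via .setCorrelationType.
val score = Statistics.corr(featureValues, labelValues, "spearman")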

Embedded:

  1. ImportanceSelector: takes feature importances from any embedded method, e.g. a Random Forest model.
  2. LRSelector: takes the feature weights from an (L1-regularized) logistic regression. The weights form a matrix W with dimensions #labels x #features; the absolute value of every entry is taken, the entries are summed column-wise, and the resulting scores are scaled by their maximum value (see the sketch below).
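
For intuition, the following sketch illustrates that aggregation on a hypothetical 2 x 3 coefficient matrix (2 labels, 3 features). It only shows the arithmetic and is not the selector's actual implementation.

import org.apache.spark.ml.linalg.Matrices

// Hypothetical L1 logistic-regression coefficients, column-major:
// 2 labels (rows) x 3 features (columns).
val w = Matrices.dense(2, 3, Array(0.5, -0.2, 0.0, 1.5, -0.3, 0.1))

// |W|, summed column-wise -> one raw score per feature.
val rawScores = (0 until w.numCols).map { j =>
  (0 until w.numRows).map(i => math.abs(w(i, j))).sum
}

// Scale by the maximum so the highest-scoring feature gets 1.0.
val scores = rawScores.map(_ / rawScores.max)   // Vector(0.466..., 1.0, 0.266...)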

Util

  1. VectorMerger: takes several vector columns (e.g. the results of different feature selection methods) and merges them into one vector column. Unlike the VectorAssembler, VectorMerger uses the metadata of the vector columns to remove duplicates. It supports two modes (the first is sketched below):
    • useFeaturesCol true and featuresCol set: the output column contains those columns from featuresCol (matched by name) whose names appear in one of the inputCols. Use this if feature importances were calculated on, e.g., discretized columns, but the selection shall use the original values.
    • useFeaturesCol false: the output column contains the columns from the inputCols, with duplicates dropped.
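
A minimal sketch of the first mode, assuming the usual Spark Param setter names (setUseFeaturesCol, setFeaturesCol; check the source for the exact names) and made-up column names:

import org.apache.spark.ml.feature.selection.util.VectorMerger

// Selections were computed on discretized ("binned") copies of the features,
// but the merged output should carry the original values.
val origMerger = new VectorMerger()
  .setInputCols(Array("igSelectedBinned", "corrSelectedBinned")) // selected, discretized columns
  .setUseFeaturesCol(true)     // assumed setter for the useFeaturesCol param
  .setFeaturesCol("features")  // original, undiscretized feature vector
  .setOutputCol("selectedOriginalFeatures")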

Formulas for metrics:

General:

  • P(x_j) - prior probability of feature X having value x_j
  • P(c_i | x_j) - conditional probability that a sample is of class c_i, given that feature X has value x_j
  • P(c_i) - prior probability that the label Y has value c_i

  1. Correlation: calculated through org.apache.spark.mllib.stat

  2. Gini: Gini(X) = \sum_j P(x_j) \sum_i P(c_i | x_j)^2 - \sum_i P(c_i)^2

  3. Information gain: IG(X) = -\sum_i P(c_i) \log_2 P(c_i) + \sum_j P(x_j) \sum_i P(c_i | x_j) \log_2 P(c_i | x_j)
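
As a plain-Scala sanity check of the Gini and information-gain formulas (toy values only, not the selectors' implementation):

// Toy samples: (feature value x, label c).
val samples = Seq((0.0, 1.0), (0.0, 0.0), (1.0, 0.0), (1.0, 0.0))

// Empirical probability of each distinct value in a sequence.
def prob[T](xs: Seq[T]): Map[T, Double] =
  xs.groupBy(identity).map { case (k, v) => k -> v.size.toDouble / xs.size }

val pX = prob(samples.map(_._1))   // P(x_j)
val pC = prob(samples.map(_._2))   // P(c_i)
val pCgivenX = samples.groupBy(_._1).map { case (x, s) => x -> prob(s.map(_._2)) } // P(c_i | x_j)

def log2(v: Double) = math.log(v) / math.log(2)

// Gini(X) = sum_j P(x_j) sum_i P(c_i | x_j)^2 - sum_i P(c_i)^2
val gini = pX.map { case (x, px) => px * pCgivenX(x).values.map(p => p * p).sum }.sum -
  pC.values.map(p => p * p).sum

// IG(X) = -sum_i P(c_i) log2 P(c_i) + sum_j P(x_j) sum_i P(c_i | x_j) log2 P(c_i | x_j)
val infoGain = -pC.values.map(p => p * log2(p)).sum +
  pX.map { case (x, px) => px * pCgivenX(x).values.map(p => p * log2(p)).sum }.sum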

Usage

All selection methods share a common API, similar to ChiSqSelector.

import org.apache.spark.ml.feature.selection.filter._
import org.apache.spark.ml.feature.selection.util._
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.Pipeline
import spark.implicits._ // needed by createDataset below; already in scope in spark-shell

val data = Seq(
  (Vectors.dense(0.0, 0.0, 18.0, 1.0), 1.0),
  (Vectors.dense(0.0, 1.0, 12.0, 0.0), 0.0),
  (Vectors.dense(1.0, 0.0, 15.0, 0.1), 0.0)
)

val df = spark.createDataset(data).toDF("features", "label")
  
val igSel = new InfoGainSelector()
             .setFeaturesCol("features")
             .setLabelCol("label")
             .setOutputCol("igSelectedFeatures")
             .setSelectorType("percentile")
           
val corSel = new CorrelationSelector()
               .setFeaturesCol("features")
               .setLabelCol("label")
               .setOutputCol("corrSelectedFeatures")
               .setSelectorType("percentile")
           
val giniSel = new GiniSelector()           
                .setFeaturesCol("features")
                .setLabelCol("label")
                .setOutputCol("giniSelectedFeatures")
                .setSelectorType("percentile")

val merger = new VectorMerger()
              .setInputCols(Array("igSelectedFeatures", "corrSelectedFeatures", "giniSelectedFeatures"))
              .setOutputCol("filtered")
              
val plm = new Pipeline().setStages(Array(igSel, corSel, giniSel, merger)).fit(df)

plm.transform(df).select("filtered").show()
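
The selectors above all keep the default percentile of features. Since the API is modeled after ChiSqSelector, a fixed number of features can presumably be requested instead; the setters below (setSelectorType("numTopFeatures"), setNumTopFeatures) are assumed by analogy to ChiSqSelector, so check the source for the exact Param names.

// Assumed variant: keep only the two highest-scoring features.
val igTop2 = new InfoGainSelector()
               .setFeaturesCol("features")
               .setLabelCol("label")
               .setOutputCol("igTop2Features")
               .setSelectorType("numTopFeatures")  // assumed, analogous to ChiSqSelector
               .setNumTopFeatures(2)               // assumed setter

igTop2.fit(df).transform(df).select("igTop2Features").show()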