Adaboost

An AdaBoost implementation in Java, using Naive Bayesian and decision tree classifiers as weak learners.

We implement two different types of classifier (a sketch of how AdaBoost combines them follows this list):

  • Naive Bayesian Classifier (NBC)
  • Decision Tree Classifier (DTC)
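
The repository's actual classes are not shown in this README, so what follows is only a minimal sketch of the AdaBoost training loop around interchangeable weak classifiers; every name in it (WeakClassifier, Stump, AdaBoostSketch) is an illustrative assumption, and a one-feature decision stump stands in for the project's NBC and DTC learners.

    import java.util.Arrays;

    // Illustrative sketch only; every class and method name here is an assumption,
    // not the project's actual API. Labels are assumed to be +1 / -1.
    interface WeakClassifier {
        void train(double[][] x, int[] y, double[] weights);
        int predict(double[] sample);
    }

    // A one-feature decision stump standing in for the project's NBC / DTC learners.
    class Stump implements WeakClassifier {
        private int feature;
        private double threshold;
        private int polarity = 1;

        public void train(double[][] x, int[] y, double[] w) {
            double bestError = Double.MAX_VALUE;
            for (int f = 0; f < x[0].length; f++) {          // try every feature...
                for (double[] row : x) {                     // ...every observed threshold...
                    for (int p : new int[]{1, -1}) {         // ...and both polarities
                        double err = 0.0;
                        for (int i = 0; i < x.length; i++) {
                            int pred = (x[i][f] >= row[f] ? 1 : -1) * p;
                            if (pred != y[i]) err += w[i];   // weighted error
                        }
                        if (err < bestError) {
                            bestError = err;
                            feature = f; threshold = row[f]; polarity = p;
                        }
                    }
                }
            }
        }

        public int predict(double[] s) {
            return (s[feature] >= threshold ? 1 : -1) * polarity;
        }
    }

    public class AdaBoostSketch {
        public static void main(String[] args) {
            // Toy data set: one feature, positive class above 0.5.
            double[][] x = {{0.1}, {0.2}, {0.3}, {0.6}, {0.7}, {0.9}};
            int[] y = {-1, -1, -1, 1, 1, 1};
            int rounds = 5;

            double[] w = new double[x.length];
            Arrays.fill(w, 1.0 / x.length);                  // start with uniform weights

            WeakClassifier[] learners = new WeakClassifier[rounds];
            double[] alpha = new double[rounds];

            for (int t = 0; t < rounds; t++) {
                learners[t] = new Stump();
                learners[t].train(x, y, w);

                double error = 0.0;                          // weighted training error
                for (int i = 0; i < x.length; i++)
                    if (learners[t].predict(x[i]) != y[i]) error += w[i];
                error = Math.min(Math.max(error, 1e-10), 1.0 - 1e-10);  // keep the log finite

                alpha[t] = 0.5 * Math.log((1.0 - error) / error);

                double sum = 0.0;                            // boost the misclassified samples
                for (int i = 0; i < x.length; i++) {
                    w[i] *= Math.exp(-alpha[t] * y[i] * learners[t].predict(x[i]));
                    sum += w[i];
                }
                for (int i = 0; i < x.length; i++) w[i] /= sum;  // renormalise
            }

            // Final hypothesis: sign of the alpha-weighted vote.
            double[] query = {0.8};
            double score = 0.0;
            for (int t = 0; t < rounds; t++) score += alpha[t] * learners[t].predict(query);
            System.out.println("Prediction for 0.8: " + (score >= 0 ? 1 : -1));
        }
    }

In the real project the stump would presumably be replaced by the NBC and DTC learners listed above; the alpha-weighted vote at the end is the standard AdaBoost combination rule.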

Usage

  • -f The name of the data file
  • -m The maximum number of different values per feature
  • -n The number of NBCs and DTCs
  • -p The percentage of the data set to be used for training
  • -d The maximum depth for the DTCs [Default: 0, which means the maximum possible depth]
  • -k Keep losers

For example, to work with the page-blocks dataset using 10 NBCs and 10 DTCs (with a max depth of 2), splitting the data set so that 80% is used for training and the remaining 20% for testing, pass the following parameters:

-f datasets/page-blocks.txt -m 50 -n 10 10 -p 80 -d 2
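
The README does not show the program's entry point, so as a rough sketch only, the flags above could be read with a plain loop over String[] args. Everything below (the Options class, its field names, the assumption that -n takes two integers and -k takes none) is hypothetical, not the project's actual parser.

    // Hypothetical options holder; the real project's parser may differ entirely.
    public class Options {
        String dataFile;              // -f
        int maxValuesPerFeature;      // -m
        int numNbc, numDtc;           // -n (assumed: NBC count, then DTC count)
        int trainPercentage;          // -p
        int maxDepth = 0;             // -d (0 = maximum possible depth)
        boolean keepLosers = false;   // -k

        static Options parse(String[] args) {
            Options o = new Options();
            for (int i = 0; i < args.length; i++) {
                switch (args[i]) {
                    case "-f": o.dataFile = args[++i]; break;
                    case "-m": o.maxValuesPerFeature = Integer.parseInt(args[++i]); break;
                    case "-n": o.numNbc = Integer.parseInt(args[++i]);
                               o.numDtc = Integer.parseInt(args[++i]); break;
                    case "-p": o.trainPercentage = Integer.parseInt(args[++i]); break;
                    case "-d": o.maxDepth = Integer.parseInt(args[++i]); break;
                    case "-k": o.keepLosers = true; break;
                    default: throw new IllegalArgumentException("Unknown flag: " + args[i]);
                }
            }
            return o;
        }
    }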

Good choices for parameter m

Recommended values per dataset:

  • "page-blocks": m = 75
  • "glass": m = 15
  • "pen-digits": m = 100
  • "yeast": m = 100
  • "optdigits": m = 16
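
The README does not explain how m is used internally. A common approach, assumed here purely for illustration, is to discretize each continuous feature into at most m equal-width bins so that the Naive Bayesian classifier can work with discrete values:

    // Illustrative assumption: m caps the number of discrete bins per feature.
    public class Discretizer {

        // Maps each value of one feature column to a bin index in [0, m).
        static int[] discretize(double[] column, int m) {
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
            for (double v : column) {                        // find the feature's range
                if (v < min) min = v;
                if (v > max) max = v;
            }
            double width = (max - min) / m;                  // equal-width bins
            int[] bins = new int[column.length];
            for (int i = 0; i < column.length; i++) {
                if (width == 0) { bins[i] = 0; continue; }   // constant feature
                int b = (int) ((column[i] - min) / width);
                bins[i] = Math.min(b, m - 1);                // the maximum lands in the last bin
            }
            return bins;
        }
    }

Under that assumption, a larger m gives finer-grained features at the cost of more conditional probabilities for the NBC to estimate.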