Quick, straightforward, and easy-to-use Go implementations of the Perceptron and Averaged Perceptron binary classifiers.
While in your project directory:

```
$ git clone https://github.com/haydenhigg/percy
```

Then, import it as:

```go
import "./percy"
```
- `Regularize(arr []float64) []float64`: Regularizes the L2 norm of `arr` to 1.
- `RegularizeAll(mat [][]float64) [][]float64`: Regularizes the L2 norm of every row of `mat` to 1.
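
As a quick illustration (a fragment that assumes `percy` and `fmt` are imported as in the example below, with made-up toy vectors), regularizing scales a vector so its L2 norm becomes 1:

```go
// A toy vector whose L2 norm is 5 (a 3-4-5 triangle).
x := []float64{3, 4}
fmt.Println(percy.Regularize(x)) // expected: [0.6 0.8], which has L2 norm 1

// RegularizeAll applies the same scaling to every row of a matrix.
rows := [][]float64{{3, 4}, {0, 2}}
fmt.Println(percy.RegularizeAll(rows)) // expected: [[0.6 0.8] [0 1]]
```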
- `NewModel(weights []float64, bias float64) Model`: Creates a model with input `weights` and an input `bias`.
- `Train(inputs [][]float64, outputs []float64, iters int, alpha float64) Model`: Trains a Perceptron classifier on the training `inputs` matrix and `outputs` vector for `iters` iterations with a learning rate of `alpha`. Returns the final model (see below).
- `TrainFromModel(init Model, inputs [][]float64, outputs []float64, iters int, alpha float64) Model`: The same as `Train`, but initializes the model to `init` rather than a model with weights as a zero-vector and a bias of zero.
- `TrainAveraged(inputs [][]float64, outputs []float64, iters int, alpha float64) Model`: The same as `Train`, but trains an Averaged Perceptron classifier instead.
- `TrainAveragedFromModel(init Model, inputs [][]float64, outputs []float64, iters int, alpha float64) Model`: The same as `TrainFromModel`, but trains an Averaged Perceptron classifier instead.
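
For example (a sketch with toy data, assuming `percy` and `fmt` are imported as in the full example below):

```go
// Toy training data: two features per example, labels in {-1, 1}.
X := percy.RegularizeAll([][]float64{{1, 2}, {2, 1}, {-1, -2}, {-2, -1}})
y := []float64{1, 1, -1, -1}

// Standard Perceptron, then continue training from the result (a warm start).
m := percy.Train(X, y, 100, 0.01)
m = percy.TrainFromModel(m, X, y, 100, 0.01)

// Averaged Perceptron, started from hand-picked weights and bias via NewModel.
a := percy.TrainAveragedFromModel(percy.NewModel([]float64{0.1, -0.1}, 0), X, y, 200, 0.01)

fmt.Println(m, a)
```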
- `(mdl Model) Predict(x []float64) float64`: Returns the predicted output (which will be in {-1, 1}) for the model `mdl` and the input vector `x`.
- `(mdl Model) RawPredict(x []float64) float64`: Returns the output before being binarized to {-1, 1}.
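
To illustrate the difference (a fragment; `trainedModel` stands in for any trained or constructed `Model`):

```go
v := percy.Regularize([]float64{1.5, -0.5})

fmt.Println(trainedModel.RawPredict(v)) // the raw score: any real number
fmt.Println(trainedModel.Predict(v))    // that score binarized to -1 or 1
```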
The `Model` is just a struct containing the fields `Weights` (`[]float64`) and `Bias` (`float64`).
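
In other words, it is presumably equivalent to the sketch below, so a trained model's learned parameters can be read directly (e.g. `trainedModel.Weights` and `trainedModel.Bias` for the `trainedModel` in the example that follows):

```go
type Model struct {
	Weights []float64
	Bias    float64
}
```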
```go
package main

import (
	"fmt"

	"./percy"
)

func main() {
	inputs := [][]float64{[]float64{...}, []float64{...}, ...}
	outputs := []float64{1, -1, ...}

	iters := 200
	learningRate := 0.01

	trainedModel := percy.Train(percy.RegularizeAll(inputs), outputs, iters, learningRate)

	fmt.Println(trainedModel.Predict(percy.Regularize([]float64{...})))
}
```
- Assumptions are not checked by this implementation. For example, if each vector of the `inputs` matrix does not have the same length, this will fail; if `inputs` and `outputs` are different lengths, this will fail; if `learningRate` is a negative number, the algorithm will not converge; etc.
- Though not necessary, it may be helpful to:
  - shuffle the data before training, especially when using the standard Perceptron (a sketch of this follows below)
  - regularize the norms of all training inputs and of all inputs to be predicted (see `Regularize` and `RegularizeAll`)
  - initialize weights to small random values rather than 0s (also sketched below)
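
Here is a sketch of the first and third suggestions (shuffling inputs and outputs in unison, and initializing weights to small random values). The shuffling and initialization code is not part of percy, just plain Go using `math/rand`, and the data is made up:

```go
package main

import (
	"fmt"
	"math/rand"

	"./percy"
)

func main() {
	inputs := [][]float64{{1, 2}, {2, 1}, {-1, -2}, {-2, -1}} // toy data
	outputs := []float64{1, 1, -1, -1}

	// Shuffle inputs and outputs with the same permutation so pairs stay aligned.
	rand.Shuffle(len(inputs), func(i, j int) {
		inputs[i], inputs[j] = inputs[j], inputs[i]
		outputs[i], outputs[j] = outputs[j], outputs[i]
	})

	// Initialize weights to small random values rather than zeros.
	weights := make([]float64, len(inputs[0]))
	for i := range weights {
		weights[i] = (rand.Float64() - 0.5) * 0.01
	}

	model := percy.TrainFromModel(percy.NewModel(weights, 0), percy.RegularizeAll(inputs), outputs, 200, 0.01)
	fmt.Println(model.Weights, model.Bias)
}
```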