sophryu99/TIL

Neural Network: Multi-layer Perceptron (MLP)



An MLP is a class of feedforward artificial neural network (ANN).

  • Composed of multiple layers of perceptrons.
  • Consists of at least three layers of nodes: an input layer, a hidden layer, and an output layer.
  • Except for the input nodes, each node is a neuron that uses a nonlinear activation function.
  • Without the nonlinear activation functions, the network collapses into a linear model (with a sigmoid output, it is equivalent to logistic regression).
  • Trained with a supervised learning technique called back-propagation #29
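As a concrete illustration (a minimal NumPy sketch, not from the original note; the layer widths and random weights are made up), here is a forward pass through a three-layer MLP, plus a check that dropping the nonlinearity collapses the two weight layers into a single linear map:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Nonlinear activation used at the hidden layer
    return np.maximum(0.0, z)

# Made-up widths: 4 input nodes -> 5 hidden nodes -> 3 output nodes
W1 = rng.standard_normal((4, 5))
b1 = np.zeros(5)
W2 = rng.standard_normal((5, 3))
b2 = np.zeros(3)

def mlp_forward(x):
    # Hidden layer: affine transform followed by the nonlinearity
    h = relu(x @ W1 + b1)
    # Output layer: affine transform (output activation depends on the task)
    return h @ W2 + b2

x = rng.standard_normal((2, 4))   # batch of 2 examples
print(mlp_forward(x).shape)       # (2, 3)

# Without the nonlinearity, the two layers are equivalent to one
# linear map: (x W1 + b1) W2 + b2 == x (W1 W2) + (b1 W2 + b2)
no_activation = (x @ W1 + b1) @ W2 + b2
single_linear = x @ (W1 @ W2) + (b1 @ W2 + b2)
print(np.allclose(no_activation, single_linear))  # True
```

The `allclose` check is the point of the last bullet: stacking layers only adds expressive power when a nonlinear activation sits between them.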

Types of Layers in MLP

  • Input Layer: Input variables, sometimes called the visible layer.
  • Hidden Layers: Layers of nodes between the input and output layers. There may be one or more of these layers.
  • Output Layer: A layer of nodes that produce the output variables.

Terminologies to describe the shape and capability of a neural network

  • Size: The number of nodes in the model.
  • Width: The number of nodes in a specific layer.
  • Depth: The number of layers in a neural network.
  • Capacity: The type or structure of functions that can be learned by a network configuration. Sometimes called “representational capacity”.
  • Architecture: The specific arrangement of the layers and nodes in the network.
  • Batch Size: The number of instances processed in each back-propagation run.
  • Epoch: One full pass through the whole training dataset.
  • Stopping criteria: Training stops when the change in error falls below a small threshold epsilon.
  • Rules of thumb in network design:
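The terminology above (size, width, depth, batch size, epoch, stopping criterion) can be tied together in a short sketch (hypothetical toy data and hyperparameters; a single-hidden-layer regression MLP trained with hand-written back-propagation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up toy regression data: 64 examples, 3 features, 1 target
X = rng.standard_normal((64, 3))
y = X.sum(axis=1, keepdims=True) ** 2

layer_sizes = [3, 8, 1]
depth = len(layer_sizes)        # 3 layers: input, hidden, output
size = sum(layer_sizes)         # 12 nodes in the whole model
hidden_width = layer_sizes[1]   # 8 nodes in the hidden layer

W1 = rng.standard_normal((3, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

batch_size = 16     # instances per back-propagation run
lr = 0.01
epsilon = 1e-6      # stopping criterion on the change in error
prev_loss = np.inf

for epoch in range(500):                    # one epoch = one pass over X
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        # forward pass
        h = np.maximum(0.0, xb @ W1 + b1)   # ReLU hidden layer
        pred = h @ W2 + b2
        # backward pass (gradients of the mean-squared error)
        grad_pred = 2.0 * (pred - yb) / len(xb)
        grad_W2 = h.T @ grad_pred
        grad_b2 = grad_pred.sum(axis=0)
        grad_h = grad_pred @ W2.T
        grad_h[h <= 0] = 0.0                # ReLU gradient
        grad_W1 = xb.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
    # full-dataset error after this epoch
    h = np.maximum(0.0, X @ W1 + b1)
    loss = float(np.mean((h @ W2 + b2 - y) ** 2))
    if abs(prev_loss - loss) < epsilon:     # error changed less than epsilon
        break
    prev_loss = loss

print(depth, size, hidden_width, round(loss, 4))
```

Each inner iteration is one back-propagation run over `batch_size` instances, each outer iteration is one epoch, and the loop exits once the epoch-to-epoch change in error drops below `epsilon`.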