Last updated: 01/12/2021
Table : Comparative Analysis of Classification Accuracy
Activation function | Loss function | Optimizer | Learning rate | Learning rate decay | Epochs | Total loss | Training time | Final accuracy |
---|---|---|---|---|---|---|---|---|
ReLU | MSE | Gradient descent | 0.01 | ×0.96 every 50 epochs | 100 | 0.001 | 2h | 98% |
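The "×0.96 every 50 epochs" decay in the table is a step schedule. A minimal pure-Python sketch (the function name and defaults are illustrative, mirroring the table's values):

```python
# Step decay matching the table: start at 0.01, multiply by 0.96
# every 50 epochs. Defaults mirror the table's configuration.
def step_decay(epoch, initial_lr=0.01, drop_every=50, factor=0.96):
    """Return the learning rate for a given (0-indexed) epoch."""
    return initial_lr * (factor ** (epoch // drop_every))

# Epochs 0-49 use 0.01; from epoch 50 the rate drops to 0.01 * 0.96.
```

In Keras this kind of schedule is typically passed to `tf.keras.callbacks.LearningRateScheduler`.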
Table : Procedure
Ordering | Step 1 | Step 2 | Step 3 |
---|---|---|---|
1 | layer | normalization | activation function |
2 | normalization | activation function | convolution layer |
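The two rows differ only in where normalization and activation sit relative to the layer. A toy sketch of the two orderings, using plain functions as stand-ins for the real layers (all names and numbers are illustrative placeholders, not actual network ops):

```python
# Stand-ins for the three building blocks in the table.
def layer(x):       return 2 * x       # placeholder for a dense/conv layer
def normalize(x):   return x - 1       # placeholder for normalization
def activation(x):  return max(0, x)   # ReLU as the activation

# Ordering 1: layer -> normalization -> activation function
post = activation(normalize(layer(3)))   # 2*3 = 6, 6-1 = 5, max(0,5) = 5

# Ordering 2: normalization -> activation function -> convolution layer
pre = layer(activation(normalize(3)))    # 3-1 = 2, max(0,2) = 2, 2*2 = 4
```

The point of the sketch: the same three operations compose to different functions depending on their order.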
Table : Data Augmentation
Method | notation | code |
---|---|---|
Random erase | ||
Cutout | ||
MixUp | ||
CutMix | ||
Style transfer GAN | ||
Mosaic | ||
Random Cropping | | tf.keras.layers.experimental.preprocessing.RandomCrop(), tf.image.random_crop() |
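Cutout and random erasing both zero out (or replace) a patch of the input image. A minimal numpy sketch of the square-patch variant — the function name and signature are illustrative, not the Keras/TF APIs listed in the table:

```python
import numpy as np

# Cutout / random-erase sketch: zero a size x size patch at a random
# location fully inside the image. Works on (H, W) or (H, W, C) arrays.
def cutout(image, size, rng=None):
    """Return a copy of `image` with one square patch set to zero."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    y = rng.integers(0, h - size + 1)   # top-left corner, in-bounds
    x = rng.integers(0, w - size + 1)
    out = image.copy()
    out[y:y + size, x:x + size] = 0
    return out
```

Random erasing differs mainly in sampling the patch's aspect ratio and fill value instead of always zeroing a fixed square.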
Table : Activation, Loss Function, and Optimizer Equations
activation function | activation equation | loss function | loss equation | Optimizer | Optimizer notation |
---|---|---|---|---|---|
Sigmoid | σ(x) = 1 / (1 + e^(−x)) | | | | |
ReLU | f(x) = max(0, x) | MSE | L = (1/n) Σᵢ (yᵢ − ŷᵢ)² | Gradient descent | W ← W − η ∇L |
Leaky ReLU | f(x) = max(αx, x), small α (e.g. 0.01) | | | | |
ELU | f(x) = x if x > 0, else α(eˣ − 1) | | | | |
tanh | f(x) = (eˣ − e⁻ˣ) / (eˣ + e⁻ˣ) | | | | |
Maxout | f(x) = max(w₁ᵀx + b₁, w₂ᵀx + b₂) | | | | |
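The standard definitions behind the table's activation, loss, and optimizer columns can be written out directly in numpy (α defaults below are common conventions, not values from the table):

```python
import numpy as np

# Standard activation functions from the table.
def sigmoid(x):            return 1.0 / (1.0 + np.exp(-x))
def relu(x):               return np.maximum(0.0, x)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
def elu(x, a=1.0):         return np.where(x > 0, x, a * (np.exp(x) - 1))

def mse(y_true, y_pred):
    """Mean squared error: (1/n) * sum((y - y_hat)^2)."""
    return np.mean((y_true - y_pred) ** 2)

def gradient_descent_step(w, grad, lr=0.01):
    """One vanilla gradient-descent update: w <- w - lr * grad."""
    return w - lr * grad
```

These are reference implementations for checking intuition; in practice the framework's built-in versions are used.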
Table : Weight Initialization
Weight Initialization | notation | code |
---|---|---|
Random Normal | W ~ N(mean, stddev²) | tf.keras.initializers.RandomNormal() |
Xavier (= Glorot) | uniform limit = √(6 / (fan_in + fan_out)) | tf.keras.initializers.glorot_uniform() |
He (for ReLU) | uniform limit = √(6 / fan_in) | tf.keras.initializers.he_uniform() |
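A numpy sketch of the uniform variants of Xavier/Glorot and He initialization — draws from U(−limit, limit) with the limits given above (function names are ours, not the TF API):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Xavier/Glorot uniform: limit = sqrt(6 / (fan_in + fan_out))."""
    rng = np.random.default_rng() if rng is None else rng
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def he_uniform(fan_in, fan_out, rng=None):
    """He uniform (suited to ReLU): limit = sqrt(6 / fan_in)."""
    rng = np.random.default_rng() if rng is None else rng
    limit = np.sqrt(6.0 / fan_in)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
```

He's larger limit compensates for ReLU zeroing half of its inputs, keeping activation variance roughly constant across layers.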
Table : Regularization
Regularization | notation | code |
---|---|---|
Dropout | randomly turns nodes off during training | tf.keras.layers.Dropout(rate) (0.0 < rate < 1.0) |
GaussianDropout | multiplicative Gaussian noise, stddev = sqrt(rate / (1 − rate)) | tf.keras.layers.GaussianDropout(rate) |
DropBlock (for CNN) | drops contiguous regions of a feature map | |
Spatial Dropout (for CNN) | drops entire feature maps | tf.keras.layers.SpatialDropout2D(rate) |
L1 Regularization | adds λ Σ\|w\| to the loss | tf.keras.regularizers.l1() |
L2 Regularization | adds λ Σw² to the loss | tf.keras.regularizers.l2() |
Early stopping | stops training when the monitored metric stops improving | tf.keras.callbacks.EarlyStopping() |
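The Dropout row can be sketched in a few lines of numpy. This is the inverted-dropout formulation (the one Keras uses): each unit is zeroed with probability `rate`, and survivors are scaled by 1/(1 − rate) so the expected activation is unchanged. Names are illustrative:

```python
import numpy as np

def dropout(x, rate, rng=None, training=True):
    """Inverted dropout: zero units with prob `rate`, rescale the rest."""
    if not training or rate == 0.0:
        return x                                 # identity at inference time
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(x.shape) >= rate           # mask of surviving units
    return x * keep / (1.0 - rate)               # rescale to preserve the mean
```

The rescaling is why no extra correction is needed at inference: the layer simply becomes the identity.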
Table : Normalization - Internal Covariate Shift Solution