Pinned Repositories
Antenna-Arrays
Produces radiation patterns for Broadside, Endfire, Binomial, and Dolph-Chebyshev antenna arrays
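As a sketch of the underlying formula (not the repository's own code, which likely differs), the normalized array factor of a uniform linear array can be computed in a few lines of NumPy. The function name and the defaults (8 elements, half-wavelength spacing) are illustrative assumptions; beta = 0 gives the broadside pattern, beta = -2*pi*d the endfire pattern:

```python
import numpy as np

def array_factor(theta, n=8, d=0.5, beta=0.0):
    """Normalized array factor of an n-element uniform linear array.

    theta : angle from the array axis (radians)
    d     : element spacing in wavelengths
    beta  : progressive phase shift (0 = broadside, -2*pi*d = endfire)
    """
    psi = 2 * np.pi * d * np.cos(theta) + beta
    num = np.sin(n * psi / 2)
    den = n * np.sin(psi / 2)
    # At psi = 0 both numerator and denominator vanish; the limit is 1.
    den_safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    return np.abs(np.where(np.abs(den) < 1e-12, 1.0, num / den_safe))

theta = np.linspace(0, np.pi, 181)
af = array_factor(theta)  # broadside: main lobe at theta = 90 degrees
```

Plotting `af` against `theta` (or `10*np.log10` of it on a polar axis) reproduces the familiar lobed radiation pattern.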
Antenna-Arrays-in-3D
Produces radiation patterns for Broadside, Endfire, and Binomial antenna arrays in three dimensions
AutotunePIDcontroller
Birthday-Cake
This code bakes a birthday cake for someone special whom you want to surprise on their birthday
UFLDL_CNN
UFLDL_LinearDecoders
A linear decoder is an autoencoder with a sigmoid (or tanh) hidden layer and a linear output layer. If PCA whitening is used, the input is no longer constrained to [0, 1], so a linear decoder is the natural choice
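A minimal sketch of the idea (illustrative NumPy, not the repository's code): the hidden layer keeps its sigmoid, but the reconstruction is a plain affine map, so the output is not confined to [0, 1]:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_decoder_forward(x, W1, b1, W2, b2):
    """Forward pass of an autoencoder with a linear output layer."""
    a1 = sigmoid(W1 @ x + b1)   # hidden activations, squashed into (0, 1)
    a2 = W2 @ a1 + b2           # linear decoder: no output nonlinearity
    return a2
```

Because `a2` is unbounded, the reconstruction error can be measured directly against whitened (possibly negative) inputs.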
UFLDL_SelfTaughtLearning
In self-taught learning and unsupervised feature learning, we give our algorithms a large amount of unlabeled data from which to learn a good feature representation of the input.
UFLDL_SoftmaxRegression
This model generalizes logistic regression to classification problems where the class label y can take on more than two possible values.
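The prediction step of that generalization can be sketched in a few lines of NumPy (an illustrative helper, not the repository's code); with k = 2 classes it reduces to logistic regression:

```python
import numpy as np

def softmax_predict(Theta, X):
    """Class probabilities under softmax regression.

    Theta : (k, n) weight matrix, one row per class
    X     : (m, n) data matrix, one row per example
    """
    scores = X @ Theta.T                         # (m, k) class scores
    scores -= scores.max(axis=1, keepdims=True)  # shift for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=1, keepdims=True)  # each row sums to 1
```

Subtracting the row-wise maximum before exponentiating leaves the probabilities unchanged but prevents overflow for large scores.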
UFLDL_SparseAutoencoder
An unsupervised learning algorithm that applies backpropagation with the target values set equal to the inputs. By imposing a sparsity constraint on the hidden units, the autoencoder discovers interesting structure in the data even when the number of hidden units is large.
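The sparsity constraint is typically enforced with a KL-divergence penalty that pushes the average activation of each hidden unit toward a small target value. A minimal NumPy sketch (illustrative, not the repository's code; rho = 0.05 is a commonly used target):

```python
import numpy as np

def kl_sparsity_penalty(rho_hat, rho=0.05):
    """KL-divergence sparsity penalty for a sparse autoencoder.

    rho_hat : average activations of the hidden units over the data
    rho     : target sparsity (desired average activation)
    """
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
```

The penalty is zero when every unit's average activation equals the target and grows as activations drift away, so adding it (scaled by a weight) to the reconstruction cost drives most hidden units toward being inactive.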
UFLDL_StackedAE
A stacked autoencoder is a neural network consisting of multiple layers of sparse autoencoders in which the outputs of each layer are wired to the inputs of the successive layer.
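Once each layer's encoder has been trained, the wiring described above is just a chain of encoder applications. An illustrative NumPy sketch (not the repository's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stacked_forward(x, stack):
    """Feed input x through a stack of trained encoder layers.

    stack : list of (W, b) pairs; each layer's output becomes the
            next layer's input, as in a stacked autoencoder.
    """
    a = x
    for W, b in stack:
        a = sigmoid(W @ a + b)
    return a
```

The final activation vector is the learned feature representation, which can then feed a classifier (e.g. softmax) before end-to-end fine-tuning.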
amolgm's Repositories
amolgm/UFLDL_SparseAutoencoder
An unsupervised learning algorithm that applies backpropagation with the target values set equal to the inputs. By imposing a sparsity constraint on the hidden units, the autoencoder discovers interesting structure in the data even when the number of hidden units is large.
amolgm/Antenna-Arrays
Produces radiation patterns for Broadside, Endfire, Binomial, and Dolph-Chebyshev antenna arrays
amolgm/UFLDL_SoftmaxRegression
This model generalizes logistic regression to classification problems where the class label y can take on more than two possible values.
amolgm/UFLDL_CNN
amolgm/UFLDL_SelfTaughtLearning
In self-taught learning and unsupervised feature learning, we give our algorithms a large amount of unlabeled data from which to learn a good feature representation of the input.
amolgm/Antenna-Arrays-in-3D
Produces radiation patterns for Broadside, Endfire, and Binomial antenna arrays in three dimensions
amolgm/AutotunePIDcontroller
amolgm/UFLDL_StackedAE
A stacked autoencoder is a neural network consisting of multiple layers of sparse autoencoders in which the outputs of each layer are wired to the inputs of the successive layer.
amolgm/Birthday-Cake
This code bakes a birthday cake for someone special whom you want to surprise on their birthday
amolgm/UFLDL_LinearDecoders
A linear decoder is an autoencoder with a sigmoid (or tanh) hidden layer and a linear output layer. If PCA whitening is used, the input is no longer constrained to [0, 1], so a linear decoder is the natural choice
amolgm/UFLDL_PCA-2D
Implementation of PCA, PCA whitening and ZCA whitening in 2D
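The whitening steps can be sketched compactly in NumPy (an illustrative version, not the repository's code; `eps` is a small regularizer to avoid dividing by near-zero eigenvalues):

```python
import numpy as np

def whiten(X, eps=1e-5):
    """PCA- and ZCA-whiten data matrix X (one example per column)."""
    X = X - X.mean(axis=1, keepdims=True)     # zero-mean each feature
    sigma = X @ X.T / X.shape[1]              # sample covariance matrix
    U, S, _ = np.linalg.svd(sigma)            # eigenvectors U, eigenvalues S
    X_pca = np.diag(1.0 / np.sqrt(S + eps)) @ U.T @ X  # rotate and rescale
    X_zca = U @ X_pca                          # rotate back: ZCA whitening
    return X_pca, X_zca
```

Both outputs have (approximately) identity covariance; ZCA additionally rotates back into the original coordinates, so whitened image patches still look like image patches.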
amolgm/Basic-Phasor-Plot
This code shows how to draw a basic phasor diagram and work with plot windows
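The core of such a plot is converting each phasor's magnitude and angle into rectangular components. A minimal NumPy sketch (illustrative helper, not the repository's code):

```python
import numpy as np

def phasor(magnitude, angle_deg):
    """Return the rectangular (x, y) components of a phasor."""
    z = magnitude * np.exp(1j * np.radians(angle_deg))
    return z.real, z.imag

# Each (x, y) pair can then be drawn as an arrow from the origin,
# e.g. with matplotlib's plt.arrow on an equal-aspect axis.
x, y = phasor(230.0, 30.0)  # a 230 V phasor at 30 degrees
```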
amolgm/Error-Correcting-Algorithms
It consists of Viterbi, trellis, and cyclic codecs
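As a small taste of the cyclic-code side (an illustrative NumPy example, not the repository's code), here is syndrome decoding for the (7,4) Hamming code, a classic single-error-correcting code equivalent to a cyclic code, shown in systematic form:

```python
import numpy as np

# Generator matrix G = [I | P] and parity-check matrix H = [P^T | I]
# of the (7,4) Hamming code; G @ H.T = 0 (mod 2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword."""
    return (msg @ G) % 2

def decode(codeword):
    """Syndrome decoding: correct up to one flipped bit, return the message."""
    syndrome = (H @ codeword) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        err = np.where((H.T == syndrome).all(axis=1))[0][0]
        codeword = codeword.copy()
        codeword[err] ^= 1
    return codeword[:4]
```

Viterbi decoding of trellis (convolutional) codes follows the same verify-and-correct spirit but searches over paths in a state trellis rather than matching a syndrome.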
amolgm/ICS-43434-breakout-board
Sample code for the ICS-43434 breakout board and the ESP32
amolgm/intelligent-ebook
This is C code for my ARM7-core-based intelligent ebook.
amolgm/myWhatsAppStickerApp
amolgm/SLSBDU-System-Identification
amolgm/t81_558_deep_learning
Washington University (in St. Louis) Course T81-558: Applications of Deep Neural Networks
amolgm/UFLDL_PCA-and-ZCA-Whitening
Implementation of PCA and ZCA whitening, and applying them to image patches taken from natural images.