This repository contains the code and documentation for the Neural Networks course homework assignments. Each folder corresponds to a specific paper implementation. Below is a list of the homeworks with a brief description of each.
- Part 1: Implemented and trained Adaline and Madaline networks on a moon-shaped dataset.
- Part 2: Implemented a deep autoencoder to reduce data dimensionality before clustering with the k-means algorithm.
  This project implements the methodology described in DAC: Deep Autoencoder-based Clustering, a General Deep Learning Framework of Representation Learning by Si Lu and Ruisi Li.
- Part 3: Distilled knowledge from a teacher network to a student network by feeding the teacher's saved logits to the student.
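The logit-based distillation above can be sketched as follows, assuming the standard temperature-softened softmax and a cross-entropy on the soft targets (function names and the temperature value are illustrative, not the assignment's exact code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened targets (built from
    saved logits) and the student's softened predictions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

A student that matches the teacher's logits exactly attains the minimum of this loss (the entropy of the teacher's soft targets); in practice the soft-target term is combined with the ordinary hard-label loss.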
- Part 1: Conducted a comparative analysis between the AlexNet and VGGNet architectures for an eight-class emotion classification system, incorporating techniques such as data augmentation and fine-tuning to enhance performance.
  This project implements the methodology described in CNN-based Facial Affect Analysis on Mobile Devices by Charlie Hewitt and Hatice Gunes.
- Part 2: Developed a CNN model for COVID-19 disease detection based on X-ray image classification.
  This project implements the methodology described in An Efficient CNN Model for COVID-19 Disease Detection Based on X-Ray Image Classification by Aijaz Ahmad Reshi and Furqan Rustam.
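As a minimal illustration of one augmentation family used above, here are flips of an image stored as a nested list of pixel values (an illustrative helper, not the project's actual preprocessing pipeline):

```python
def hflip(image):
    """Horizontally flip an image given as a list of pixel rows."""
    return [list(reversed(row)) for row in image]

def vflip(image):
    """Vertically flip an image given as a list of pixel rows."""
    return list(reversed([list(row) for row in image]))

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # [[3, 2, 1], [6, 5, 4]]
print(vflip(img))  # [[4, 5, 6], [1, 2, 3]]
```

Applying such label-preserving transforms at training time enlarges the effective dataset without collecting new images.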
- Part 1: Fine-tuned the SAM model for semantic segmentation of waterbody satellite images, freezing the image encoder and prompt encoder.
  This project implements the methodology described in Segment Anything by Alexander Kirillov and Eric Mintun.
- Part 2: Implemented the Faster R-CNN model to detect fire and draw bounding boxes.
  This project implements the methodology described in Analysis of Object Detection Performance Based on Faster R-CNN by Wenze Li.
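Detectors like the Faster R-CNN above are commonly evaluated by the intersection-over-union between predicted and ground-truth boxes. A minimal sketch, assuming the common `(x1, y1, x2, y2)` corner convention for boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857...
```

The same quantity also drives non-maximum suppression, where highly overlapping predictions are discarded in favor of the highest-scoring one.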
- Part 1: Predicted the future value of a stock using different architectures, such as RNN, LSTM, GRU, and Conv-LSTM.
  This project implements the methodology described in The Performance of LSTM and BiLSTM in Forecasting Time Series by Sima Siami-Namini and Neda Tavakoli.
- Part 2: Implemented a CNN + two-layer LSTM model for predicting suicidal ideation from Twitter data.
  This project implements the methodology described in Stacked CNN-LSTM Approach for Prediction of Suicidal Ideation on Social Media by Bhavini Priyamvada and Shruti Singhal.
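Both parts above feed sequence models, and the usual supervised setup turns a series into fixed-length input windows paired with a next-step target. A sketch (the window length here is an arbitrary choice for illustration):

```python
def make_windows(series, window=3):
    """Split a 1-D series into (input window, next-value target) pairs,
    the standard framing for RNN/LSTM/GRU one-step forecasting."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

prices = [10, 11, 12, 13, 14]
print(make_windows(prices))  # [([10, 11, 12], 13), ([11, 12, 13], 14)]
```

Each window becomes one training sample; at inference time the most recent window predicts the next value.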
- Part 1: Implemented HuBERT, a self-supervised model, for speech emotion recognition.
  This project implements the methodology described in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu and Benjamin Bolte.
- Part 2: Fine-tuned the large pre-trained transformer model BERT and analyzed the impact of techniques such as layer freezing and pruning.
  This project implements the methodology described in Are Sixteen Heads Really Better than One? by Paul Michel and Omer Levy, and What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning by Jaejun Lee and Raphael Tang.
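The cited paper prunes whole attention heads by an importance score; as a sketch of the general idea only, here is pruning in its simplest unstructured, magnitude-based form on a flat weight list (not the head-level criterion used above):

```python
def magnitude_prune(weights, fraction=0.5):
    """Zero out the given fraction of weights with the smallest magnitude,
    keeping the list length (and thus the layer shape) unchanged."""
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:int(len(weights) * fraction)]:
        pruned[i] = 0.0
    return pruned

print(magnitude_prune([0.9, -0.1, 0.4, -0.7], fraction=0.5))
# [0.9, 0.0, 0.0, -0.7]
```

Freezing is the complementary knob: instead of zeroing weights, selected layers are simply excluded from the gradient update during fine-tuning.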
- Part 1: Implemented the ControlVAE model.
  This project implements the methodology described in ControlVAE: Controllable Variational Autoencoder by Huajie Shao and Shuochao Yao.
- Part 2: Implemented GAN, Wasserstein GAN, and self-supervised GAN models on the MNIST dataset.
  This project implements the methodology described in Wasserstein GAN by Martin Arjovsky and Soumith Chintala, and Self-Supervised GANs via Auxiliary Rotation Loss by Ting Chen and Xiaohua Zhai.
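The key change in the Wasserstein GAN above is the objective: the critic scores samples rather than classifying them, and is trained to score real data above generated data. A minimal sketch of the two losses (critic scores here are arbitrary stand-in numbers):

```python
def critic_loss(real_scores, fake_scores):
    """WGAN critic loss: maximize E[D(real)] - E[D(fake)],
    implemented as minimizing its negation."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(fake_scores) - mean(real_scores)

def generator_loss(fake_scores):
    """WGAN generator loss: maximize E[D(fake)]."""
    return -sum(fake_scores) / len(fake_scores)

# A critic scoring real samples above fakes yields a negative loss.
print(critic_loss([1.0, 2.0], [-1.0, 0.0]))  # -2.0
```

In the paper, the critic's weights are additionally clipped to a small range after each update to enforce the Lipschitz constraint the Wasserstein estimate relies on.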
- Part 1: Fine-tuned the RoBERTa model using Low-Rank Adaptation (LoRA).
  This project implements the methodology described in LoRA: Low-Rank Adaptation of Large Language Models by Edward J. Hu and Yelong Shen.
- Part 2: Implemented a CNN model for credit card fraud detection.
  This project implements the methodology described in A Convolutional Neural Network Model for Credit Card Fraud Detection by Muhammad Liman Gambo and Anazida Zainal.
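The low-rank update at the heart of LoRA above can be sketched in a few lines: the frozen weight W is augmented with a trainable rank-r product B·A, scaled by alpha/r, so only the small factors A and B are updated during fine-tuning (pure-Python matrices; the shapes and values are illustrative):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_weight(W, A, B, alpha=1.0):
    """Effective weight W + (alpha / r) * B @ A, where r is the LoRA rank
    (the number of rows of A). W stays frozen; only A and B are trained."""
    r = len(A)
    delta = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 weight
A = [[1.0, 1.0]]               # rank-1 factors: A is r x in
B = [[0.5], [0.5]]             # B is out x r
print(lora_weight(W, A, B))    # [[1.5, 0.5], [0.5, 1.5]]
```

Because B is initialized to zero in the paper, training starts exactly from the pre-trained weights, and the update adds far fewer trainable parameters than full fine-tuning.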