
# AI-Projects - My Portfolio

All of the projects in this repository relate to the field of artificial intelligence, ranging from basic machine learning models to deep learning projects.

## Deep Learning Projects

  1. MNIST Classification Project Using ANN & CNN - 99.23% Accuracy
  2. MNIST Classification Using Dilated Conv2D with 2 Inputs and 1 Output - 99.31% Accuracy
  3. Food Vision Project with Transfer Learning Using ResNetV2-50 & EfficientNetB0
  4. Food Vision Project Without Transfer Learning Multiclass
  5. Food Vision Project With Feature Extraction & Fine Tuning Using EfficientNetB0
  6. Pizza Steak Classification
  7. ACLImdb Movie Review Sentiment Analysis Using Conv1D & Bidirectional LSTMs
  8. Cat Vs Dog Classification
  9. Disaster Tweets Sentiment Classification
  10. PubMed 200k RCT: Sequential Sentence Classification in Medical Abstracts
  11. Time Series Forecasting of Bitcoin Prices Using ConvNet, LSTM & N-BEATS
  12. Fashion MNIST Classification Using ANN
  13. Used Car Price Prediction (KaggleX Skill Dataset) - Used LightGBM, XGBoost, and a DNN
  14. Reconstructing MNIST Using Vanilla Autoencoders
  15. Denoising Autoencoders On MNIST dataset
  16. Colorization using Autoencoders on CIFAR10 dataset
  17. Implementation of Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, a paper by Alec Radford, Luke Metz, and Soumith Chintala
  18. Implementation Of Info GAN
  19. Implementation of Least Squares GAN
  20. Implementation of Wasserstein GAN

## Big Projects

  1. Arxiv34k4l - Multi-label Text Classification Project: Arxiv34k4l is a multi-label text classification model built with natural language processing (NLP) techniques. The project uses data sourced from the ArXiv database, which contains a vast collection of academic papers spanning various disciplines. Its main objective was to develop a model that can classify academic papers into multiple categories simultaneously based on their abstracts, reducing the workload of the human reviewers typically involved and automating the process.
  2. Implementation of ResNet(v1)-20 on CIFAR-10 Dataset: In this project, I implemented and trained a ResNet-20 (Residual Network) model on the CIFAR-10 dataset, based on the seminal paper "Deep Residual Learning for Image Recognition" by He et al. The objective was to classify images into 10 distinct classes using the ResNet-20 architecture, which addresses the vanishing gradient problem through the use of residual blocks and shortcut connections. To enhance the model's performance, I incorporated techniques such as data augmentation, batch normalization, and a custom learning rate scheduler. The model achieved an impressive test accuracy of 90.65%, demonstrating the effectiveness of residual learning for image recognition tasks and underscoring the power of deep neural networks in computer vision.
  3. Implementation of DenseNet-BC on CIFAR-10 Dataset: In this project, I implemented and trained a DenseNet-BC (Densely Connected Convolutional Network) model on the CIFAR-10 dataset, based on the influential paper "Densely Connected Convolutional Networks" by Gao Huang et al. The goal was to classify images into 10 classes using the DenseNet-BC architecture, which enhances information flow and mitigates the vanishing gradient problem through dense connectivity and bottleneck layers. Due to computational constraints, training stopped at 25 of the planned 300 epochs, with model checkpoints saved for analysis; even so, the model achieved a test accuracy of 85.72%, showcasing the robustness of the DenseNet architecture for image recognition tasks. My default setup is DenseNet-BC with dropout, but the configuration can be adjusted for data augmentation, compression, or bottleneck-only variants, and parameters such as growth rate, depth, and the number of blocks are also adjustable.
  4. LeNet-5 Implementation: I implemented the LeNet-5 architecture for handwritten digit classification on the MNIST dataset, closely following the seminal paper "Gradient-Based Learning Applied to Document Recognition" by Yann LeCun et al., published in 1998. The model features two convolutional layers with tanh activation, followed by max pooling for downsampling, and fully connected layers with tanh and softmax activations. Deviations from the original include adjustments to the connection scheme and the use of softmax with categorical cross-entropy in place of the radial basis function outputs and MAP-based loss. The model achieved approximately 98.87% accuracy on the test set, demonstrating the robustness of this classic CNN design.
  5. mini LLM - in progress, coming soon.
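
Since the Arxiv34k4l entry is about multi-label classification, here is a minimal sketch of what separates it from the multi-class case: instead of a softmax that forces exactly one category, each label gets an independent sigmoid (typically trained with binary cross-entropy) and is thresholded on its own. The logits, the five-label setup, and the 0.5 threshold below are illustrative assumptions, not the actual Arxiv34k4l configuration.

```python
import numpy as np

# Hypothetical logits for one abstract over five example categories.
logits = np.array([2.1, -1.3, 0.8, -2.5, 1.7])

# One independent sigmoid per label...
probs = 1.0 / (1.0 + np.exp(-logits))

# ...thresholded per label, so a paper can carry several categories at once.
predicted = probs >= 0.5
```

With these logits the first, third, and fifth labels fire simultaneously, something a softmax-based multi-class head could never produce.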
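
The residual blocks behind the ResNet-20 project work by learning a residual F(x) that is added back to the block's input through a shortcut connection, so gradients can always flow through the identity path. A toy NumPy sketch, with fully connected layers standing in for the convolutions (the weights and shapes are illustrative, not the actual ResNet-20 configuration):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the shortcut adds the input back onto the learned residual."""
    h = np.maximum(0.0, x @ w1)    # first weight layer + ReLU
    f = h @ w2                     # second weight layer (the residual F(x))
    return np.maximum(0.0, f + x)  # add the identity shortcut, then ReLU

# With zero weights the block reduces to the identity for non-negative inputs,
# which is why deep stacks of residual blocks remain easy to optimize.
x = np.array([[1.0, 2.0, 3.0]])
y = residual_block(x, np.zeros((3, 3)), np.zeros((3, 3)))
```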
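
For the DenseNet-BC project, the defining operation is dense connectivity: every layer receives the concatenation of all preceding feature maps, so the channel count grows by the growth rate k at each layer. A simplified NumPy sketch, where a matrix multiply stands in for each BN-ReLU-conv composite layer and the shapes are illustrative:

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Concatenate each layer's output onto all previous features:
    channels grow from k0 to k0 + num_layers * growth_rate."""
    for _ in range(num_layers):
        w = rng.standard_normal((x.shape[-1], growth_rate))
        new_features = np.maximum(0.0, x @ w)           # simplified composite layer
        x = np.concatenate([x, new_features], axis=-1)  # dense connectivity
    return x

rng = np.random.default_rng(0)
# 16 input channels + 4 layers * growth rate 12 = 64 output channels
out = dense_block(np.ones((1, 16)), num_layers=4, growth_rate=12, rng=rng)
```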
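
The LeNet-5 layer sizes follow from standard valid-convolution and pooling arithmetic (32x32 input, 5x5 kernels, 2x2 pooling, as in the 1998 paper), which can be checked directly:

```python
def conv_out(size, kernel=5, stride=1):
    """Spatial size after a 'valid' convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Spatial size after non-overlapping pooling."""
    return (size - window) // stride + 1

s = 32             # LeNet-5 input: 28x28 MNIST digits zero-padded to 32x32
s = conv_out(s)    # C1: 6 feature maps of 28x28
s = pool_out(s)    # S2: downsample to 14x14
s = conv_out(s)    # C3: 16 feature maps of 10x10
s = pool_out(s)    # S4: downsample to 5x5
flat = s * s * 16  # 400 units feeding the fully connected 120 -> 84 -> 10 head
```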

## Classical Machine Learning Projects

  1. Blue Book for Bulldozers Project - Predict the auction sale price
  2. Customer Personality Analysis - Analysis of a company's ideal customers
  3. InVitro Cell Research - Identifying age-related conditions
  4. Heart Disease Identification
  5. Predict car sales prices
  6. Titanic Survivors Prediction Using Machine Learning
  7. California House Price Prediction
  8. Spaceship Accident - Predict Alternate Dimension Travellers

More coming soon...