train-validation-test
There are 8 repositories under the train-validation-test topic.
rochitasundar/Predictive-maintenance-cost-minimization-using-ML-ReneWind
The aim is to decrease the maintenance cost of generators used in wind-energy production machinery. This is achieved by building various classification models, accounting for class imbalance, tuning on a user-defined cost metric (a function of the predicted true positives, false positives, and false negatives), and productionizing the model using pipelines.
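A cost metric of this kind can be sketched as a function of the confusion-matrix counts. The costs below (and the function name) are hypothetical, chosen only to illustrate the idea that a missed failure (false negative) is far more expensive than a false alarm (false positive):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def maintenance_cost(y_true, y_pred, fn_cost=100, fp_cost=25, tp_cost=15):
    """Total maintenance cost under hypothetical per-outcome costs:
    FN = replacement, FP = needless inspection, TP = scheduled repair."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fn * fn_cost + fp * fp_cost + tp * tp_cost

y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(maintenance_cost(y_true, y_pred))  # 1 FN + 1 FP + 2 TP -> 155
```

Model tuning then minimizes this cost on the validation split rather than optimizing accuracy alone.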
abdullah-al-masud/msdlib
A custom library providing data-processing, visualization, and machine-learning tools.
hurkanugur/Car-Price-Predictor
This project predicts used car prices using a feedforward neural network regression model implemented in PyTorch. Features include car age, mileage, and other attributes. The pipeline supports feature normalization, train/validation/test splitting, and visualization of training and validation loss curves.
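A pipeline like the one described can be sketched as follows. The feature columns and the 70/15/15 split ratios are assumptions, not taken from the repo; the key point is that normalization statistics come from the training split only, so the validation and test splits stay unseen:

```python
import torch

# Hypothetical feature matrix: [age_years, mileage_km, engine_size_l]
X = torch.randn(1000, 3)
y = torch.randn(1000, 1)

# Shuffle once with a fixed seed, then split 70/15/15.
g = torch.Generator().manual_seed(42)
perm = torch.randperm(len(X), generator=g)
n_train, n_val = int(0.7 * len(X)), int(0.15 * len(X))
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

# Normalize with training-split statistics only (no test-set leakage).
mean, std = X[train_idx].mean(0), X[train_idx].std(0)
X_norm = (X - mean) / std
```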
jsk1107/coco_utils
Splits a COCO dataset into train/val/test subsets and adjusts the annotations & categories accordingly.
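The core operation can be sketched as below. This is not the library's actual API, only an illustration of the idea: partition the `images` list, then keep each annotation with the split that contains its `image_id`:

```python
import random

def split_coco(coco, train=0.8, val=0.1, seed=0):
    """Split a COCO-format dict ({"images", "annotations", "categories"})
    into train/val/test dicts, keeping annotations with their images."""
    images = coco["images"][:]
    random.Random(seed).shuffle(images)
    n_train = int(train * len(images))
    n_val = int(val * len(images))
    parts = {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],
    }
    out = {}
    for name, imgs in parts.items():
        ids = {img["id"] for img in imgs}
        out[name] = {
            "images": imgs,
            "annotations": [a for a in coco["annotations"] if a["image_id"] in ids],
            "categories": coco["categories"],  # category list is shared by all splits
        }
    return out
```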
manjugovindarajan/RENEWIND-predictive-maintenance-cost-maintenance-usingML
The aim is to decrease the maintenance cost of generators used in wind-energy production machinery. This is achieved by building various classification models, accounting for class imbalance, tuning on a user-defined cost metric (a function of the predicted true positives, false positives, and false negatives), and productionizing the model using pipelines.
Wafama/Thesis_Code
Classifying travel-mode choice in the Netherlands using KNN, XGBoost, RF, and TabNet.
hurkanugur/Handwritten-Digit-Classifier
This project implements a simple neural network for handwritten digit classification using the MNIST dataset and the Softmax activation function, built with PyTorch. The model is trained to recognize digits (0-9) based on pixel data from 28x28 grayscale images.
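A minimal sketch of such a model (the hidden-layer size is an assumption, not the repo's exact architecture): the network emits raw logits, and softmax converts them to per-digit probabilities at inference time:

```python
import torch
import torch.nn as nn

# One-hidden-layer classifier mapping 28x28 grayscale images to 10 digits.
model = nn.Sequential(
    nn.Flatten(),        # (N, 1, 28, 28) -> (N, 784)
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # raw logits, one per digit class
)

x = torch.randn(8, 1, 28, 28)            # a batch of 8 dummy images
probs = torch.softmax(model(x), dim=1)   # rows sum to 1
print(probs.shape)                        # torch.Size([8, 10])
```

During training one would typically feed the raw logits to `nn.CrossEntropyLoss`, which applies log-softmax internally.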
hurkanugur/Loan-Approval-Classifier
This project predicts loan approval outcomes (Approved/Rejected) using a PyTorch neural network. It includes data preprocessing, train/validation/test split, model training with BCEWithLogitsLoss, and inference with probability-based classification.
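The training/inference pattern described can be sketched as follows; the feature count and the single linear layer are placeholders for the repo's actual preprocessing and architecture. `BCEWithLogitsLoss` expects raw logits, so inference applies a sigmoid separately and thresholds the resulting probability:

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 1)             # emits one raw logit per applicant
loss_fn = nn.BCEWithLogitsLoss()    # fuses sigmoid + BCE (numerically stable)

x = torch.randn(4, 5)                        # 4 dummy applicants, 5 features
y = torch.tensor([[1.], [0.], [1.], [0.]])   # 1 = approved, 0 = rejected
loss = loss_fn(model(x), y)                  # training objective

# Inference: sigmoid turns logits into approval probabilities.
probs = torch.sigmoid(model(x))
approved = probs >= 0.5
```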