YBI-Foundation

These projects were completed as part of the YBI Foundation's Machine Learning internship.

Primary language: Jupyter Notebook

Servo Motor Performance Prediction

Overview

This project focuses on predicting the performance of servo motors based on various factors such as motor type, screw type, P-gain, and V-gain using machine learning regression models. The dataset used in this project contains observations of servo motor performance metrics.

Dataset

The dataset (servo.csv) consists of the following columns (a loading sketch follows the list):

  • Motor: Type of motor ('A', 'B', 'C', 'D', 'E')
  • Screw: Type of screw ('A', 'B', 'C', 'D', 'E')
  • Pgain: P-gain value (integer)
  • Vgain: V-gain value (integer)
  • Class: Performance class (integer)
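
A minimal loading sketch, assuming servo.csv sits in the project root; the file name/path and the one-hot encoding step are illustrative assumptions, not necessarily the exact notebook code:

import pandas as pd

# Load the servo dataset (file name/path assumed; adjust if needed)
df = pd.read_csv('servo.csv')

# Motor and Screw are categorical, so one-hot encode them for the regression models
X = pd.get_dummies(df[['Motor', 'Screw', 'Pgain', 'Vgain']], columns=['Motor', 'Screw'])
y = df['Class']

print(X.shape, y.shape)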

Dependencies

This project requires the following dependencies:

  • Python 3.x
  • pandas
  • numpy
  • scikit-learn
  • matplotlib
  • seaborn

Install these dependencies using pip:

pip install pandas numpy scikit-learn matplotlib seaborn

Installation

To run the project locally, follow these steps:

Clone the repository:

git clone https://github.com/your-username/servo-motor-prediction.git

Navigate to the project directory:

cd servo-motor-prediction

Ensure all dependencies are installed (see Dependencies).

Usage

To use the project, follow these steps:

  • Ensure you have Python and the required libraries installed.
  • Load the servo.csv dataset.
  • Run the main script or Jupyter notebook for EDA and model training.
  • Explore different models and evaluate their performance.

Exploratory Data Analysis (EDA)

Explore the dataset to understand its structure and relationships between variables. Visualizations such as histograms, scatter plots, and heatmaps are used to gain insights into the data.
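
A short EDA sketch along these lines (illustrative, assuming the columns described above; the exact plots in the notebook may differ):

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv('servo.csv')

# Distribution of the target variable
df['Class'].hist(bins=20)
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()

# Pairwise correlations of the numeric columns
sns.heatmap(df[['Pgain', 'Vgain', 'Class']].corr(), annot=True, cmap='coolwarm')
plt.show()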

Machine Learning Models

Train and evaluate the following regression models (a training-and-evaluation sketch follows the list):

  • Linear Regression
  • Ridge Regression
  • Decision Tree Regression
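
A minimal training-and-evaluation sketch for the three models; the train/test split, alpha, and random_state values are illustrative assumptions rather than the exact notebook settings. It also prints the MSE discussed in the next section.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# Load and encode the data as in the dataset snippet above
df = pd.read_csv('servo.csv')
X = pd.get_dummies(df[['Motor', 'Screw', 'Pgain', 'Vgain']], columns=['Motor', 'Screw'])
y = df['Class']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    'Linear Regression': LinearRegression(),
    'Ridge Regression': Ridge(alpha=1.0),
    'Decision Tree Regression': DecisionTreeRegressor(random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f'{name}: MSE = {mse:.2f}')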

Evaluation Metrics

The performance of models is evaluated using Mean Squared Error (MSE), which measures the average squared difference between predicted and actual values. Lower MSE indicates better model performance.

MSE results:

  • Linear Regression: 196.13
  • Ridge Regression: 195.34
  • Decision Tree Regression: 282.28

Based on MSE values, linear regression and ridge regression perform similarly and outperform decision tree regression.

Hill and Valley Prediction

Overview

This project aims to predict whether a terrain profile forms a hill or a valley using machine learning regression models. The dataset used in this project contains 100-point terrain profiles and their corresponding class labels.

Dataset

The dataset (hill_valley.csv) consists of the following columns (a loading sketch follows the list):

  • V1, V2, ..., V100: Terrain profile features (numeric)
  • Class: Binary indicator (0 or 1) for hill and valley classification
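
A minimal loading sketch, assuming hill_valley.csv sits in the project root; the file name/path is an assumption:

import pandas as pd

# Load the Hill-Valley dataset (file name/path assumed; adjust if needed)
df = pd.read_csv('hill_valley.csv')

# The V1..V100 columns form the terrain profile; Class is the 0/1 label
X = df.drop(columns=['Class'])
y = df['Class']

print(X.shape, y.shape)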

Dependencies

This project requires the following dependencies:

  • Python 3.x
  • pandas
  • numpy
  • scikit-learn
  • matplotlib
  • seaborn

Install these dependencies using pip:

pip install pandas numpy scikit-learn matplotlib seaborn

Installation

To run the project locally, follow these steps:

Clone the repository:

git clone https://github.com/your-username/hill-valley-prediction.git

Navigate to the project directory:

cd hill-valley-prediction

Ensure all dependencies are installed (see Dependencies).

Usage

To use the project, follow these steps:

  • Ensure you have Python and the required libraries installed.
  • Load the hill_valley.csv dataset.
  • Run the main script or Jupyter notebook for EDA and model training.
  • Explore different models and evaluate their performance.

Exploratory Data Analysis (EDA)

Explore the dataset to understand its structure and relationships between variables. Visualizations such as histograms, scatter plots, and heatmaps are used to gain insights into the data.
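
Because each row is a 100-point profile, plotting a few rows is a natural first look. This is an illustrative sketch, not necessarily the notebook's exact plots:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('hill_valley.csv')

# Plot the first few terrain profiles, labelled with their class value
for idx in range(3):
    profile = df.drop(columns=['Class']).iloc[idx]
    plt.plot(profile.values, label=f'row {idx}, Class = {df["Class"].iloc[idx]}')
plt.xlabel('Point index (V1..V100)')
plt.ylabel('Height')
plt.legend()
plt.show()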

Machine Learning Models

Train and evaluate the following regression models (a training-and-evaluation sketch follows the list):

  • Linear Regression
  • Ridge Regression
  • Decision Tree Regression
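
A minimal training-and-evaluation sketch for the three models on the binary target; the split and hyperparameters are illustrative assumptions:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv('hill_valley.csv')
X = df.drop(columns=['Class'])
y = df['Class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for name, model in [('Linear Regression', LinearRegression()),
                    ('Ridge Regression', Ridge(alpha=1.0)),
                    ('Decision Tree Regression', DecisionTreeRegressor(random_state=42))]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f'{name}: MSE = {mse:.2f}')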

Evaluation Metrics

Although the Class target is binary, the models in this project are regressors, so their performance is evaluated using Mean Squared Error (MSE) on the 0/1 labels; classification metrics such as accuracy, precision, recall, and F1-score can additionally be obtained by thresholding the predictions (see the sketch after the results below).

MSE results:

  • Linear Regression: Mean Squared Error = 0.30
  • Ridge Regression: Mean Squared Error = 0.23
  • Decision Tree Regression: Mean Squared Error = 0.49
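
Because Class is binary, the continuous regression outputs can also be thresholded at 0.5 to report classification metrics. The sketch below illustrates this with a Ridge model; it is an assumption-labelled illustration, not part of the original notebooks:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv('hill_valley.csv')
X, y = df.drop(columns=['Class']), df['Class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = Ridge(alpha=1.0).fit(X_train, y_train)

# Threshold the continuous predictions at 0.5 to obtain 0/1 class labels
y_pred = (model.predict(X_test) >= 0.5).astype(int)

print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))
print('recall   :', recall_score(y_test, y_pred))
print('F1-score :', f1_score(y_test, y_pred))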

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvement, please create a pull request or open an issue in the GitHub repository.