AndreanPradana's Stars
odubno/gauss-naive-bayes
Gaussian Naive Bayes in Python, from scratch.
razuswe/PrimaIndianDiabetesprediction
I used six classification techniques: artificial neural network (ANN), support vector machine (SVM), decision tree (DT), random forest (RF), logistic regression (LR), and Naïve Bayes (NB).
ShaishavJogani/Naive-Bayes-Classfier
Implementation of a Gaussian Naive Bayes classifier in Python from scratch (no advanced libraries).
vamc-stash/Naive-Bayes
Implements the Naive Bayes and Gaussian Naive Bayes machine learning classification algorithms from scratch in Python.
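A from-scratch Gaussian Naive Bayes of the kind these repositories describe can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from any of the starred repos: fit a per-class prior, mean, and variance, then predict the class that maximizes log prior plus Gaussian log-likelihood.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: per-class mean/variance, argmax of log-posterior."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Class priors P(c) and per-class, per-feature Gaussian parameters.
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # log P(c) + sum over features of log N(x_f; mu_cf, var_cf), shape (C, n).
        log_prior = np.log(self.priors_)
        log_like = -0.5 * (
            np.log(2 * np.pi * self.vars_[:, None, :])
            + (X[None, :, :] - self.means_[:, None, :]) ** 2 / self.vars_[:, None, :]
        ).sum(axis=2)
        return self.classes_[np.argmax(log_prior[:, None] + log_like, axis=0)]
```

The `1e-9` added to each variance is a small smoothing term so a constant feature within a class does not produce a division by zero.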
deypadma/Prediction-of-Stroke-Events
Stroke is the second leading cause of death worldwide and remains an important health burden both for individuals and for national healthcare systems. Potentially modifiable risk factors for stroke include hypertension, cardiac disease, diabetes, dysregulation of glucose metabolism, atrial fibrillation, and lifestyle factors. The goal of this project is therefore to apply machine learning to large existing datasets to effectively predict stroke from potentially modifiable risk factors, and then to develop an application that gives each user a personalized warning based on their level of stroke risk, along with lifestyle-correction advice about the risk factors.

This article discusses the symptoms and causes of stroke, and presents a machine learning model that predicts the likelihood of a patient having a stroke based on age, BMI, and glucose level.

To proceed with the implementation, several datasets from Kaggle were considered and an appropriate one was selected for model building. The next step is preparing the dataset so the data is cleaner and more easily understood by the machine; this step is called data pre-processing. For this particular dataset, it includes handling missing values, handling imbalanced data, and performing label encoding.

Once the data is pre-processed, it is ready for model building, which requires the pre-processed dataset together with machine learning algorithms. Seven algorithms are used: Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbours, Support Vector Classification, K-Means Clustering, and Naïve Bayes. The seven resulting models are then compared using four metrics: accuracy score, precision score, recall score, and F1 score.
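The pipeline described above (label encoding, fitting several classifiers, comparing them on four metrics) might look roughly like this with scikit-learn. The column names and the synthetic data below are hypothetical stand-ins for the Kaggle stroke dataset, and only three of the seven classifiers are shown:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for the Kaggle stroke data; real column names may differ.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gender": rng.choice(["Male", "Female"], n),
    "age": rng.uniform(20, 80, n),
    "avg_glucose": rng.uniform(60, 250, n),
    "bmi": rng.uniform(15, 45, n),
})
# Toy target rule, only so the example is self-contained and learnable.
df["stroke"] = (df["age"] + df["avg_glucose"] / 5 > 100).astype(int)

# Label-encode the categorical column (the pre-processing step above).
df["gender"] = LabelEncoder().fit_transform(df["gender"])

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="stroke"), df["stroke"], test_size=0.3, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    scores[name] = {
        "accuracy": accuracy_score(y_test, pred),
        "precision": precision_score(y_test, pred, zero_division=0),
        "recall": recall_score(y_test, pred),
        "f1": f1_score(y_test, pred),
    }
```

Handling class imbalance (e.g. oversampling the minority class) would slot in between the train/test split and model fitting; it is omitted here to keep the sketch short.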
baotran306/NaiveBayes
Naive Bayes(Categorical, Gaussian, ClassifyText and Mixed Data)
divyar2630/Predict-Accident-Severity
The aim of the project is to predict the severity of an accident from features such as road conditions, geographic location, weather conditions, and vehicle type. Prediction is performed with three models, Naive Bayes, Support Vector Machine, and a Neural Network, and the three models are compared on accuracy score.
AnnaD1992/Multiple-Linear-Regression
https://www.kaggle.com/annadurbanova/multiple-linear-regression-backward-elimination
ArmaanSethi/Naive-Bayes-Cars-Diabetes
Simple Naive Bayes machine learning.
gerchristko/Gaussian-Naive-Bayes
hafizhaua/stroke-pred-gaussian-naive-bayes
meanwesha1/diabetes_NB
Using Gaussian Naive Bayes Classifier to predict the chances of having diabetes.
Promila88/Machine_learning_for_diabetes_prediction
The main objective of this study was to apply and compare different machine learning models (KNN Classifier, Support Vector Classifier, Logistic Regression, Decision Tree Classifier, Gaussian Naive Bayes, Random Forest Classifier, Gradient Boosting Classifier) on datasets generated by applying different missing-value strategies to the original PIMA Indian Diabetes dataset: MICE imputation, KNN imputation, simple imputation with each column's median, and deleting all rows containing null values. Besides this, exploratory data analysis was performed to find correlations between the predictors.