This project will employ three supervised learning algorithms, Random Forest, Gradient Boosting, and XGBoost, to model individuals' income using the 1994 U.S. Census data. I will then choose the best candidate from the preliminary results and further optimize it to best fit the data. My goal is to construct a model that accurately predicts whether an individual makes more than $50,000 per year. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit decide how large a donation to request, or whether to reach out at all. While it can be difficult to determine an individual's income bracket directly from public sources, we can infer it from other publicly available features. The Kaggle competition is here: https://www.kaggle.com/c/udacity-mlcharity-competition/overview.
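The three candidates can be instantiated side by side for the preliminary comparison. This is a minimal sketch, assuming scikit-learn for the first two models; XGBoost ships in the separate `xgboost` package, so the import is guarded, and the fixed `random_state` is only for reproducibility, not a tuned setting.

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Two of the three candidates ship with scikit-learn.
candidates = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}

# XGBoost lives in the separate `xgboost` package; guard the import so
# this sketch still runs in environments where it is not installed.
try:
    from xgboost import XGBClassifier
    candidates["XGBoost"] = XGBClassifier(random_state=42)
except ImportError:
    pass

print(sorted(candidates))
```

Keeping the models in a dictionary makes it easy to loop over them later with a shared training-and-scoring routine.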
The dataset for this project originates from the UCI Machine Learning Repository. Ron Kohavi and Barry Becker donated it after publishing the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid," which is available online.

## Problem Statement

A charity wants to find out who is likely to donate. As a data scientist, I can use this dataset and supervised machine learning algorithms to predict potential donors for the charity to reach out to.
- Step 1. Assessing the Data
- Step 2. Preprocessing the Data
- Step 3. Calculating the Performance of a Naive Predictor
- Step 4. Selecting 3 Appropriate Model Candidates
- Step 5. Creating a Training and Predicting Pipeline
- Step 6. Initial Model Evaluation and Picking the Best Model
- Step 7. Model Tuning
- Step 8. Preprocessing the testing data from Kaggle
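Steps 2 through 7 can be sketched end to end. The block below is a rough illustration on synthetic stand-in data, not the final implementation: the real project preprocesses the census features (e.g. one-hot encoding) first, and the grid of hyperparameters shown is purely illustrative. The F-beta score with beta = 0.5 is assumed as the evaluation metric, weighting precision over recall since the charity wants to avoid contacting unlikely donors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for the preprocessed census data (Step 2).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Step 3: a naive predictor that always answers "income > $50K".
naive_pred = np.ones_like(y_test)
naive_acc = accuracy_score(y_test, naive_pred)
naive_f = fbeta_score(y_test, naive_pred, beta=0.5)

# Steps 5-6: train one candidate and compare it to the naive baseline.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
pred = model.predict(X_test)
model_acc = accuracy_score(y_test, pred)
model_f = fbeta_score(y_test, pred, beta=0.5)
print(f"naive  acc={naive_acc:.3f}  F0.5={naive_f:.3f}")
print(f"model  acc={model_acc:.3f}  F0.5={model_f:.3f}")

# Step 7: tune the chosen model; the grid here is a tiny placeholder.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100]},
    scoring=make_scorer(fbeta_score, beta=0.5),
    cv=3,
).fit(X_train, y_train)
print("best params:", grid.best_params_)
```

A trained model that cannot beat the always-positive naive predictor on both metrics is not worth tuning, which is why Step 3 comes before model selection.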