- Table of Contents
- Frameworks and Libraries
- Datasets
- Data Preprocessing
- Prerequisites
- Installation
- How to Run
- Directory Tree
- Results
- And it's done!
- Citation
- Owner
- License
- Sklearn: Simple and efficient tools for predictive data analysis.
- Matplotlib: A comprehensive library for creating static, animated, and interactive visualizations in Python.
- NumPy: The fundamental package for scientific computing with Python, providing fast N-dimensional array operations.
- Pandas: A fast, powerful, flexible, and easy-to-use open-source data analysis and manipulation tool, built on top of the Python programming language.
- Seaborn: A statistical data visualization library built on top of Matplotlib, offering a high-level interface for drawing informative plots.
- Pickle: The pickle module implements binary protocols for serializing and de-serializing a Python object structure (a short usage sketch follows this list).
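For illustration only, here is a minimal sketch of how a trained scikit-learn model could be saved and reloaded with pickle. The model choice and the file name `model.pkl` are assumptions for the example, not the repository's actual pipeline.

```python
# Minimal sketch (assumed names): serializing a trained model with pickle.
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for the prepared match dataset.
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Save the fitted model to disk (hypothetical file name).
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, reload it for prediction without retraining.
with open("model.pkl", "rb") as f:
    loaded_model = pickle.load(f)

print(loaded_model.predict(X[:5]))
```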
The dataset is available in this repository; clone the repository to use it.
Data pre-processing is an important step in building a machine learning model. Raw data may not be clean or in the format the model requires, which can lead to misleading outcomes. During pre-processing, we transform the data into the required format and deal with noise, duplicates, and missing values. Typical pre-processing activities include importing the dataset, splitting it into training and test sets, and attribute scaling. Pre-processing the data is required to improve the accuracy of the model.
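As a rough illustration of the steps described above, the sketch below imports a CSV, removes duplicates, fills missing values, splits the data, and scales the attributes. The file name `dataset.csv` and the column `target` are placeholders, not the repository's actual schema.

```python
# Minimal preprocessing sketch (assumed file and column names).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Import the dataset (placeholder path).
df = pd.read_csv("dataset.csv")

# Deal with duplicates and missing values.
df = df.drop_duplicates()
df = df.fillna(df.mean(numeric_only=True))

# Separate features and target (placeholder target column).
X = df.drop(columns=["target"])
y = df["target"]

# Split the dataset into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Attribute scaling: fit on the training split only to avoid leakage.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```

Fitting the scaler on the training split only (and merely transforming the test split) avoids leaking information from the test set into training.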
All the dependencies and required libraries are included in the file <code>requirements.txt</code>.
See here
The code is written in Python 3.7. If you don't have Python installed, you can find it here. If you are using a lower version of Python, you can upgrade using the pip package, ensuring you have the latest version of pip. To install the required packages and libraries, run this command in the project directory after cloning the repository:
- Clone the repo
git clone https://github.com/Chaganti-Reddy/Kelly-Betting.git
- Change your directory to the cloned repo
cd Kelly-Betting
- Now, run the following commands in your Terminal/Command Prompt to install the required libraries
python3 -m virtualenv kelly_b
source kelly_b/bin/activate
pip3 install -r requirements.txt
- Open a terminal, go into the cloned project directory, and type the following commands:
cd Deploy
python3 main.py
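For orientation only: given the `Deploy/templates` layout, `main.py` is presumably a small web application. The following is a hypothetical sketch of the general shape such an entry point often takes with Flask; the routes, form fields, and template wiring are assumptions, not the repository's actual code.

```python
# Hypothetical sketch of a minimal Flask entry point; not the repository's actual main.py.
from flask import Flask, render_template, request

app = Flask(__name__)


@app.route("/")
def index():
    # Render the landing page (assumed template name).
    return render_template("index.html")


@app.route("/predict", methods=["POST"])
def predict():
    # Read form inputs and render a result page (all assumed).
    home_team = request.form.get("home_team", "")
    away_team = request.form.get("away_team", "")
    return render_template("prediction1.html", home=home_team, away=away_team)


if __name__ == "__main__":
    app.run(debug=True)
```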
.
├── assets
│   ├── 1.png
│   ├── 2.png
│   ├── GD.png
│   ├── GS.png
│   ├── main.jpg
│   └── outcome.png
├── Book1.twb
├── Data
│   ├── code4.ipynb
│   ├── test_data.csv
│   └── train_data.csv
├── Deploy
│   ├── app.yaml
│   ├── main.py
│   ├── model_prepped_dataset.csv
│   ├── model_prepped_dataset_modified.csv
│   ├── requirements.txt
│   ├── static
│   │   ├── odds_distribution.png
│   │   └── probability_distribution.png
│   └── templates
│       ├── index.html
│       ├── prediction1.html
│       ├── prediction2.html
│       └── prediction3.html
├── goal_difference_prediction
│   ├── AdaBoost.ipynb
│   ├── code2.ipynb
│   ├── comparison.ipynb
│   ├── data_prep.ipynb
│   ├── dataset2.csv
│   ├── DicisionTree.ipynb
│   ├── final_data.csv
│   ├── GaussianNB.ipynb
│   ├── KNeighbors.ipynb
│   ├── model_prepped_dataset.csv
│   ├── model_prepped_dataset.json
│   ├── odds_kelly.ipynb
│   ├── RandomForest.ipynb
│   ├── SVC.ipynb
│   ├── test_data.csv
│   ├── train_data.csv
│   └── XGBClassifier.ipynb
├── goal_difference_prediction2
│   ├── AdaBoost.ipynb
│   ├── code2.ipynb
│   ├── comparison.ipynb
│   ├── data_prep.ipynb
│   ├── dataset2.csv
│   ├── DecisionTree.ipynb
│   ├── final_data.csv
│   ├── GaussianNB.ipynb
│   ├── KNeighbors.ipynb
│   ├── model_prepped_dataset.csv
│   ├── model_prepped_dataset.json
│   ├── odds_kelly.ipynb
│   ├── RandomForest.ipynb
│   ├── test_data.csv
│   └── train_data.csv
├── goal_prediction
│   ├── AdaBoost.ipynb
│   ├── code3.ipynb
│   ├── comparison.ipynb
│   ├── data_analytics.ipynb
│   ├── data_prep.ipynb
│   ├── dataset3.csv
│   ├── DecisionTree.ipynb
│   ├── final_data.csv
│   ├── GaussianNB.ipynb
│   ├── KNeighbors.ipynb
│   ├── LogisticRegression.ipynb
│   ├── model_prepped_dataset.csv
│   ├── model_prepped_dataset.json
│   ├── RandomForest.ipynb
│   ├── SVC.ipynb
│   ├── test_data.csv
│   ├── train_data.csv
│   └── XGBClassifier.ipynb
├── k2148344_dissretation_draft.docx
├── model_prepped_dataset.csv
├── model_prepped_dataset.json
├── model_prepped_dataset_modified.csv
├── outcome_prediction
│   ├── AdaBoostClassifier.ipynb
│   ├── code1.ipynb
│   ├── comparison.ipynb
│   ├── data_prep.ipynb
│   ├── dataset1.csv
│   ├── DecisionTree.ipynb
│   ├── final_data.csv
│   ├── GaussianNB.ipynb
│   ├── KNeighborsClassifier.ipynb
│   ├── LogisticRegression.ipynb
│   ├── model_prepped_dataset.csv
│   ├── model_prepped_dataset.json
│   ├── odds_kelly.ipynb
│   ├── svc.ipynb
│   ├── test_data.csv
│   ├── train_data.csv
│   └── XGBClassifier.ipynb
├── requirements.txt
├── Team Ranking
│   ├── code.ipynb
│   ├── data.csv
│   ├── model_prepped_dataset.csv
│   └── team_ranking_analysis.ipynb
├── temp.ipynb
└── Total Goal Prediction
    ├── code3.ipynb
    ├── comparison.ipynb
    ├── data_analytics.ipynb
    ├── data_prep.ipynb
    ├── dataset3.csv
    ├── final_data.csv
    ├── model_prepped_dataset.csv
    ├── model_prepped_dataset.json
    ├── test_data.csv
    └── train_data.csv
1. Prediction by Outcome
2. Prediction by Goal Difference
3. Prediction by Goals Scored
Feel free to mail me with any doubts/queries: chagantivenkataramireddy1@gmail.com
You are allowed to cite any part of the code or our dataset, and you can use it in your research work or project. Remember to credit the maintainer, Chaganti Reddy, by including a link to this repository and her GitHub profile.
Follow this format:
- Author's name - Chaganti Reddy
- Date of publication or update in parentheses.
- Title or description of document.
- URL.
Made with ❤️ by Chaganti Reddy
MIT © Chaganti Reddy