This is an open solution to the Home Credit Default Risk challenge 🏡.
Check our collection of public projects 🎁, where you can find multiple Kaggle competitions with code, experiments and outputs.
We are building an entirely open solution to this competition. Specifically:
- Learning from the process - updates about new ideas, code and experiments are the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
- Encourage more Kagglers to start working on this competition.
- Deliver an open source solution with no strings attached. Code is available on our GitHub repository 💻. This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code 😃
- We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check: Home Credit Default Risk 📈 and the screens below.
| Train and validation results on folds 📊 | LightGBM learning curves 📊 |
|---|---|
In this open source solution you will find references to neptune.ml. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution. You may run it as a plain Python script 🐍.
As of 1.07.2019 we officially discontinued the `neptune-cli` client project, making `neptune-client` the only supported way to communicate with Neptune. That means you should run experiments via the `python ...` command or update your loggers to `neptune-client`. For more information about the new client, go to the neptune-client read-the-docs page.
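As a minimal, hedged sketch of what such a logger update can look like (assuming the legacy `neptune-client` API of that period; the project name and metric values below are placeholders, not this repository's):

```python
# Illustrative sketch of the legacy neptune-client logging API (circa 2019).
# Project name and metric values are placeholders, not this repo's settings.
import neptune

neptune.init(project_qualified_name='my-workspace/home-credit')  # placeholder project
neptune.create_experiment(name='lightGBM-baseline')

# Log one metric value per cross-validation fold
for fold, auc in enumerate([0.78, 0.79, 0.785]):
    neptune.log_metric('fold_auc', auc)

neptune.stop()
```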
- Check the Kaggle forum and participate in the discussions.
- Check our Wiki pages 🏡, where we document our work. See the solutions below:
| link to code | name | CV | LB | link to description |
|---|---|---|---|---|
| solution 1 | chestnut 🌰 | ? | 0.742 | LightGBM and basic features |
| solution 2 | seedling 🌱 | ? | 0.747 | Sklearn and XGBoost algorithms and groupby features |
| solution 3 | blossom 🌼 | 0.7840 | 0.790 | LightGBM on selected features |
| solution 4 | tulip 🌷 | 0.7905 | 0.801 | LightGBM with smarter features |
| solution 5 | sunflower 🌻 | 0.7950 | 0.804 | LightGBM with clean dynamic features |
| solution 6 | four leaf clover 🍀 | 0.7975 | 0.806 (priv. LB 0.79804) | Stacking by feature diversity and model diversity |
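For intuition about the stacking used in solution 6, here is a minimal sketch of out-of-fold stacking. It is our illustration, not the repository's pipeline: the data, model choices and helper names below are placeholders.

```python
# Minimal out-of-fold stacking sketch (illustrative, not this repo's pipeline):
# first-level models produce out-of-fold predictions that become the features
# of a second-level (meta) model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from lightgbm import LGBMClassifier

# Placeholder data standing in for the engineered competition features
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def out_of_fold_predictions(model, X, y, n_splits=5):
    """First-level predictions made only on folds the model did not train on."""
    oof = np.zeros(len(X))
    for train_idx, valid_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model.fit(X[train_idx], y[train_idx])
        oof[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]
    return oof

# Diverse first-level models; their out-of-fold predictions are stacked
# column-wise and fed to a simple logistic meta-model.
base_models = [LGBMClassifier(n_estimators=100), LogisticRegression(max_iter=1000)]
meta_features = np.column_stack([out_of_fold_predictions(m, X, y) for m in base_models])
meta_model = LogisticRegression().fit(meta_features, y)
```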
You can jump-start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.
- Clone the repository and install requirements (use Python 3.5):

  ```bash
  pip3 install -r requirements.txt
  ```
- Register at neptune.ml (if you wish to use it)
- Run the experiment based on LightGBM:

  🔱 via Neptune:

  ```bash
  neptune account login
  neptune run --config configs/neptune.yaml main.py train_evaluate_predict_cv --pipeline_name lightGBM
  ```

  🐍 as a plain Python script:

  ```bash
  python main.py -- train_evaluate_predict_cv --pipeline_name lightGBM
  ```
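To give a rough idea of what a `train_evaluate_predict_cv`-style run does, here is a hedged sketch of cross-validated LightGBM training with ROC AUC evaluation. It assumes the competition's `application_train.csv` is in the working directory; the loop below is our illustration, not the repository's implementation.

```python
# Illustrative cross-validated LightGBM train/evaluate loop in the spirit of
# train_evaluate_predict_cv (not the repository's actual code).
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv('application_train.csv')  # competition data file
y = df['TARGET']
X = df.select_dtypes('number').drop(columns=['TARGET', 'SK_ID_CURR'])

scores = []
for train_idx, valid_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = LGBMClassifier(n_estimators=500, learning_rate=0.05)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    preds = model.predict_proba(X.iloc[valid_idx])[:, 1]
    scores.append(roc_auc_score(y.iloc[valid_idx], preds))
print('mean fold AUC:', sum(scores) / len(scores))
```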
Various options for hyperparameter tuning are available:

- Random Search

  In `configs/neptune.yaml` set:

  ```yaml
  hyperparameter_search__method: random
  hyperparameter_search__runs: 100
  ```

  In `src/pipeline_config.py` define the search space, for example:

  ```python
  'tuner': {'light_gbm': {'max_depth': ([2, 4, 6], "list"),
                          'num_leaves': ([2, 100], "choice"),
                          'min_child_samples': ([5, 10, 15, 25, 50], "list"),
                          'subsample': ([0.95, 1.0], "uniform"),
                          'colsample_bytree': ([0.3, 1.0], "uniform"),
                          'min_gain_to_split': ([0.0, 1.0], "uniform"),
                          'reg_lambda': ([1e-8, 1000.0], "log-uniform"),
                          },
            }
  ```
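To clarify how such a `(values, method)` search space can be interpreted, here is a hedged sketch of a random-search sampler. The function `sample_params` and the exact semantics of `"list"`, `"choice"`, `"uniform"` and `"log-uniform"` below are our assumptions, not the repository's tuner.

```python
# Illustrative random-search sampler; the repository's tuner may differ.
import math
import random

def sample_params(space):
    """Draw one random-search candidate from a {name: (values, method)} space."""
    params = {}
    for name, (values, method) in space.items():
        if method == "list":           # pick one of the explicitly listed values
            params[name] = random.choice(values)
        elif method == "choice":       # random integer between low and high (assumed)
            params[name] = random.randint(values[0], values[1])
        elif method == "uniform":      # float drawn uniformly from [low, high]
            params[name] = random.uniform(values[0], values[1])
        elif method == "log-uniform":  # uniform in log space, for wide-ranging values
            params[name] = math.exp(random.uniform(math.log(values[0]), math.log(values[1])))
    return params

space = {'max_depth': ([2, 4, 6], "list"),
         'num_leaves': ([2, 100], "choice"),
         'subsample': ([0.95, 1.0], "uniform"),
         'reg_lambda': ([1e-8, 1000.0], "log-uniform")}
print(sample_params(space))
```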
You are welcome to contribute your code and ideas to this open solution. To get started:
- Check the competition project on GitHub to see what we are working on right now.
- Express your interest in a particular task by writing a comment in that task, or by creating a new one with your fresh idea.
- We will get back to you quickly so that we can start working together.
- Check CONTRIBUTING for more information.
There are several ways to seek help:
- Kaggle discussion is our primary way of communication.
- Read the project's Wiki, where we publish descriptions of the code, pipelines and supporting tools such as neptune.ml.
- Submit an issue directly in this repo.