AutoGluon automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on text, image, and tabular data.
```python
# First install package from terminal:
# python3 -m pip install --upgrade pip
# python3 -m pip install --upgrade setuptools
# python3 -m pip install --upgrade "mxnet<2.0.0"
# python3 -m pip install --pre autogluon

from autogluon.tabular import TabularPrediction as task

# Load training and test data (here, CSV files hosted on S3)
train_data = task.Dataset(file_path='https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
test_data = task.Dataset(file_path='https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')

# Fit models to predict the 'class' column, then evaluate on the held-out test data
predictor = task.fit(train_data=train_data, label='class')
performance = predictor.evaluate(test_data)
```
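A brief follow-up on the quick start above, reusing the same trained `predictor` (a minimal sketch; see the website tutorials for the full predictor interface):

```python
# Use the trained predictor to generate predictions for the test set
y_pred = predictor.predict(test_data)

# Summarize the individual models AutoGluon trained and their validation scores
predictor.leaderboard()
```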
Announcement for previous users: The AutoGluon codebase has been modularized into namespace packages, which means you now only need the dependencies relevant to your prediction task of interest! For example, you can now work with tabular data without having to install the dependencies required for AutoGluon's computer vision tasks (and vice versa). Unfortunately, this improvement required a minor API change (e.g., instead of `from autogluon import TabularPrediction`, you should now do `from autogluon.tabular import TabularPrediction`) for all versions newer than v0.0.15. Documentation/tutorials under the old API may still be viewed for version 0.0.15, which is the last released version under the old API.
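In practice, migrating to the namespace packages only changes the import line; a minimal sketch (the surrounding `task.fit` usage shown above stays the same):

```python
# Old API (v0.0.15 and earlier):
# from autogluon import TabularPrediction as task

# New API (versions newer than v0.0.15):
from autogluon.tabular import TabularPrediction as task
```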
See the AutoGluon Website for documentation and instructions on:
- Tips to maximize accuracy (if benchmarking, make sure to run `fit()` with the argument `presets='best_quality'`; a short sketch follows this list).
- More advanced topics such as Neural Architecture Search
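As a sketch of the accuracy tip above, reusing the quick-start data and `task.fit` call from earlier (the `presets` argument comes from the tip itself; consult the website for the other presets available in your version):

```python
from autogluon.tabular import TabularPrediction as task

train_data = task.Dataset(file_path='https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')

# 'best_quality' spends more training time (and disk space) in exchange for
# higher accuracy, which is the recommended setting when benchmarking.
predictor = task.fit(train_data=train_data, label='class', presets='best_quality')
```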
- AutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions (AWS Open Source Blog, Mar 2020)
- Accurate image classification in 3 lines of code with AutoGluon (Medium, Feb 2020)
- AutoGluon overview & example applications (Towards Data Science, Dec 2019)
- From HPO to NAS: Automated Deep Learning (CVPR 2020)
- Practical Automated Machine Learning with Tabular, Text, and Image Data (KDD 2020)
- AutoGluon-Tabular on AWS Marketplace
- Running AutoGluon-Tabular on Amazon SageMaker
- Running AutoGluon Image Classification on Amazon SageMaker
If you use AutoGluon in a scientific publication, please cite the following paper:
Erickson, Nick, et al. "AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data." arXiv preprint arXiv:2003.06505 (2020).
BibTeX entry:
```bibtex
@article{agtabular,
  title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},
  author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},
  journal={arXiv preprint arXiv:2003.06505},
  year={2020}
}
```
AutoGluon also provides state-of-the-art tools for neural hyperparameter and architecture search (HNAS), such as ASHA, Hyperband, Bayesian optimization, and BOHB. To get started, check out the following resources (a rough usage sketch follows the list):
- General introduction into HNAS
- Introduction into HNAS with AutoGluon
- Example notebook
- Example scripts for efficient multi-fidelity HNAS of PyTorch neural network models
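As an illustrative sketch only, assuming the `autogluon.core` searcher/scheduler interface covered in the resources above (the training function, search space, and scheduler options here are placeholders, so check the linked tutorials for the exact, current API):

```python
import autogluon.core as ag

# Hypothetical training function: the hyperparameter space and the reported
# accuracy below are placeholders, not real training code.
@ag.args(
    lr=ag.space.Real(1e-4, 1e-1, log=True),
    wd=ag.space.Real(1e-6, 1e-2, log=True),
    epochs=10,
)
def train_fn(args, reporter):
    for epoch in range(1, args.epochs + 1):
        accuracy = 0.0  # replace with real training/validation logic
        reporter(epoch=epoch, accuracy=accuracy)  # report intermediate results

# Asynchronous Hyperband scheduler; searcher='bayesopt' selects the model-based
# variant discussed in the paper cited below.
scheduler = ag.scheduler.HyperbandScheduler(
    train_fn,
    resource={'num_cpus': 2, 'num_gpus': 0},
    num_trials=20,
    time_attr='epoch',
    reward_attr='accuracy',
    searcher='bayesopt',
)
scheduler.run()
scheduler.join_jobs()
print(scheduler.get_best_config(), scheduler.get_best_reward())
```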
Also have a look at our paper "Model-based Asynchronous Hyperparameter and Neural Architecture Search," arXiv preprint arXiv:2003.10865 (2020). BibTeX entry:
```bibtex
@article{abohb,
  title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},
  author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},
  journal={arXiv preprint arXiv:2003.10865},
  year={2020}
}
```
This library is licensed under the Apache 2.0 License.
We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the Contributing Guide to get started.