Kedro is an open-source Python framework for creating reproducible, maintainable and modular data science code. It borrows concepts from software engineering, such as modularity, separation of concerns and versioning, and applies them to machine-learning code.
To install Kedro from the Python Package Index (PyPI) simply run:

```bash
pip install kedro
```

It is also possible to install Kedro using conda:

```bash
conda install -c conda-forge kedro
```
Our Get Started guide contains full installation instructions, including how to set up Python virtual environments.
*A pipeline visualisation generated using Kedro-Viz*
| Feature | What is this? |
| --- | --- |
| Project Template | A standard, modifiable and easy-to-use project template based on Cookiecutter Data Science. |
| Data Catalog | A series of lightweight data connectors used to save and load data across many different file formats and file systems, including local and network file systems, cloud object stores, and HDFS. The Data Catalog also includes data and model versioning for file-based systems. |
| Pipeline Abstraction | Automatic resolution of dependencies between pure Python functions, and data pipeline visualisation using Kedro-Viz. |
| Coding Standards | Test-driven development using `pytest`, well-documented code using Sphinx, linted code with support for `flake8`, `isort` and `black`, and use of the standard Python logging library. |
| Flexible Deployment | Deployment strategies that include single- or distributed-machine deployment, as well as additional support for deploying on Argo, Prefect, Kubeflow, AWS Batch and Databricks. |
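Datasets in the Data Catalog are typically declared in a `catalog.yml` configuration file rather than in code. The fragment below is a hypothetical example only; the exact dataset type names, file paths and options vary by Kedro version and project layout:

```yaml
# conf/base/catalog.yml -- hypothetical entries for illustration;
# dataset type names and paths depend on your Kedro version and project.
companies:
  type: pandas.CSVDataSet
  filepath: data/01_raw/companies.csv

model:
  type: pickle.PickleDataSet
  filepath: data/06_models/model.pkl
  versioned: true    # enables file-based versioning for this dataset
```

Pipeline code then refers to these datasets purely by name, so the same code can run against local files, cloud storage or in-memory data without modification.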
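The pipeline abstraction wires pure Python functions together by matching their named inputs and outputs. As a rough illustration of the idea in plain Python (this is a sketch of the concept, not Kedro's actual API), dependency resolution amounts to running each function once its named inputs become available:

```python
# Sketch of named-input/output dependency resolution, in the spirit of
# Kedro's pipeline abstraction. This is NOT Kedro's real API.

def make_node(func, inputs, outputs):
    """Bundle a pure function with the dataset names it reads and writes."""
    return {"func": func, "inputs": list(inputs), "outputs": list(outputs)}

def run_pipeline(nodes, catalog):
    """Run each node once its inputs exist in the catalog, in dependency order."""
    pending = list(nodes)
    while pending:
        ready = [n for n in pending if all(i in catalog for i in n["inputs"])]
        if not ready:
            raise ValueError("Unresolvable dependencies in pipeline")
        for n in ready:
            results = n["func"](*(catalog[i] for i in n["inputs"]))
            if len(n["outputs"]) == 1:
                results = (results,)
            catalog.update(zip(n["outputs"], results))
            pending.remove(n)
    return catalog

# Two pure functions chained by dataset names, declared in any order:
nodes = [
    make_node(lambda clean: sum(clean) / len(clean), ["clean"], ["mean"]),
    make_node(lambda raw: [x for x in raw if x is not None], ["raw"], ["clean"]),
]
catalog = run_pipeline(nodes, {"raw": [1, None, 2, 3]})
print(catalog["mean"])  # 2.0
```

Because nodes declare what they consume and produce rather than when they run, the ordering is derived automatically, which is also what allows Kedro-Viz to draw the pipeline as a dependency graph.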
The Kedro documentation includes three examples to help get you started:
- A typical "Hello World" example, for an entry-level description of the main Kedro concepts
- An introduction to the project template using the Iris dataset
- A more detailed spaceflights tutorial to give you hands-on experience
Kedro is built upon our collective best practices (and mistakes) from delivering real-world ML applications with vast amounts of raw, unvetted data. We developed Kedro to achieve the following:
- To address the main shortcomings of Jupyter notebooks, one-off scripts and glue code by focusing on creating maintainable data science code
- To enhance team collaboration when different team members have varied exposure to software engineering concepts
- To increase efficiency, because applied concepts like modularity and separation of concerns inspire the creation of reusable analytics code
Kedro is maintained by a product team from QuantumBlack and a number of contributors from across the world.
Want to help build Kedro? Check out our guide to contributing to Kedro.
There is a growing community around Kedro. Have a look at the Kedro FAQs to find projects using Kedro and links to articles, podcasts and talks.
There are Kedro users across the world, who work at start-ups, major enterprises and academic institutions like Absa, Acensi, AI Singapore, AXA UK, Belfius, Caterpillar, CRIM, Dendra Systems, Element AI, GMO, Imperial College London, Jungle Scout, Helvetas, Leapfrog, McKinsey & Company, Mercado Libre Argentina, Modec, Mosaic Data Science, NaranjaX, Open Data Science LatAm, Prediqt, QuantumBlack, Retrieva, Roche, Sber, Telkomsel, Universidad Rey Juan Carlos, UrbanLogiq, Wildlife Studios, WovenLight and XP.
Kedro has also won Best Technical Tool or Framework for AI in the 2019 Awards AI competition and a merit award for the 2020 UK Technical Communication Awards. It is listed on the 2020 ThoughtWorks Technology Radar and the 2020 Data & AI Landscape.
If you're an academic, Kedro can also help you, for example as a tool for producing reproducible research. Find our citation reference on Zenodo.