This repository aims to provide a concise summary of machine learning (ML) reporting guidelines.
Many reporting guidelines are in development for the clinical evaluation of AI/ML-enabled medical research and development in real-world settings.
These guidelines are useful because they provide a key part of the evidence used to assess whether an AI technology is sufficiently safe and effective for use.
Minimum reporting guidelines for clinical evaluation can improve the quality of clinical evaluation and promote complete, transparent reporting when evaluating AI/ML-enabled products.
- Assessment of Adherence to Reporting Guidelines by Commonly Used Clinical Prediction Models From a Single Vendor: A Systematic Review (JAMA, 2022) - (Google Scholar)
- Completeness of reporting of clinical prediction models developed using supervised machine learning: a systematic review (BMC Medical Research Methodology, 2022) - (Google Scholar)
- Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare (BMJ Health Care Inform, 2021) - (Google Scholar)
- Reporting quality of studies using machine learning models for medical diagnosis: a systematic review (BMJ Open, 2019) - (Google Scholar)
- CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials (International Journal of Surgery, 2010) - (Google Scholar)
- CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomized trials (Journal of Clinical Epidemiology, 2010) - (Google Scholar)
- Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension (Nature Medicine, 2020) - (Google Scholar)
- Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension (BMJ, 2020)
- Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines (Trials, 2021) - (Google Scholar)
- Equator - Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension
- Risk prediction models: I. Development, internal validation, and assessing the incremental value of a new (bio)marker (Heart, 2012) - (Google Scholar)
- Risk prediction models: II. External validation, model updating, and impact assessment (Heart, 2012) - (Google Scholar)
- Equator -
- SPIRIT 2013 Statement: Defining Standard Protocol Items for Clinical Trials (Annals of Internal Medicine, 2013) - (Google Scholar)
- SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials (BMJ, 2013)
- Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension (BMJ, 2020) - (Google Scholar)
- Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines (Trials, 2021) - (Google Scholar)
- Equator - Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI Extension
- Toward better clinical prediction models: seven steps for development and an ABCD for validation (European Heart Journal, 2014) - (Google Scholar)
- Equator -
- Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies: The CHARMS Checklist (PLoS Medicine, 2014) - (Google Scholar)
- Equator -
- Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): The TRIPOD Statement (British Journal of Surgery, 2015) - (Google Scholar)
- Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration (Annals of Internal Medicine, 2015) - (Google Scholar)
- Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence (BMJ Open, 2021) - (Google Scholar)
- Equator -
- STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration (BMJ Open, 2016) - (Google Scholar)
- Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol (BMJ Open, 2021) - (Google Scholar)
- Equator -
- Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View (Journal of Medical Internet Research, 2016) - (Google Scholar)
- Equator -
- What’s your ML test score? A rubric for ML production systems (NIPS 2016 Workshop) - (Google Scholar)
- The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction (Proceedings of the 2017 IEEE International Conference on Big Data, 2017), Google Research - (Google Scholar)
- Equator -
- YouTube
- PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies (Annals of Internal Medicine, 2019) - (Google Scholar)
- PROBAST: A Tool to Assess Risk of Bias and Applicability of Prediction Model Studies: Explanation and Elaboration (Annals of Internal Medicine, 2019) - (Google Scholar)
- Equator -
- Model Cards for Model Reporting (Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019), ACM Library - (Google Scholar)
- Equator -
- Presenting machine learning model information to clinical end users with model facts labels (NPJ Digital Medicine, 2020) - (Google Scholar)
- Equator -
- MINIMAR (MINimum Information for Medical AI Reporting): Developing reporting standards for artificial intelligence in health care (Journal of the American Medical Informatics Association, 2020) - (Google Scholar)
- Equator -
- Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist (Nature Medicine, 2020) - (Google Scholar)
- Equator - Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist
- GitHub
- AI-Enabled Clinical Decision Support Software: A “Trust and Value Checklist” for Clinicians (NEJM Catalyst, 2020) - (Google Scholar)
- Equator -
- DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence (Nature Medicine, 2021) - (Google Scholar)
- Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI (Nature Medicine, 2022) - (Google Scholar)
- Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI (BMJ, 2022)
- Equator - Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
Issues and pull requests are greatly appreciated. If you've never contributed to an open source project before, I'm more than happy to walk you through creating a pull request.
You can start by opening an issue describing the problem you're looking to resolve, and we'll go from there.
This document is licensed under the MIT license © Jonghong Jeon, 2022