responsible-ai
There are 155 repositories under the responsible-ai topic.
EthicalML/awesome-production-machine-learning
A curated list of awesome open-source libraries to deploy, monitor, version, and scale machine learning in production
Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing for AI & LLM systems
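For example, Giskard can run an automated vulnerability scan over a wrapped model and dataset. A minimal sketch, assuming the Giskard 2.x Python API, an already-trained scikit-learn classifier `clf`, and an illustrative churn dataset with a "label" column:

```python
import giskard
import pandas as pd

df = pd.read_csv("churn.csv")  # hypothetical dataset with a "label" target column

# Wrap the data and a prediction function so Giskard can probe them.
dataset = giskard.Dataset(df, target="label")
model = giskard.Model(
    model=lambda batch: clf.predict_proba(batch),  # clf: pre-trained classifier (assumed)
    model_type="classification",
    classification_labels=["no_churn", "churn"],
)

# Scan for performance, robustness, and fairness issues, then export a report.
results = giskard.scan(model, dataset)
results.to_html("giskard_scan.html")
```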
Azure/PyRIT
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
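A minimal sketch of a fairlearn-style assessment, computing a metric broken down by a sensitive feature (the data arrays and the sensitive column are assumed for illustration):

```python
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# y_true, y_pred, and the sensitive feature (e.g. a "sex" column) come from your own data.
mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.overall)       # accuracy over the whole test set
print(mf.by_group)      # accuracy per group of the sensitive feature
print(mf.difference())  # largest gap between groups

# Selection-rate disparity between groups (closer to 0 means more parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive))
```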
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. They empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
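The toolbox is typically driven through the `responsibleai` and `raiwidgets` packages. A rough sketch, assuming a trained scikit-learn model and pandas train/test frames with an illustrative "income" target column:

```python
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# model, train_df, and test_df are assumed to exist; "income" is the label column.
insights = RAIInsights(model, train_df, test_df,
                       target_column="income", task_type="classification")
insights.explainer.add()       # global and local feature importances
insights.error_analysis.add()  # error tree and heatmap
insights.compute()

# Launches the interactive Responsible AI dashboard in a notebook.
ResponsibleAIDashboard(insights)
```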
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation
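DALEX ships for both R and Python; a short sketch using the `dalex` Python package with an assumed fitted scikit-learn classifier `clf` and its training data `X`, `y`:

```python
import dalex as dx

# clf is an already-fitted scikit-learn model; X, y are the data it was trained on.
explainer = dx.Explainer(clf, X, y, label="random_forest")

explainer.model_performance()                # overall performance metrics
explainer.model_parts().plot()               # permutation-based variable importance
explainer.predict_parts(X.iloc[[0]]).plot()  # break-down explanation for one observation
```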
microsoft/rag-time
RAG Time: A 5-week Learning Journey to Mastering RAG
JohnSnowLabs/langtest
Deliver safe & effective language models
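Langtest exposes a `Harness` that generates perturbed test cases, runs them, and reports pass rates. A sketch under the assumption of a Hugging Face text-classification model (the model id and task are illustrative):

```python
from langtest import Harness

# Wrap a model from a supported hub; the model id here is only an example.
harness = Harness(
    task="text-classification",
    model={"model": "distilbert-base-uncased-finetuned-sst-2-english", "hub": "huggingface"},
)

# Generate robustness/bias test cases, run them, and summarise pass rates.
harness.generate().run().report()
```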
tensorflow/model-card-toolkit
A toolkit that streamlines and automates the generation of model cards
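A minimal sketch of generating a model card with the toolkit (field values are placeholders, and the API calls follow the documented scaffold/update/export flow as I understand it):

```python
import model_card_toolkit as mct

toolkit = mct.ModelCardToolkit(output_dir="model_card_output")

# Scaffold a card, fill in a few fields, then render it to HTML.
card = toolkit.scaffold_assets()
card.model_details.name = "Example classifier"
card.model_details.overview = "What the model does and its intended use."
toolkit.update_model_card(card)
html = toolkit.export_format()
```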
hbaniecki/adversarial-explainable-ai
💡 Adversarial attacks on explanations and how to defend them
cvs-health/langfair
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
serodriguez68/designing-ml-systems-summary
A detailed summary of "Designing Machine Learning Systems" by Chip Huyen. This book gives you an end-to-end view of all the steps required to build and operate ML products in production. It is a must-read for ML practitioners and software engineers transitioning into ML.
natnew/Awesome-Data-Science
Carefully curated list of awesome data science resources.
EzgiKorkmaz/adversarial-reinforcement-learning
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Inf-imagine/Sentry
[NeurIPS 2023] Sentry-Image: Detect Any AI-generated Images
ml-for-high-risk-apps-book/Machine-Learning-for-High-Risk-Applications-Book
Official code repo for the O'Reilly Book - Machine Learning for High-Risk Applications
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
AthenaCore/AwesomeResponsibleAI
A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
LabeliaLabs/referentiel-evaluation-dsrc
Evaluation framework for responsible and trustworthy data science
romanlutz/ResponsibleAI
A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoid repeating the failures of the past.
humansensinglab/ITI-GEN
[ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness
microsoft/responsible-ai-toolbox-mitigations
Python library for implementing Responsible AI mitigations.
oracle/guardian-ai
Oracle Guardian AI Open Source Project is a library consisting of tools to assess fairness/bias and privacy of machine learning models and data sets.
credo-ai/credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
microsoft/responsible-ai-workshop
Responsible AI Workshop: a series of tutorials & walkthroughs illustrating how to put responsible AI into practice
zhihengli-UR/StyleT2I
Official code of "StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis" (CVPR 2022)
dogweather/forkful
An open-content programming cookbook and a proof of concept for responsible use of AI. Collaborative, polyglot, and multilingual.
mlr-org/mcboost
Multi-Calibration & Multi-Accuracy Boosting for R
zhihengli-UR/DebiAN
Official code of "Discover and Mitigate Unknown Biases with Debiasing Alternate Networks" (ECCV 2022)
JGalego/awesome-safety-critical-ai
A curated list of references on the role of AI in safety-critical systems ⚠️
cylynx/verifyml
Open-source toolkit to help companies implement responsible AI workflows.
hupe1980/aisploit
🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
koo-ec/Awesome-LLM-Explainability
A curated list of explainability-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights into the explainability implications, challenges, and advancements surrounding these powerful models.
wearepal/EthicML
Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
PAIR-code/farsight
In situ interactive widgets for responsible AI 🌱