fairness-ai
There are 127 repositories under the fairness-ai topic.
Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing for LLMs and ML models
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
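Among the dataset-level metrics AIF360 covers is disparate impact, the ratio of selection rates between an unprivileged and a privileged group. A minimal plain-Python sketch of that metric on made-up data (not AIF360's API):

```python
# Disparate impact = P(pred=1 | unprivileged) / P(pred=1 | privileged).
# Values below roughly 0.8 are commonly flagged (the "four-fifths rule").
# The group labels and predictions below are toy data.

def selection_rate(preds):
    """Fraction of positive predictions."""
    return sum(preds) / len(preds)

def disparate_impact(preds_unprivileged, preds_privileged):
    """Ratio of selection rates between the two groups."""
    return selection_rate(preds_unprivileged) / selection_rate(preds_privileged)

privileged = [1, 1, 1, 0, 1, 0, 1, 1]      # selection rate 0.75
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]    # selection rate 0.375

print(disparate_impact(unprivileged, privileged))  # 0.5
```

AIF360 computes this (and many related metrics) through its dataset and metric classes, and pairs them with mitigation algorithms; the sketch only shows what the number means.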
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
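One of the group-fairness criteria fairlearn assesses is demographic parity; a plain-Python sketch of the underlying quantity (toy data, not fairlearn's API):

```python
# Demographic parity difference: the gap between the highest and
# lowest group-wise selection rates. Zero means every group receives
# positive predictions at the same rate.

def demographic_parity_difference(preds, groups):
    """Max minus min selection rate across groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_difference(preds, groups))  # 0.5
```

Group "a" is selected at rate 0.75 and group "b" at 0.25, so the difference is 0.5; fairlearn additionally offers mitigation algorithms that reduce such gaps subject to accuracy constraints.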
microsoft/responsible-ai-toolbox
The Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
dccuchile/wefe
WEFE: The Word Embedding Fairness Evaluation Framework, which standardizes bias measurement and mitigation for word embedding models. Feel free to open an issue if you have questions, or a pull request if you want to contribute to the project!
linkedin/LiFT
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows.
microsoft/SafeNLP
Safety Score for Pre-Trained Language Models
ResponsiblyAI/responsibly
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
firmai/ml-fairness-framework
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
aws/amazon-sagemaker-clarify
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness
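Individual fairness is the "treat similar individuals similarly" criterion that inFairness trains and audits models for: a model f is fair with respect to a similarity metric d if |f(x1) - f(x2)| <= L * d(x1, x2) for all pairs. A toy brute-force audit of that condition; the model, metric, and constant L below are illustrative choices, not inFairness's API:

```python
# Toy individual-fairness audit: report every pair of inputs whose
# prediction gap exceeds the Lipschitz bound L * d(x1, x2).

def audit_individual_fairness(model, points, dist, L):
    """Return all pairs of points that violate the Lipschitz bound."""
    violations = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            gap = abs(model(points[i]) - model(points[j]))
            if gap > L * dist(points[i], points[j]):
                violations.append((points[i], points[j]))
    return violations

model = lambda x: 1.0 if x[0] > 0.5 else 0.0             # sharp threshold
dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # L1 distance
points = [(0.0, 0.0), (0.4, 0.0), (0.6, 0.0)]

# The two near-identical points straddling the threshold receive very
# different scores, so they surface as a violation.
print(audit_individual_fairness(model, points, dist, L=2.0))
# [((0.4, 0.0), (0.6, 0.0))]
```

In practice the hard part is learning a meaningful similarity metric d, which is one of the problems inFairness addresses.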
brandeis-machine-learning/awesome-ml-fairness
Papers and online resources related to machine learning fairness
pliang279/sent_debias
[ACL 2020] Towards Debiasing Sentence Representations
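A common primitive in this line of work (e.g. hard debiasing in the style of Bolukbasi et al., which sentence-level debiasing builds on) is projecting out an estimated bias direction from a representation. A plain-Python sketch with toy vectors, not the repo's actual pipeline:

```python
# Remove a vector's component along a (unit-norm) bias direction, so
# the debiased representation is orthogonal to that direction.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def debias(vec, bias_dir):
    """Subtract vec's projection onto bias_dir (assumed unit-norm)."""
    coef = dot(vec, bias_dir)
    return [a - coef * b for a, b in zip(vec, bias_dir)]

bias_dir = [1.0, 0.0, 0.0]   # toy bias direction, already normalized
v = [0.6, 0.8, 0.0]

debiased = debias(v, bias_dir)
print(debiased)                    # [0.0, 0.8, 0.0]
print(dot(debiased, bias_dir))     # 0.0
```

The sentence-representation setting adds the harder questions of how to estimate the bias subspace and which components are safe to remove, which is what the paper studies.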
deel-ai/influenciae
👋 Influenciae is a TensorFlow toolbox for influence functions
pliang279/LM_bias
[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models
CODAIT/presentations
Talks & Workshops by the CODAIT team
credo-ai/credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
fidelity/jurity
[ICMLA 2021] Jurity: Fairness & Evaluation Library
AthenaCore/AwesomeResponsibleAI
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
txsun1997/Metric-Fairness
EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
kozodoi/fairness
R package for computing and visualizing fair ML metrics
microsoft/responsible-ai-toolbox-genbit
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.
mlr-org/mcboost
Multi-Calibration & Multi-Accuracy Boosting for R
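Multi-calibration requires that predicted probabilities match observed outcome rates within every (possibly overlapping) subgroup, not just on average over the whole population. A toy per-group calibration check in Python (mcboost itself is an R package; the data here is made up):

```python
# Per-subgroup calibration check: a model can be perfectly calibrated
# overall while being badly miscalibrated inside each subgroup.

def calibration_gap(preds, labels):
    """Absolute gap between mean prediction and observed outcome rate."""
    return abs(sum(preds) / len(preds) - sum(labels) / len(labels))

def group_calibration(preds, labels, group_masks):
    """Calibration gap restricted to each named subgroup."""
    gaps = {}
    for name, mask in group_masks.items():
        p = [x for x, m in zip(preds, mask) if m]
        y = [x for x, m in zip(labels, mask) if m]
        gaps[name] = calibration_gap(p, y)
    return gaps

preds  = [0.9, 0.9, 0.1, 0.1]
labels = [1,   0,   1,   0]
masks = {"all": [1, 1, 1, 1], "g1": [1, 1, 0, 0], "g2": [0, 0, 1, 1]}

print({name: round(g, 6) for name, g in group_calibration(preds, labels, masks).items()})
# {'all': 0.0, 'g1': 0.4, 'g2': 0.4}
```

The example is calibrated on average (gap 0) but off by 0.4 within each subgroup; multi-calibration boosting iteratively corrects predictions on subgroups where such gaps are found.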
ClearExplanationsAI/CLEAR
Counterfactual Local Explanations of AI systems
dbountouridis/siren
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
wearepal/EthicML
Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency
microsoft/responsible-ai-workshop
Responsible AI Workshop: a series of tutorials & walkthroughs to illustrate how to put responsible AI into practice
cylynx/verifyml
Open-source toolkit to help companies implement responsible AI workflows.
jphall663/hc_ml
Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.
aida-ugent/fairret
A fairness library in PyTorch.
oracle-samples/automlx
This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.
yuji-roh/fairbatch
FairBatch: Batch Selection for Model Fairness (ICLR 2021)
heyaudace/ml-bias-fairness
Data- and model-based approaches for mitigating bias in machine learning applications
monk1337/Awesome-Robust-Machine-Learning
A curated list of Robust Machine Learning papers/articles and recent advancements.
valeria-io/bias-in-credit-models
Examples of unfairness detection for a classification-based credit model
ajsanjoaquin/Shapley_Valuation
PyTorch reimplementation of Shapley value computation via Truncated Monte Carlo sampling, from "What is your data worth? Equitable Valuation of Data" by Amirata Ghorbani and James Zou (ICML 2019)
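The core idea behind Monte Carlo data Shapley is to average each training point's marginal contribution to a utility (typically validation performance) over random permutations; the truncated variant additionally stops scanning a permutation once further marginal gains become negligible. A plain-Python sketch of the untruncated estimator with a toy utility, not the repo's PyTorch implementation:

```python
# Monte Carlo estimate of data Shapley values: for each sampled
# permutation, add points one by one and credit each point with the
# resulting change in utility, then average over permutations.
import random

def mc_shapley(points, utility, n_perms=2000, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(points)
    for _ in range(n_perms):
        order = list(range(len(points)))
        rng.shuffle(order)
        coalition, prev = [], utility([])
        for idx in order:
            coalition.append(points[idx])
            cur = utility(coalition)
            values[idx] += cur - prev   # marginal contribution
            prev = cur
    return [v / n_perms for v in values]

# Toy utility: number of distinct labels the coalition covers.
points = ["a", "a", "b"]
utility = lambda s: len(set(s))
print([round(v, 2) for v in mc_shapley(points, utility)])
```

With enough permutations the estimates converge to the exact Shapley values (0.5, 0.5, 1.0): the point carrying the unique label "b" earns a full unit of utility, while the two duplicated "a" points split one unit between them.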