fairness-ai

There are 127 repositories under the fairness-ai topic.

  • Giskard-AI/giskard

    🐢 Open-Source Evaluation & Testing for LLMs and ML models

    Language: Python · 3.5k stars
  • Trusted-AI/AIF360

    A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

    Language: Python · 2.3k stars
  • fairlearn/fairlearn

    A Python package to assess and improve fairness of machine learning models.

    Language: Python · 1.8k stars
  • microsoft/responsible-ai-toolbox

    Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.

    Language: TypeScript · 1.3k stars
  • dccuchile/wefe

    WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!

    Language: Python
  • linkedin/LiFT

    The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.

    Language: Scala
  • microsoft/SafeNLP

    Safety Score for Pre-Trained Language Models

    Language: Python
  • ResponsiblyAI/responsibly

    Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰

    Language: Python
  • firmai/ml-fairness-framework

    FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)

    Language: Jupyter Notebook
  • aws/amazon-sagemaker-clarify

    Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.

    Language: Python
  • IBM/inFairness

    PyTorch package to train and audit ML models for Individual Fairness

    Language: Python
  • brandeis-machine-learning/awesome-ml-fairness

    Papers and online resources related to machine learning fairness

  • pliang279/sent_debias

    [ACL 2020] Towards Debiasing Sentence Representations

    Language: Python
  • deel-ai/influenciae

    👋 Influenciae is a TensorFlow toolbox for influence functions

    Language: Python
  • pliang279/LM_bias

    [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models

    Language: Python
  • CODAIT/presentations

    Talks & Workshops by the CODAIT team

    Language: Jupyter Notebook
  • credo-ai/credoai_lens

    Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.

    Language: Python
  • fidelity/jurity

    [ICMLA 2021] Jurity: Fairness & Evaluation Library

    Language: Python
  • AthenaCore/AwesomeResponsibleAI

    A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.

  • txsun1997/Metric-Fairness

    EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation

    Language: Jupyter Notebook
  • kozodoi/fairness

    R package for computing and visualizing fair ML metrics

    Language: R
  • microsoft/responsible-ai-toolbox-genbit

    A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.

    Language: Python
  • mlr-org/mcboost

    Multi-Calibration & Multi-Accuracy Boosting for R

    Language: R
  • ClearExplanationsAI/CLEAR

    Counterfactual Local Explanations of AI systems

    Language: Python
  • dbountouridis/siren

    SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments

    Language: Python
  • wearepal/EthicML

    Package for evaluating the performance of methods which aim to increase fairness, accountability and/or transparency

    Language: Python
  • microsoft/responsible-ai-workshop

    Responsible AI Workshop: a series of tutorials & walkthroughs illustrating how to put responsible AI into practice

    Language: Jupyter Notebook
  • cylynx/verifyml

    Open-source toolkit to help companies implement responsible AI workflows.

    Language: Python
  • jphall663/hc_ml

    Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.

    Language: TeX
  • aida-ugent/fairret

    A fairness library in PyTorch.

    Language: Python
  • oracle-samples/automlx

    This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.

  • yuji-roh/fairbatch

    FairBatch: Batch Selection for Model Fairness (ICLR 2021)

    Language: Python
  • heyaudace/ml-bias-fairness

    Data and Model-based approaches for Mitigating Bias in Machine Learning Applications

    Language: Jupyter Notebook
  • monk1337/Awesome-Robust-Machine-Learning

    A curated list of Robust Machine Learning papers/articles and recent advancements.

  • valeria-io/bias-in-credit-models

    Examples of unfairness detection for a classification-based credit model

    Language: Jupyter Notebook
  • ajsanjoaquin/Shapley_Valuation

    PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuation of Data" by Amirata Ghorbani and James Zou [ICML 2019]

    Language: Python
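Most of the metric libraries above (fairlearn, AIF360, jurity, LiFT) report group-fairness measures such as the demographic parity difference: the gap in positive-prediction rate between groups defined by a sensitive attribute. A minimal stdlib-only sketch of that metric, written from the standard definition rather than any of these libraries' actual APIs:

```python
def demographic_parity_difference(y_pred, sensitive):
    """Max minus min positive-prediction rate across sensitive groups.

    Illustrative helper, not any listed library's function.
    """
    groups = {}  # group -> (count, positives)
    for pred, group in zip(y_pred, sensitive):
        n, pos = groups.get(group, (0, 0))
        groups[group] = (n + 1, pos + (1 if pred == 1 else 0))
    rates = [pos / n for n, pos in groups.values()]
    return max(rates) - min(rates)

# Group "a" receives positive predictions at rate 0.75, group "b" at 0.25.
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, sensitive))  # 0.5
```

A value of 0 means both groups receive positive predictions at the same rate; the toolkits above typically pair such metrics with mitigation algorithms that drive this gap down.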
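The Truncated Monte Carlo Shapley idea behind ajsanjoaquin/Shapley_Valuation can be sketched in plain Python: average each point's marginal contribution over random permutations, and cut a permutation short once the running utility is within a tolerance of the full-data utility. The `utility` callable, permutation count, and tolerance here are illustrative assumptions, not that repository's API:

```python
import random

def tmc_shapley(points, utility, num_perms=200, tol=1e-3, seed=0):
    """Monte Carlo estimate of per-point Shapley values with truncation."""
    rng = random.Random(seed)
    n = len(points)
    values = [0.0] * n
    full = utility(points)                 # utility of the full dataset
    for t in range(1, num_perms + 1):
        order = list(range(n))
        rng.shuffle(order)
        subset, prev = [], utility([])
        for idx in order:
            if abs(full - prev) < tol:
                marginal = 0.0             # truncation: remaining gains negligible
            else:
                subset.append(points[idx])
                cur = utility(subset)
                marginal, prev = cur - prev, cur
            values[idx] += (marginal - values[idx]) / t  # running mean
    return values

# Toy additive game: when utility is a plain sum, each point's Shapley
# value equals its own contribution.
print(tmc_shapley([1, 2, 3], sum, num_perms=50))  # [1.0, 2.0, 3.0]
```

With a real model, `utility` would train on the subset and return a validation score, which is where truncation pays off: most of a permutation's tail adds nothing and can be skipped.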