Oracle Guardian AI Open Source Project

Oracle Guardian AI Open Source Project is a library of tools for assessing the fairness/bias and privacy of machine learning models and datasets. The package contains two modules: fairness and privacy_estimation.

The Fairness module offers tools to help you diagnose and understand unintended bias in your dataset and model, so that you can take steps toward more inclusive and fair applications of machine learning.

The Privacy Estimation module helps estimate how much sensitive information the training data might leak through attacks on machine learning (ML) models. The main idea is to carry out membership inference attacks against a target model trained on a sensitive dataset, and to measure their success to estimate the risk of leakage.
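
To make the attack concrete: a membership inference attack trains a second, "attack" model to tell records the target model was trained on apart from records it never saw, typically using the target model's output confidences as features. Below is a minimal, library-agnostic sketch of this idea using scikit-learn; it does not use guardian-ai's privacy_estimation API, and all names in it are illustrative.

# A minimal, library-agnostic sketch of a membership inference attack
# (illustrative only -- this is NOT the guardian-ai privacy_estimation API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
# "Members" are records the target model trains on; "non-members" are held out.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X_in, y_in)

# Attack features: the target model's top predicted probability. Overfit
# models tend to be more confident on training records than on unseen ones.
conf_in = target.predict_proba(X_in).max(axis=1)
conf_out = target.predict_proba(X_out).max(axis=1)
attack_X = np.concatenate([conf_in, conf_out]).reshape(-1, 1)
attack_y = np.concatenate([np.ones_like(conf_in), np.zeros_like(conf_out)])

# An attack accuracy well above 0.5 suggests the target model leaks
# membership information about its training data.
attack = LogisticRegression().fit(attack_X, attack_y)
print("attack accuracy:", attack.score(attack_X, attack_y))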

Installation

You have several options for installing oracle-guardian-ai.

Installing the oracle-guardian-ai base package

python3 -m pip install oracle-guardian-ai

Installing extra libraries

The all-optional extra installs all optional dependencies. Note the single quotes around the bracketed package name, which keep your shell from interpreting the brackets.

python3 -m pip install 'oracle-guardian-ai[all-optional]'

To work with fairness/bias, install the fairness extra. Its additional dependencies are listed in requirements-fairness.txt.

python3 -m pip install 'oracle-guardian-ai[fairness]'

To work with privacy estimation, install the privacy extra. Its additional dependencies are listed in requirements-privacy.txt.

python3 -m pip install 'oracle-guardian-ai[privacy]'
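
After installing, you can do a quick sanity check by printing the installed version (this uses Python's standard importlib.metadata, nothing specific to guardian-ai; note the distribution name is oracle-guardian-ai while the import name is guardian_ai):

python3 -c "from importlib.metadata import version; print(version('oracle-guardian-ai'))"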

Documentation

Examples

Measurement with a Fairness Metric

# Score the disparity in a model's predictions across groups of a protected attribute
from guardian_ai.fairness.metrics import ModelStatisticalParityScorer
fairness_score = ModelStatisticalParityScorer(protected_attributes='<target_attribute>')
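
The scorer follows the scikit-learn scorer convention, so it can then be called directly. An illustrative call, assuming a fitted classifier model and test data X_test / y_test where X_test includes the protected attribute column:

score = fairness_score(model, X_test, y_test)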

Bias Mitigation

from guardian_ai.fairness.bias_mitigation import ModelBiasMitigator

# Wrap a trained model, choosing which fairness and accuracy metrics to trade off
bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names='<target_attribute>',
    fairness_metric="statistical_parity",
    accuracy_metric="balanced_accuracy",
)

# Tune the mitigation on held-out validation data, then predict as usual
bias_mitigated_model.fit(X_val, y_val)
bias_mitigated_model.predict(X_test)
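
Note that fit takes a post-processing approach: rather than retraining model itself, the mitigator searches for group-specific decision thresholds over the model's predicted probabilities that best trade off the chosen fairness and accuracy metrics. This is also why it is tuned on held-out validation data (X_val, y_val) rather than on the training set.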

Contributing

This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.

Find Getting Started instructions for developers in README-development.md.

Security

Consult the security guide SECURITY.md for our responsible security vulnerability disclosure process.

License

Copyright (c) 2023 Oracle and/or its affiliates. Licensed under the Universal Permissive License v1.0.