Individual Fairness and inFairness
Intuitively, an individually fair Machine Learning (ML) model treats similar inputs similarly. Formally, the leading notion of individual fairness is metric fairness (Dwork et al., 2011); it requires:
$$ d_y (h(x_1), h(x_2)) \leq L d_x(x_1, x_2) \quad \forall \quad x_1, x_2 \in X $$
Here, $h : X \rightarrow Y$ is an ML model, where $X$ and $Y$ are the input and output spaces; $d_x$ and $d_y$ are metrics on the input and output spaces, and $L \geq 0$ is a Lipschitz constant. This Lipschitz constraint states that the distance between the model predictions for two inputs $x_1$ and $x_2$ is upper-bounded by the fair distance between the inputs $x_1$ and $x_2$. Here, the fair metric $d_x$ encodes our intuition of which samples should be treated similarly by the ML model, and by designing it this way, we ensure that for input samples considered similar by the fair metric $d_x$, the model outputs will be similar as well.
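As a concrete illustration, the sketch below checks the metric-fairness inequality for a single pair of inputs. It does not use the package API; the toy model h and the plain Euclidean metrics d_x and d_y are hypothetical stand-ins chosen only to make the inequality tangible.

import torch

# Hypothetical stand-ins (not part of inFairness): a toy linear "model" and Euclidean metrics
def h(x):
    W = torch.tensor([[0.5, -0.2], [0.1, 0.3]])
    return W @ x

def d_x(x1, x2):
    return torch.norm(x1 - x2)

def d_y(y1, y2):
    return torch.norm(y1 - y2)

L = 1.0  # Lipschitz constant

x1 = torch.tensor([1.0, 2.0])
x2 = torch.tensor([1.1, 1.9])  # an input that d_x considers similar to x1

# Metric fairness requires d_y(h(x1), h(x2)) <= L * d_x(x1, x2)
lhs = d_y(h(x1), h(x2)).item()
rhs = L * d_x(x1, x2).item()
print(f"d_y = {lhs:.4f}, L*d_x = {rhs:.4f}, constraint satisfied: {lhs <= rhs}")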
inFairness is a PyTorch package that supports auditing, training, and post-processing ML models for individual fairness. At its core, the library implements the key components of the individual fairness pipeline: $d_x$ - the distance in the input space, $d_y$ - the distance in the output space, and the learning algorithms to optimize for the equation above.
For an in-depth tutorial of Individual Fairness and the inFairness package, please watch this tutorial. Also, take a look at the examples folder for illustrative use-cases.
Installation
inFairness can be installed using pip:
pip install inFairness
Alternatively, if you wish to install the latest development version, you can install directly by cloning this repository:
git clone https://github.com/IBM/inFairness.git
cd inFairness
pip install -e .
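If you prefer an isolated environment, a common workflow is to install the package into a fresh virtual environment. This sketch uses standard Python tooling rather than anything specific to inFairness; the environment name .venv is arbitrary.

python3 -m venv .venv
source .venv/bin/activate
pip install inFairness
python -c "import inFairness"  # quick sanity check that the package imports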
Features
Learning individually fair metrics: [Docs]
Training of individually fair models: [Docs]
Auditing pre-trained ML models for individual fairness: [Docs]
Coming soon
We plan to extend the package by integrating the following features:
Post-processing for Individual Fairness: [Paper]
Individually fair ranking: [Paper]
Contributing
We welcome contributions from the community in any form - whether it is a new fair algorithm, a new fair metric, a new use-case, or simply a report of an issue or enhancement in the package. To contribute code to the package, please follow these steps:
Clone this git repository to your local system
Set up your system by installing the dependencies: pip3 install -r requirements.txt and pip3 install -r build_requirements.txt
Add your code contribution to the package. Please refer to the inFairness folder for an overview of the directory structure
Add appropriate unit tests in the tests folder
Once you are ready to commit code, check the following (a consolidated sketch of these commands is shown after this list):
Coding style compliance using: flake8 inFairness/. This command lists all stylistic violations found in the code; please fix as many as you can
Ensure all the test cases pass using: coverage run --source inFairness -m pytest tests/. All unit tests need to pass before code can be merged into the package.
Finally, commit your code and raise a Pull Request.
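A minimal end-to-end sketch of the steps above, using the commands from this section (the repository URL is assumed to be the IBM/inFairness GitHub repository):

git clone https://github.com/IBM/inFairness.git
cd inFairness

# Install runtime and build/test dependencies
pip3 install -r requirements.txt
pip3 install -r build_requirements.txt

# Check coding style (lists stylistic violations)
flake8 inFairness/

# Run the unit tests with coverage; all tests must pass before merging
coverage run --source inFairness -m pytest tests/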
Tutorials
The examples folder contains tutorials from different fields illustrating how to use the package.
Minimal example
First, you need to import the relevant packages:
from inFairness import distances
from inFairness.fairalgo import SenSeI
The inFairness.distances module implements various distance metrics on the input and the output spaces, and the inFairness.fairalgo module implements various individually fair learning algorithms, with SenSeI being one particular algorithm.
Thereafter, we instantiate and fit the distance metrics on the training data, and then instantiate the fair algorithm:
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.EuclideanDistance()

# Fit the input-space distance on the training data
distance_x.fit(X_train=data, n_components=50)

# Finally instantiate the fair algorithm
fairalgo = SenSeI(network, distance_x, distance_y, lossfn, rho=1.0, eps=1e-3, lr=0.01, auditor_nsteps=100, auditor_lr=0.1)
Finally, you can train the fairalgo as you would train your standard PyTorch deep neural network:
fairalgo.train()

for epoch in range(EPOCHS):
    for x, y in train_dl:
        optimizer.zero_grad()
        result = fairalgo(x, y)
        result.loss.backward()
        optimizer.step()
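For completeness, the sketch below fills in the pieces the snippet above assumes you already have (data, network, lossfn, optimizer, train_dl, EPOCHS). Everything outside the inFairness calls shown above is an illustrative assumption: the synthetic data, the toy network, and the choice of optimizer are placeholders, not recommendations from the package.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

from inFairness import distances
from inFairness.fairalgo import SenSeI

EPOCHS = 5  # placeholder

# Synthetic data purely for illustration: 1000 samples, 100 features, binary labels
data = torch.randn(1000, 100)
labels = torch.randint(0, 2, (1000,))
train_dl = DataLoader(TensorDataset(data, labels), batch_size=64, shuffle=True)

# A toy network and loss; use whatever model and loss fit your task
network = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
lossfn = nn.CrossEntropyLoss()

# Distances and fair algorithm, as in the snippet above
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_y = distances.EuclideanDistance()
distance_x.fit(X_train=data, n_components=50)

fairalgo = SenSeI(network, distance_x, distance_y, lossfn, rho=1.0, eps=1e-3, lr=0.01, auditor_nsteps=100, auditor_lr=0.1)

# Optimizer over the wrapped network's parameters (an assumption of this sketch)
optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)

fairalgo.train()
for epoch in range(EPOCHS):
    for x, y in train_dl:
        optimizer.zero_grad()
        result = fairalgo(x, y)
        result.loss.backward()
        optimizer.step()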
Authors
