AIF360 | comprehensive set of fairness metrics | Artificial Intelligence library
kandi X-RAY | AIF360 Summary
The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R. The package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models. The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities, the tutorials and other notebooks offer a deeper, data scientist-oriented introduction, and the complete API is also available. Because the toolkit offers such a comprehensive set of capabilities, it may be difficult to figure out which metrics and algorithms are most appropriate for a given use case; to help, we have created guidance material that can be consulted. We have developed the package with extensibility in mind. This library is still in development, and we encourage the contribution of your metrics, explainers, and debiasing algorithms.
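As a quick orientation, here is a minimal sketch (not taken from the AIF360 documentation) of the detect-then-mitigate flow described above. The toy DataFrame and column names are invented for illustration; the pieces exercised are the real AIF360 BinaryLabelDataset, BinaryLabelDatasetMetric, and Reweighing pre-processing APIs.

# Minimal sketch: toy data and column names are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    'sex':   [1, 1, 1, 0, 0, 0, 1, 0],        # protected attribute (1 = privileged)
    'hours': [40, 50, 45, 38, 40, 35, 60, 30],
    'label': [1, 1, 0, 0, 1, 0, 1, 0],         # 1 = favorable outcome
})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sex'])
priv, unpriv = [{'sex': 1}], [{'sex': 0}]

# Detect: mean difference in favorable outcomes before mitigation.
print(BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                               privileged_groups=priv).mean_difference())

# Mitigate: reweigh instances so group base rates match, then re-check.
dataset_rw = Reweighing(unprivileged_groups=unpriv,
                        privileged_groups=priv).fit_transform(dataset)
print(BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unpriv,
                               privileged_groups=priv).mean_difference())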
Top functions reviewed by kandi - BETA
- Performs a fairness check
- Compute accuracy
- Returns the performance measures
- Compute the number of negatives
- Transform a training dataset
- Convert to pandas dataframe
- Apply randomizing transformation
- Load preprocessing data
- Make a copy of this transformer
- Fit the model on the data
- Creates a copy of this transformer
- Transform a dataset
- Compute the numbers of TP, FP, TN, and FN
- Compute the number of genotyped genotypes
- Align two datasets
- Clean dataset
- Plots a heatmap of the predicted features
- Returns a pandas dataframe containing the bias for each observation
- Fit the model to kamishima format
- Predict for a given dataset
- Calculate gradient loss
- Fetch Adult census data
- Performs an MDSS bias scan
- Predict the model
- Compute the MDSS bias score
- Computes a pandas DataFrame with the observed data
- Predict for each feature
AIF360 Key Features
AIF360 Examples and Code Snippets
conda create --name aif360 python=3.5
conda activate aif360
git clone https://github.com/IBM/AIF360
conda install -c r r-essentials
git clone https://github.com/monindersingh/pydata2018_fairAI_models_tutorial.git
jupyter notebook pydata_datasets.
$ python run_experiments.py
$ python run_experiments.py adult_spd_1/
Community Discussions
Trending Discussions on AIF360
QUESTION
I would like to use the mdss_bias_scan function of aif360 for detecting the combination of variables that make up the privileged group and the non-privileged group.
When I try to import the function:
from aif360.sklearn.metrics import mdss_bias_scan
I get the following error:
ImportError: cannot import name 'mdss_bias_scan' from 'aif360.sklearn.metrics'
Can you help me to fix it?
...ANSWER
Answered 2022-Feb-09 at 10:54
The function mdss_bias_scan is not available in the version of aif360 you're using (v0.4.0). Here's the source code of the file metrics.py at tag v0.4.0. The function mdss_bias_scan was added via this commit, which has not yet been released. From the GitHub source, it seems that you should import it as:
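The answer's original snippet is not preserved on this page. The lines below are a hedged reconstruction, assuming the fix is to install AIF360 from the GitHub source (so the not-yet-released function becomes available) and then import it under the same path the question already uses:

# Hedged reconstruction, not the answer's original snippet.
# Assumption: installing from source provides the not-yet-released function, e.g.
#   pip install git+https://github.com/IBM/AIF360.git
from aif360.sklearn.metrics import mdss_bias_scan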
QUESTION
I am trying to run AI Fairness 360 metrics on scikit-learn (imbalanced-learn) algorithms, but I have a problem with my code. The problem is that when I apply imbalanced-learn algorithms like SMOTE, they return a numpy array, while the AI Fairness 360 preprocessing methods return a BinaryLabelDataset, and the metrics expect an object of the BinaryLabelDataset class. I am stuck on how to convert my arrays to a BinaryLabelDataset so that I can use the measures.
My preprocessing algorithm needs to receive X and Y, so I split the dataset into X and Y before calling the SMOTE method. Before using SMOTE the dataset was a StandardDataset and the metrics worked fine, but the problem appears after I use SMOTE because it converts the data to a numpy array.
I got the following error after running the code:
...ANSWER
Answered 2021-Sep-21 at 17:34
You are correct that the problem is with y_pred. You can concatenate it to X_test, transform it to a StandardDataset object, and then pass that one to the BinaryLabelDatasetMetric. The output object will have the methods for calculating different fairness metrics. I do not know what your dataset looks like, but here is a complete reproducible example that you can adapt to do this process for your dataset.
QUESTION
I'm trying to use IBM's aif360 library for debiasing. I'm working on a linear regression model and want to try out a metric that calculates the difference between the privileged and unprivileged groups. However, when this code is run I get the following error:
TypeError: difference() missing 1 required positional argument: 'metric_fun'
I've looked into the class for this function, and it refers to a metric_fun; I've also read the docs but didn't get any further. The function is missing an argument, but I don't know which argument it expects.
A short snippet of the code is:
...ANSWER
Answered 2020-Oct-17 at 20:28
Well, without knowing anything about the library you're using, the error message still seems pretty clear, especially since you only call difference once, like this:
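The question's snippet is not shown on this page. As a hedged sketch of what the error message points at: the first positional argument of aif360.sklearn.metrics.difference is the metric function itself (named metric_fun in that release). The toy data below is invented; the aif360.sklearn API expects the protected attribute ('gender' here) to live in the index of the target Series.

import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error
from aif360.sklearn.metrics import difference

# Hypothetical toy data with the protected attribute 'gender' in the index.
idx = pd.MultiIndex.from_tuples([(0, 1), (1, 1), (2, 0), (3, 0)],
                                names=['id', 'gender'])
y_true = pd.Series([3.0, 2.5, 4.0, 1.0], index=idx)
y_pred = np.array([2.8, 2.9, 3.1, 1.4])

# The first positional argument is the metric function itself
# ('metric_fun' in the release used by the question).
print(difference(mean_absolute_error, y_true, y_pred,
                 prot_attr='gender', priv_group=1))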
QUESTION
I want to calculate group fairness metrics using AIF360. This is a sample dataset and model, in which gender is the protected attribute and income is the target.
...ANSWER
Answered 2020-Oct-23 at 22:47
Remove the y_true= and y_pred= characters in the function call and retry. As one can see in the documentation, *y within the function prototype stands for an arbitrary number of positional arguments (see this post), so this is the most logical guess.
In other words, y_true and y_pred are NOT keyword arguments, so they cannot be passed by name. Arbitrary keyword arguments would be expressed as **kwargs within a function prototype.
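A hedged illustration of the fix: the question's actual variables are not shown on this page, so the toy y_test and y_pred below are assumptions, and the aif360.sklearn metrics expect the protected attribute 'gender' to live in the index of the target Series.

import pandas as pd
from aif360.sklearn.metrics import statistical_parity_difference, disparate_impact_ratio

# Hypothetical toy data with 'gender' in the index, as aif360.sklearn expects.
idx = pd.MultiIndex.from_tuples([(0, 1), (1, 1), (2, 0), (3, 0)],
                                names=['id', 'gender'])
y_test = pd.Series([1, 0, 1, 0], index=idx)   # true income labels
y_pred = pd.Series([1, 1, 0, 0], index=idx)   # model predictions

# Pass the labels positionally: the prototype collects them via *y, so the
# y_true=... / y_pred=... keywords raise the TypeError from the question.
print(statistical_parity_difference(y_test, y_pred, prot_attr='gender',
                                    priv_group=1, pos_label=1))
print(disparate_impact_ratio(y_test, y_pred, prot_attr='gender',
                             priv_group=1, pos_label=1))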
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install AIF360
Clone the latest version of this repository (the git clone command is shown in the snippets above). If you'd like to run the examples, download the datasets now and place them in their respective folders as described in aif360/data/README.md.