AIF360 | comprehensive set of fairness metrics | Artificial Intelligence library

 by Trusted-AI · Python · Version: 0.6.1 · License: Apache-2.0

kandi X-RAY | AIF360 Summary

AIF360 is a Python library typically used in Institutions, Learning, Education, Artificial Intelligence, and Deep Learning applications. AIF360 has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has medium support. You can install it with 'pip install aif360' or download it from GitHub or PyPI.

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R, and it includes fairness metrics, explainers, and bias mitigation algorithms.

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities, while the tutorials and other notebooks offer a deeper, data-scientist-oriented introduction. The complete API is also available. Because it is a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case; to help, we have created some guidance material that can be consulted.

We have developed the package with extensibility in mind. This library is still in development, and we encourage the contribution of your metrics, explainers, and debiasing algorithms.
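To give a concrete feel for the metrics family, here is a minimal from-scratch sketch of one such fairness metric, statistical parity difference, on toy data. The function and variable names here are illustrative assumptions, not aif360's actual API:

```python
def statistical_parity_difference(y_pred, group):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged).

    group: 0 marks the unprivileged group, 1 the privileged group.
    """
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# toy predictions and protected-attribute values, made up for illustration
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 1, 1, 1, 1, 1]

spd = statistical_parity_difference(y_pred, group)
# a value near 0 indicates parity; a negative value favors the privileged group
```

aif360 wraps computations like this in dataset and metric classes, but the underlying quantity is this simple rate difference.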

            Support

              AIF360 has a medium-activity ecosystem.
              It has 2,048 stars and 688 forks. There are 89 watchers for this library.
              There were 2 major releases in the last 12 months.
              There are 141 open issues, and 87 have been closed. On average, issues are closed in 85 days. There are 29 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of AIF360 is 0.6.1.

            Quality

              AIF360 has 0 bugs and 0 code smells.

            Security

              AIF360 has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              AIF360 code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              AIF360 is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the fewest restrictions, and you can use them in most projects.

            Reuse

              AIF360 releases are available to install and integrate.
              A deployable package is available on PyPI.
              A build file is available, so you can build the component from source.
              Installation instructions, examples, and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed AIF360 and identified the functions below as its top functions. This is intended to give you instant insight into the functionality AIF360 implements, and to help you decide if it suits your requirements.
            • Performs a fairness check
            • Compute accuracy
            • Returns the performance measures
            • Compute the number of negatives
            • Transform a training dataset
            • Convert to pandas dataframe
            • Apply randomizing transformation
            • Load preprocessing data
            • Make a copy of this transformer
            • Fit the model on the data
            • Creates a copy of this transformer
            • Transform a dataset
            • Compute the number of TP, FP, and FN
            • Compute the number of genotypes
            • Align two datasets
            • Clean dataset
            • Plots a heatmap of the predicted features
            • Returns a pandas dataframe containing the bias for each observation
            • Fit the model to kamishima format
            • Predict for a given dataset
            • Calculate gradient loss
            • Fetch Adult census data
            • Performs an MDSS bias scan
            • Predict the model
            • Compute the MDSS bias score
            • Computes a pandas DataFrame with the observed data
            • Predict for each feature

            AIF360 Key Features

            No Key Features are available at this moment for AIF360.

            AIF360 Examples and Code Snippets

            conda create --name aif360 python=3.5
            conda activate aif360
            
            git clone https://github.com/IBM/AIF360
            
            conda install -c r r-essentials
            
            git clone https://github.com/monindersingh/pydata2018_fairAI_models_tutorial.git
            
            jupyter notebook pydata_datasets.  
            $ python run_experiments.py 
            
            $ python run_experiments.py adult_spd_1/
              
            Post-Hoc Methods for Debiasing Neural Networks, Requirements
            Python · 1 line of code · License: Permissive (Apache-2.0)
            $ pip install -r requirements.txt
              

            Community Discussions

            QUESTION

            Import Error: cannot import 'mdss_bias_scan' from 'aif360.sklearn.metrics'
            Asked 2022-Feb-09 at 10:54

            I would like to use the mdss_bias_scan function of aif360 for detecting the combination of variables that make up the privileged group and the non-privileged group.

            When I try to import the function: from aif360.sklearn.metrics import mdss_bias_scan

            I get the following error: Import error: cannot import 'mdss_bias_scan' from 'aif360.sklearn.metrics'.

            Can you help me to fix it?

            ...

            ANSWER

            Answered 2022-Feb-09 at 10:54
            Update

            The function mdss_bias_scan is not available in the version of aif360 you're using (v0.4.0).

            Here's the source code of the file metrics.py at tag v0.4.0.

            The function mdss_bias_scan was added via this commit which has not yet been released.

            From the GitHub Source, it seems that you should import it as:

            Source https://stackoverflow.com/questions/71046666
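A defensive pattern for this kind of version mismatch — a sketch, not part of the original answer — is to guard the import so your code degrades gracefully when the installed aif360 predates the feature:

```python
# Guarded import: mdss_bias_scan only exists in aif360 releases newer than
# v0.4.0, so fall back cleanly instead of assuming it is present.
try:
    from aif360.sklearn.metrics import mdss_bias_scan
    HAS_MDSS = True
except ImportError:  # aif360 missing, or installed version is too old
    mdss_bias_scan = None
    HAS_MDSS = False
```

Code that needs the scan can then check `HAS_MDSS` and prompt the user to upgrade (or install from the GitHub source) when it is False.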

            QUESTION

            A problem in using AIF360 metrics in my code
            Asked 2022-Jan-29 at 15:28

            I am trying to run AI Fairness 360 metrics on scikit-learn (imbalanced-learn) algorithms, but I have a problem with my code. When I apply imbalanced-learn algorithms like SMOTE, they return a numpy array, while AI Fairness 360 preprocessing methods return a BinaryLabelDataset, and the metrics expect an object of the BinaryLabelDataset class. I am stuck on how to convert my arrays to a BinaryLabelDataset so that I can use the measures.

            My preprocessing algorithm needs to receive X and y, so I split the dataset into X and y before calling the SMOTE method. Before using SMOTE the dataset was a StandardDataset and it was fine to use the metrics; the problem arose after I used SMOTE, because it converts the data to a numpy array.

            I got the following error after running the code :

            ...

            ANSWER

            Answered 2021-Sep-21 at 17:34

            You are correct that the problem is with y_pred. You can concatenate it to X_test, transform the result into a StandardDataset object, and then pass that to BinaryLabelDatasetMetric. The output object will have the methods for calculating different fairness metrics. I do not know what your dataset looks like, but here is a complete reproducible example that you can adapt to apply this process to your dataset.

            Source https://stackoverflow.com/questions/69082773
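The first step of that answer — appending the predictions as a label column onto the resampled features — can be sketched like this. The toy arrays are made up for illustration, and the subsequent aif360 StandardDataset / BinaryLabelDatasetMetric wrapping is deliberately omitted so the sketch stays self-contained:

```python
import numpy as np

# hypothetical resampled test features (e.g. output of SMOTE) and predictions
X_test = np.array([[25.0, 1.0],
                   [32.0, 0.0],
                   [47.0, 1.0],
                   [51.0, 0.0]])
y_pred = np.array([1.0, 0.0, 1.0, 0.0])

# append predictions as the final (label) column; this combined array is what
# you would feed into a pandas DataFrame and then a StandardDataset
combined = np.column_stack([X_test, y_pred])
```

From `combined` you can build a DataFrame with named feature, protected-attribute, and label columns, which is the shape aif360's dataset constructors expect.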

            QUESTION

            TypeError: ratio() missing 1 required positional argument: 'metric_fun'
            Asked 2020-Nov-24 at 20:22

            I'm trying to use IBM's aif360 library for debiasing. I'm working on a linear regression model and want to try out a metric to calculate the difference between the privileged and unprivileged groups. However, when this code is run I get the following error:

            TypeError: difference() missing 1 required positional argument: 'metric_fun'

            I've looked into the class for this function, and it refers to a metric_fun; I also read the docs but didn't get any further. The function is missing an argument, but I don't know which argument it expects.

            A short snippet of the code is:

            ...

            ANSWER

            Answered 2020-Oct-17 at 20:28

            Well, without knowing anything about the library you're using, the error message still seems pretty clear, especially since you only call difference once, like this:

            Source https://stackoverflow.com/questions/64406879
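A toy reproduction of the error, using hypothetical stand-ins rather than aif360's real classes, makes the fix concrete: difference() takes the metric *function* as a required positional argument, so omitting it raises exactly this TypeError.

```python
def difference(metric_fun):
    """Metric value for the unprivileged group minus the privileged group."""
    return metric_fun(privileged=False) - metric_fun(privileged=True)

def base_rate(privileged=None):
    # stand-in metric with hard-coded per-group rates, for illustration only
    return 0.4 if privileged else 0.6

try:
    difference()  # reproduces: missing 1 required positional argument
except TypeError as exc:
    msg = str(exc)

gap = difference(base_rate)  # the fix: pass the metric function itself
```

The expected argument, in other words, is the group-level metric you want differenced, not the metric's value.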

            QUESTION

            Calculate group fairness metrics with AIF360
            Asked 2020-Oct-26 at 18:36

            I want to calculate group fairness metrics using AIF360. This is a sample dataset and model, in which gender is the protected attribute and income is the target.

            ...

            ANSWER

            Answered 2020-Oct-23 at 22:47

            Remove the y_true= and y_pred= names in the function call and retry. As one can see in the documentation, *y in the function prototype collects an arbitrary number of positional arguments (see this post), so this is the most logical guess.

            In other words, y_true and y_pred are NOT keyword arguments, so they cannot be passed by name. Parameters captured by *y are positional-only; arbitrary keyword arguments would instead appear as **kwargs in the function prototype.

            Source https://stackoverflow.com/questions/64506977
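A minimal demonstration of the answer's point, using a toy function whose signature merely mirrors the documented `*y` prototype (the body is a placeholder, not aif360's implementation): arguments collected by `*y` are positional-only, so passing them as `y_true=`/`y_pred=` keywords fails.

```python
def spd(*y, prot_attr=None, priv_group=1):
    # placeholder body; only the calling convention matters for this demo
    return len(y)

# positional call: works, *y collects both label arrays
n_args = spd([1, 0, 1], [1, 1, 0], prot_attr="gender")

# keyword call: TypeError, because y_true/y_pred are not parameter names
try:
    spd(y_true=[1, 0, 1], y_pred=[1, 1, 0])
    keyword_call_failed = False
except TypeError:
    keyword_call_failed = True
```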

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install AIF360

            To install the latest stable version from PyPI, run pip install aif360.
            Alternatively, clone the latest version of this repository. If you'd like to run the examples, download the datasets now and place them in their respective folders as described in aif360/data/README.md.

            Support

            • Optimized Preprocessing (Calmon et al., 2017)
            • Disparate Impact Remover (Feldman et al., 2015)
            • Equalized Odds Postprocessing (Hardt et al., 2016)
            • Reweighing (Kamiran and Calders, 2012)
            • Reject Option Classification (Kamiran et al., 2012)
            • Prejudice Remover Regularizer (Kamishima et al., 2012)
            • Calibrated Equalized Odds Postprocessing (Pleiss et al., 2017)
            • Learning Fair Representations (Zemel et al., 2013)
            • Adversarial Debiasing (Zhang et al., 2018)
            • Meta-Algorithm for Fair Classification (Celis et al., 2018)
            • Rich Subgroup Fairness (Kearns, Neel, Roth, and Wu, 2018)
            • Exponentiated Gradient Reduction (Agarwal et al., 2018)
            • Grid Search Reduction (Agarwal et al., 2018; Agarwal et al., 2019)
            • Fair Data Adaptation (Plečko and Meinshausen, 2020; Plečko et al., 2021)
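As a taste of how one of these algorithms works, here is a hedged from-scratch sketch of Reweighing (Kamiran and Calders, 2012) — not aif360's Reweighing class — which assigns each example the weight P(group)·P(label) / P(group, label), so that under the weights the label is statistically independent of the protected attribute:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-example weight = expected / observed (group, label) frequency."""
    n = len(groups)
    g_cnt = Counter(groups)                # group counts
    y_cnt = Counter(labels)                # label counts
    gy_cnt = Counter(zip(groups, labels))  # joint (group, label) counts
    return [
        (g_cnt[g] * y_cnt[y]) / (n * gy_cnt[(g, y)])
        for g, y in zip(groups, labels)
    ]

# toy data: group 0 has a higher favorable-label rate before reweighing
groups = [0, 0, 0, 1, 1, 1, 1, 1]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
weights = reweighing_weights(groups, labels)
```

After reweighing, the weighted favorable-label rate is equal across groups, which is exactly the pre-processing effect aif360's implementation provides for downstream classifiers that accept sample weights.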

            Install
          • PyPI

            pip install aif360

          • CLONE
          • HTTPS

            https://github.com/Trusted-AI/AIF360.git

          • CLI

            gh repo clone Trusted-AI/AIF360

          • sshUrl

            git@github.com:Trusted-AI/AIF360.git
