Explainability and Interpretability

by Ashok Balasubramanian · Updated: Dec 21, 2021

Would you trust AI with a decision that could trigger a global conflict? Reuters reported that Deputy Secretary of Defense Kathleen Hicks was briefed on new software, created by US military commanders in the Pacific, that can predict Chinese reactions to US actions in the region. The tool analyzes data going back to early 2020 and predicts responses to activities such as congressional visits to Taiwan, arms sales to allies in the region, or several US ships sailing through the Taiwan Strait.

It is heartening to see AI mature into strategic roles, especially against the backdrop of Zillow's iBuying algorithms causing a loss of more than $300M a few weeks ago, costing over 2,000 jobs and leaving an unsold inventory of 7,000 homes. How do we reconcile the two? The answer lies in strategic oversight. Algorithmic decisions reflect data quality, the rigor of training, and any biases introduced along the way, among other factors. Both situations speak to the maturity of AI as a technology and the need for better design and review.

With AI becoming almost a black box to most engineers, given the sheer number of parameters and nodes, Explainable AI offers a set of tools and frameworks to help understand the predictions made by machine learning models. Explainability shows how much each parameter and node contributes to the final decision, which helps debug and improve model performance and understand the model's behavior. Interpretability is the extent to which cause and effect can be observed within a system, i.e., the extent to which you can predict what will happen given a change in input or algorithmic parameters. Together, they help you understand how a model arrived at a decision and how each step contributed to it.

Try hundreds of Explainability and Interpretability solutions on kandi to make your next big decision.
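To make "how much each input contributes to the decision" concrete, here is a minimal sketch of permutation importance, one of the simplest model-agnostic explainability techniques: shuffle one feature at a time and measure how much the model's predictions move. All names (`model`, `permutation_importance`, the housing features) are illustrative, not the API of any library listed below.

```python
import random

# Toy "model": a hand-written linear rule, standing in for any black box.
# The zipcode coefficient is zero, so it should come out unimportant.
def model(area, rooms, zipcode):
    return 3.0 * area + 1.0 * rooms + 0.0 * zipcode

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it changes predictions."""
    rng = random.Random(seed)
    baseline = [model(*row) for row in rows]
    importances = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [row[j] for row in rows]
            rng.shuffle(column)  # break the feature/target link for column j
            shuffled = [row[:j] + (column[i],) + row[j + 1:]
                        for i, row in enumerate(rows)]
            preds = [model(*row) for row in shuffled]
            total += sum(abs(p - b) for p, b in zip(preds, baseline)) / len(rows)
        importances.append(total / n_repeats)
    return importances

rows = [(50.0, 2.0, 10115.0), (80.0, 3.0, 10117.0), (120.0, 5.0, 10119.0)]
imp = permutation_importance(model, rows)
# area dominates, zipcode scores exactly 0
```

The libraries below refine this idea with far stronger theory (Shapley values, local surrogates) and tooling, but the underlying question is the same: how does the output change when an input does?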

shap by slundberg
Jupyter Notebook · 19,415 stars · Version v0.41.0 · License: Permissive (MIT)
A game theoretic approach to explain the output of any machine learning model.
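The game-theoretic idea behind shap is the Shapley value: a feature's contribution averaged over every order in which features could be added. A brute-force sketch of that definition is below; it is feasible only for a handful of features, and shap itself uses much faster approximations with a different API. The `payoff` function is a hypothetical additive game, not a real model.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: value_fn(S) is the payoff for feature subset S."""
    n = n_features
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Weight of this coalition in the Shapley formula
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                total += weight * (value_fn(frozenset(S) | {i})
                                   - value_fn(frozenset(S)))
        phi.append(total)
    return phi

# Hypothetical additive payoff: feature 0 contributes 2, feature 1 contributes 5.
def payoff(S):
    return 2.0 * (0 in S) + 5.0 * (1 in S)

phi = shapley_values(payoff, 2)  # additive game -> phi == [2.0, 5.0]
```

For an additive game the Shapley values recover each feature's own contribution exactly, which is why they make such a natural attribution scheme.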

lime by marcotcr
JavaScript · 10,684 stars · Version 0.2.0.0 · License: Permissive (BSD-2-Clause)
Lime: Explaining the predictions of any machine learning classifier
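LIME's core move is to sample points near the instance being explained and fit a simple surrogate model there, so the black box is explained locally by something readable. The one-feature sketch below (with an assumed `blackbox` function) shows the idea; the real lime package handles tabular, text, and image data with its own explainer classes.

```python
import random

def blackbox(x):
    # Nonlinear "model" we want to explain locally.
    return x * x

def local_linear_explanation(f, x0, radius=0.1, n_samples=200, seed=0):
    """Fit y = a*x + b on samples near x0 by ordinary least squares;
    the slope a is the local explanation of f around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

slope = local_linear_explanation(blackbox, x0=3.0)
# Near x = 3, x**2 behaves like a line of slope ~6.
```

The surrogate is only valid near `x0`; a different anchor point yields a different explanation, which is exactly the "local" in Local Interpretable Model-agnostic Explanations.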

interpret by interpretml
C++ · 5,539 stars · Version v0.4.2 · License: Permissive (MIT)
Fit interpretable models. Explain blackbox machine learning.
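The "fit interpretable models" half refers to glass-box models whose entire decision logic can be read off directly. As a deliberately tiny stand-in for that idea (interpret's own glass-box models, such as Explainable Boosting Machines, are far more capable), here is a decision stump whose fitted model is literally one human-readable rule:

```python
def fit_stump(xs, ys):
    """Find the threshold on a single feature that minimizes
    squared error when predicting the mean on each side."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    # The whole fitted model is one readable rule:
    return f"if x <= {t}: predict {ml}; else: predict {mr}"

rule = fit_stump([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
```

A glass-box model needs no post-hoc explainer: the model is its own explanation, which is the design philosophy interpret builds on.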

lucid by tensorflow
Jupyter Notebook · 4,535 stars · Version v0.3.10 · License: Permissive (Apache-2.0)
A collection of infrastructure and tools for research in neural network interpretability.

shapash by MAIF
Jupyter Notebook · 2,187 stars · Version v2.3.4 · License: Permissive (Apache-2.0)
🔅 Shapash makes Machine Learning models transparent and understandable by everyone

explainerdashboard by oegedijk
Python · 1,834 stars · Version v0.4.2.2 · License: Permissive (MIT)
Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.

DrWhy by ModelOriented
R · 628 stars · Version Current · License: No License
DrWhy is the collection of tools for eXplainable AI (XAI). It's based on shared principles and simple grammar for exploration, explanation and visualisation of predictive models.
