captum | Model interpretability and understanding for PyTorch | Machine Learning library

by pytorch | Python | Version: 0.7.0 | License: BSD-3-Clause

kandi X-RAY | captum Summary

captum is a Python library typically used in Artificial Intelligence, Machine Learning, and PyTorch applications. captum has no reported bugs or vulnerabilities, has a build file available, has a Permissive License, and has medium support. You can install it with 'pip install captum' or download it from GitHub or PyPI.

Captum provides a web interface called Insights for easy visualization and access to a number of its interpretability algorithms. To analyze a sample model on CIFAR10 via Captum Insights, run the Insights example and navigate to the URL specified in the output. To build Insights you will need Node >= 8.x and Yarn >= 1.5. To build and launch from a checkout in a conda environment, run the build commands; a sketch of both sets of commands is shown below.
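The exact commands were not captured in this listing; based on Captum's documentation they are likely along these lines (treat the module path and the BUILD_INSIGHTS flag as assumptions):

# analyze the sample CIFAR10 model with Captum Insights (module path assumed)
python -m captum.insights.example

# build and launch Insights from a checkout in a conda environment (assumed)
conda install -c conda-forge yarn
BUILD_INSIGHTS=1 python setup.py develop
python captum/insights/example.py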

kandi Support

captum has a medium active ecosystem.
It has 3873 stars, 419 forks, and 216 watchers.
There was 1 major release in the last 12 months.
There are 134 open issues and 307 closed issues. On average, issues are closed in 61 days. There are 29 open pull requests and 0 closed requests.
It has a neutral sentiment in the developer community.
The latest version of captum is 0.7.0.

kandi Quality

              captum has no bugs reported.

kandi Security

              captum has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

kandi License

              captum is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

kandi Reuse

              captum releases are available to install and integrate.
              Deployable package is available in PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.

            Top functions reviewed by kandi - BETA

kandi has reviewed captum and discovered the below as its top functions. This is intended to give you an instant insight into the functionality captum implements, and to help you decide if it suits your requirements.
            • Plot a timeseries attribute
            • Calculate the threshold for a given percentile
            • Calculate the sum of the attributes
            • Normalize attr
            • Compute the gradients of the given layer
            • Forward a forward layer on inputs
            • Helper method to extract device IDs from saved_layer
            • Train a linear model
            • Construct model parameters
            • Generate tutorial
• Returns the self-influence of the input dataset
• Returns the self-influence for the input dataset
            • Compute the influence of inputs
            • Returns an HTML representation of the given examples
            • Linear search
            • Generate dataset activations
            • Train a sklearn model
            • Compute the influence route
            • Computes the influence route
• Calculate the k-most influence
• Visualize an image attribute
• Returns k-most-influential results
            • Set the projection for tracincp
            • Binary search
            • Attaches from inputs to target
• Returns the k-most influential results

            captum Key Features

            No Key Features are available at this moment for captum.

            captum Examples and Code Snippets

LatinIRB, DESCRIPTION
Ruby | Lines of Code: 2 | License: No License
            AFIRST.active_voice_indicative_mood_present_tense_first_person_singular_number #=> amō
            
            AFIRST.actindic
              
            pytorch_geometric - captum explainability
Python | Lines of Code: 69 | License: Permissive (MIT License)
            import os.path as osp
            
            import matplotlib.pyplot as plt
            import torch
            import torch.nn.functional as F
            from captum.attr import IntegratedGradients
            
            import torch_geometric.transforms as T
            from torch_geometric.datasets import Planetoid
            from torch_geometri  
            Syntax Error when calling the name of a model's layer in captum
Python | Lines of Code: 2 | License: Strong Copyleft (CC BY-SA 4.0)
model.dl.backbone.layer4[2].conv3
            
            No Module named numpy in conda env
Python | Lines of Code: 4 | License: Strong Copyleft (CC BY-SA 4.0)
            conda remove numpy
            
            pip install numpy
            

            Community Discussions

            QUESTION

            Using captum with nn.Embedding getting RuntimeError
            Asked 2021-Oct-27 at 13:22

I am using the captum library and getting the following error. Here is the complete code to reproduce the error.

            RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

            ...

            ANSWER

            Answered 2021-Oct-27 at 13:22

            I resolved the issue with LayerIntegratedGradients.

            Here is the link to read more to know other possible solutions. https://captum.ai/tutorials/IMDB_TorchText_Interpret

This uses an instance of LayerIntegratedGradients with the model's forward function and the embedding layer, as in the example given in the link.

Here is sample code that uses LayerIntegratedGradients with nn.Embedding.
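The answer's original snippet is not reproduced in this listing; below is a minimal sketch of the idea, assuming a toy text classifier with an nn.Embedding layer (the model, sizes, and baseline choice are illustrative assumptions):

import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

# Hypothetical toy classifier with an nn.Embedding layer (illustration only).
class TextClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # Average token embeddings, then classify.
        return self.fc(self.embedding(token_ids).mean(dim=1))

model = TextClassifier().eval()
token_ids = torch.randint(0, 100, (1, 8))   # integer token ids are not differentiable
baseline_ids = torch.zeros_like(token_ids)  # e.g. an all-padding baseline

# Attribute with respect to the embedding layer rather than the raw integer
# inputs, which is what avoids the "differentiated Tensors" RuntimeError.
lig = LayerIntegratedGradients(model, model.embedding)
attributions, delta = lig.attribute(
    token_ids,
    baselines=baseline_ids,
    target=0,
    return_convergence_delta=True,
)
print(attributions.shape, delta)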

            Source https://stackoverflow.com/questions/69664738

            QUESTION

            Syntax Error when calling the name of a model's layer in captum
            Asked 2021-Aug-26 at 14:57

I'm trying to use the GradCAM feature of captum for PyTorch. Previously, I asked how to find the names of layers in PyTorch (which is done using model.named_modules()). However, since getting the names of the modules (my model is named 'model'), I have tried to use one with LayerGradCam from captum and am receiving a syntax error; it always seems to happen on the number within the layer name.

            I import the function with:

            ...

            ANSWER

            Answered 2021-Aug-26 at 14:57

Array or list indexing is done using the [] syntax, not the . (dot) syntax.
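For illustration, a minimal sketch of the fix with LayerGradCam; the DeepLabV3 layer path and the forward wrapper are assumptions, not the asker's exact code:

import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from captum.attr import LayerGradCam

model = deeplabv3_resnet50().eval()

# Numeric children must be accessed with bracket indexing, not dot syntax:
# model.backbone.layer4.2.conv3 is a SyntaxError; use [2] instead.
target_layer = model.backbone.layer4[2].conv3

# The segmentation model returns a dict, so wrap forward to expose a plain
# per-class score (the spatial sum used here is an arbitrary choice).
def forward_func(x):
    return model(x)["out"].sum(dim=(2, 3))

grad_cam = LayerGradCam(forward_func, target_layer)
attributions = grad_cam.attribute(torch.randn(1, 3, 64, 64), target=0)
print(attributions.shape)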

            Source https://stackoverflow.com/questions/68940728

            QUESTION

            How to find the name of layers in preloaded torchvision models?
            Asked 2021-Aug-25 at 14:39

I'm trying to use GradCAM with a DeepLabV3 ResNet50 model preloaded from torchvision, but in Captum I need to specify the layer (of type nn.Module). I can't find any documentation for how this is done; does anyone have any ideas on how to get the name of the final ReLU layer?

            Thanks in advance!

            ...

            ANSWER

            Answered 2021-Aug-25 at 14:39

            You can have a look at its representation and get an idea of where it's located by simply printing it:
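The answer's code is not captured in this listing; a minimal sketch of the idea, assuming a torchvision DeepLabV3 ResNet50 model:

import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50()

# Printing the model shows its nested structure, including the ReLU layers.
print(model)

# Alternatively, walk named_modules() to find ReLU layers by name.
last_relu_name = None
for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        last_relu_name = name
print(last_relu_name)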

            Source https://stackoverflow.com/questions/68924829

            QUESTION

            Using Captum with Pytorch Lightning?
            Asked 2020-Oct-08 at 17:04

So I tried to use Captum with PyTorch Lightning. I am having issues when passing the module to Captum, since it seems to do some odd reshaping of the tensors. For example, in the minimal example below, the Lightning code works fine on its own. But when I use IntegratedGradients with "n_step>=1" I get an issue. The code of the LightningModule is not that important, I would say; I am more interested in the code line at the very bottom.

            Does anyone know how to work around this?

            ...

            ANSWER

            Answered 2020-Oct-08 at 17:04

The solution was to wrap the forward function. Make sure that the shape going into model.forward() is correct!
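The answer's code is not reproduced in this listing; below is a minimal sketch of the wrapping idea, assuming a hypothetical LightningModule (names, sizes, and shapes are illustrative):

import torch
import torch.nn as nn
import pytorch_lightning as pl
from captum.attr import IntegratedGradients

# Hypothetical minimal LightningModule for illustration only.
class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

lit_model = LitClassifier().eval()

# Wrap forward so you control exactly what shape Captum feeds the model;
# IntegratedGradients internally expands the batch dimension by n_steps.
def wrapped_forward(x):
    # reshape/squeeze here if your LightningModule expects a different shape
    return lit_model(x)

ig = IntegratedGradients(wrapped_forward)
inputs = torch.randn(4, 10)
attributions = ig.attribute(inputs, target=0, n_steps=50)
print(attributions.shape)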

            Source https://stackoverflow.com/questions/64259774

            QUESTION

            Problems with CSS multicolumn with tiled background image
            Asked 2020-Jun-12 at 01:16

            I'd like to make a carousel-type scrolling horizontal card view using CSS multicol with column-width, and use a repeating background (such as a white background with a black border) on the element, but I'm having problems.

            The first problem is the background does not tile horizontally past the page width. If I set a width on the multicol element the background repeats to that extent, but that interferes with the natural width.

            The second problem is the column widths change when I horizontally resize the window. I can tell it's trying to tile the columns in a pretty way but I need the widths not to do that or my background gets out of sync.

            ...

            ANSWER

            Answered 2020-Jun-12 at 01:16

            Although there are still bugs I'm tracking down in Safari involving the CSS --variables, I feel I have been able to find an answer to the question! Try it for yourself.

            Source https://stackoverflow.com/questions/62335051

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install captum

The latest release of Captum is easily installed either via Anaconda (recommended) or via pip; example commands follow the list below.
            Python >= 3.6
            PyTorch >= 1.2
            pip install -e .[insights]: Also installs all packages necessary for running Captum Insights.
            pip install -e .[dev]: Also installs all tools necessary for development (testing, linting, docs building; see Contributing below).
            pip install -e .[tutorials]: Also installs all packages necessary for running the tutorial notebooks.
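For reference, a sketch of the basic install commands the sentence above refers to (the conda channel is an assumption based on Captum's documentation):

# via Anaconda (channel assumed)
conda install captum -c pytorch

# via pip
pip install captum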
Captum helps you interpret and understand predictions of PyTorch models by exploring the features that contribute to a prediction the model makes. It also helps you understand which neurons and layers are important for model predictions. Let's apply some of those algorithms to a toy model created for demonstration purposes. For simplicity, we will use a small architecture, but users are welcome to use any PyTorch model of their choice. Let's create an instance of our model and set it to eval mode.

Next, we need to define simple input and baseline tensors. Baselines belong to the input space and often carry no predictive signal. A zero tensor can serve as a baseline for many tasks. Some interpretability algorithms, such as Integrated Gradients, DeepLift and GradientShap, are designed to attribute the change between the input and the baseline to a predicted class or a value that the neural network outputs. We will apply model interpretability algorithms on the network mentioned above in order to understand the importance of individual neurons/layers and the parts of the input that play an important role in the final prediction. To make computations deterministic, let's fix random seeds. Let's define our input and baseline tensors. Baselines are used in some interpretability algorithms such as IntegratedGradients, DeepLift, GradientShap, NeuronConductance, LayerConductance, InternalInfluence and NeuronIntegratedGradients.

Next we will use the IntegratedGradients algorithm to assign attribution scores to each input feature with respect to the first target output. The algorithm outputs an attribution score for each input element and a convergence delta. The lower the absolute value of the convergence delta, the better the approximation. If we choose not to return the delta, we can simply omit the return_convergence_delta input argument. The absolute value of the returned deltas can be interpreted as an approximation error for each input sample. It can also serve as a proxy for how accurate the integral approximation is for the given inputs and baselines. If the approximation error is large, we can try a larger number of integral approximation steps by setting n_steps to a larger value. Not all algorithms return an approximation error; those which do compute it based on the completeness property of the algorithm.

A positive attribution score means that the input in that particular position contributed positively to the final prediction, and a negative score means the opposite. The magnitude of the attribution score signifies the strength of the contribution. A zero attribution score means no contribution from that particular feature.

Similarly, we can apply GradientShap, DeepLift and other attribution algorithms to the model. GradientShap first chooses a random baseline from the baselines' distribution, then adds gaussian noise with std=0.09 to each input example n_samples times. Afterwards, it chooses a random point between each example-baseline pair and computes the gradients with respect to the target class (in this case target=0). The resulting attribution is the mean of gradients * (inputs - baselines).
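The code snippets this walkthrough refers to are not reproduced in this listing; the following is a condensed sketch of the workflow under stated assumptions (the toy architecture, tensor sizes, and baseline distribution are illustrative, not necessarily the exact ones from Captum's documentation):

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, GradientShap

# Illustrative toy architecture (the exact model in the Captum docs may differ).
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(3, 3)
        self.relu = nn.ReLU()
        self.lin2 = nn.Linear(3, 2)

    def forward(self, x):
        return self.lin2(self.relu(self.lin1(x)))

model = ToyModel()
model.eval()

# Fix random seeds so the attributions are deterministic.
torch.manual_seed(123)

# Inputs and an all-zero baseline from the same input space.
inputs = torch.rand(2, 3)
baselines = torch.zeros(2, 3)

# Integrated Gradients: attribute the first target output (target=0) and
# return the convergence delta as an approximation-error proxy.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines, target=0, return_convergence_delta=True
)
print("IG attributions:", attributions)
print("Convergence delta:", delta)

# GradientShap: sample baselines from a distribution, add gaussian noise
# (stdevs=0.09) to each input n_samples times, and average
# gradients * (inputs - baselines).
gs = GradientShap(model)
baseline_dist = torch.randn(10, 3) * 0.001
gs_attr, gs_delta = gs.attribute(
    inputs,
    baselines=baseline_dist,
    stdevs=0.09,
    n_samples=4,
    target=0,
    return_convergence_delta=True,
)
print("GradientShap attributions:", gs_attr)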

            Support

            See the CONTRIBUTING file for how to help out.
            Find more information at:

            Install
          • PyPI

            pip install captum

          • CLONE
          • HTTPS

            https://github.com/pytorch/captum.git

          • CLI

            gh repo clone pytorch/captum

• SSH

            git@github.com:pytorch/captum.git
