hierarchical-attention-networks | Document classification with Hierarchical Attention Networks | Natural Language Processing library

by ematvey | Python | Version: Current | License: MIT

kandi X-RAY | hierarchical-attention-networks Summary

hierarchical-attention-networks is a Python library typically used in Artificial Intelligence, Natural Language Processing, Deep Learning, and TensorFlow applications. It has no reported bugs or vulnerabilities, a build file available, a permissive license, and low support. You can download it from GitHub.

Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: the project is currently unmaintained; issues will probably not be addressed.

Support

              hierarchical-attention-networks has a low active ecosystem.
              It has 443 star(s) with 148 fork(s). There are 29 watchers for this library.
              It had no major release in the last 6 months.
There are 14 open issues and 11 have been closed. On average, issues are closed in 61 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of hierarchical-attention-networks is current.

Quality

              hierarchical-attention-networks has 0 bugs and 0 code smells.

Security

              hierarchical-attention-networks has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              hierarchical-attention-networks code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              hierarchical-attention-networks is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              hierarchical-attention-networks releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              hierarchical-attention-networks saves you 260 person hours of effort in developing the same functionality from scratch.
              It has 630 lines of code, 40 functions and 8 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed hierarchical-attention-networks and discovered the following top functions. This is intended to give you an instant insight into the functionality hierarchical-attention-networks implements, and to help you decide if it suits your requirements.
            • Train TensorBoard
            • Reads a dataset
            • Calculate batch of documents
            • Get feed data
            • Generator for batch_iterator
            • Evaluate the prediction
            • Read the devset
            • Read a train dataset
            • Make training data
            • Build a word frequency distribution
            • Build a vocabulary
            • Read review files
            • Evaluate the model
            • Reads a pickle file
            • Reads a train dataset
            • Return a dictionary containing the labels

            hierarchical-attention-networks Key Features

            No Key Features are available at this moment for hierarchical-attention-networks.

            hierarchical-attention-networks Examples and Code Snippets

            No Code Snippets are available at this moment for hierarchical-attention-networks.

            Community Discussions

            QUESTION

            Should RNN attention weights over variable length sequences be re-normalized to "mask" the effects of zero-padding?
            Asked 2019-Apr-15 at 21:29

To be clear, I am referring to "self-attention" of the type described in Hierarchical Attention Networks for Document Classification and implemented in many places, for example: here. I am not referring to the seq2seq type of attention used in encoder-decoder models (i.e. Bahdanau), although my question might apply to that as well... I am just not as familiar with it.

            Self-attention basically just computes a weighted average of RNN hidden states (a generalization of mean-pooling, i.e. un-weighted average). When there are variable length sequences in the same batch, they will typically be zero-padded to the length of the longest sequence in the batch (if using dynamic RNN). When the attention weights are computed for each sequence, the final step is a softmax, so the attention weights sum to 1.
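For concreteness, here is a minimal NumPy sketch of this kind of self-attention pooling. The dot-product scoring against a learned context vector is a simplification for illustration, not the exact code from the linked implementation:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical example: 5 time steps, hidden size 4, where only the
# first 2 steps are real and the remaining 3 are zero padding.
hidden = np.zeros((5, 4))
hidden[0] = [0.5, -0.1, 0.3, 0.2]
hidden[1] = [0.1, 0.4, -0.2, 0.6]

context = np.array([0.25, -0.5, 0.75, 0.1])  # learned context/query vector

scores = hidden @ context    # one scalar score per time step
weights = softmax(scores)    # weights sum to 1 over ALL 5 steps, padding included
pooled = weights @ hidden    # weighted average of the hidden states

print(weights)   # padded positions still receive non-zero weight
print(pooled)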

            However, in every attention implementation I have seen, there is no care taken to mask out, or otherwise cancel, the effects of the zero-padding on the attention weights. This seems wrong to me, but I fear maybe I am missing something since nobody else seems bothered by this.

            For example, consider a sequence of length 2, zero-padded to length 5. Ultimately this leads to the attention weights being computed as the softmax of a similarly 0-padded vector, e.g.:

            weights = softmax([0.1, 0.2, 0, 0, 0]) = [0.20, 0.23, 0.19, 0.19, 0.19]

            and because exp(0)=1, the zero-padding in effect "waters down" the attention weights. This can be easily fixed, after the softmax operation, by multiplying the weights with a binary mask, i.e.

            mask = [1, 1, 0, 0, 0]

            and then re-normalizing the weights to sum to 1. Which would result in:

            weights = [0.48, 0.52, 0, 0, 0]
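A small NumPy sketch of exactly this computation (the printed values match the numbers above up to rounding):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([0.1, 0.2, 0.0, 0.0, 0.0])  # length-2 sequence zero-padded to 5
mask = np.array([1.0, 1.0, 0.0, 0.0, 0.0])    # 1 for real tokens, 0 for padding

weights = softmax(scores)
print(weights.round(2))        # ~[0.21 0.23 0.19 0.19 0.19]

masked = weights * mask        # zero out the padded positions
renormalized = masked / masked.sum()
print(renormalized.round(2))   # ~[0.48 0.52 0.   0.   0.  ]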

            When I do this, I almost always see a performance boost (in the accuracy of my models - I am doing document classification/regression). So why does nobody do this?

            For a while I considered that maybe all that matters is the relative values of the attention weights (i.e., ratios), since the gradient doesn't pass through the zero-padding anyway. But then why would we use softmax at all, as opposed to just exp(.), if normalization doesn't matter? (plus, that wouldn't explain the performance boost...)

            ...

            ANSWER

            Answered 2018-Apr-17 at 21:25

Great question! I believe your concern is valid and zero attention scores for the padded encoder outputs do affect the attention. However, there are a few aspects that you have to keep in mind:

• There are different score functions; the one in tf-rnn-attention uses a simple linear + tanh + linear transformation. But even this score function can learn to output negative scores. If you look at the code and imagine the inputs consist of zeros, the vector v is not necessarily zero due to the bias, and the dot product with u_omega can push it further to low negative numbers (in other words, a plain simple NN with a non-linearity can make both positive and negative predictions; a toy version of such a scorer is sketched after this answer). Low negative scores don't water down the high scores in the softmax.

• Due to the bucketing technique, the sequences within a bucket usually have roughly the same length, so it's unlikely that half of an input sequence is padded with zeros. Of course, it doesn't fix anything; it just means that in real applications the negative effect from the padding is naturally limited.

            • You mentioned it in the end, but I'd like to stress it too: the final attended output is the weighted sum of encoder outputs, i.e. relative values actually matter. Take your own example and compute the weighted sum in this case:

              • the first one is 0.2 * o1 + 0.23 * o2 (the rest is zero)
              • the second one is 0.48 * o1 + 0.52 * o2 (the rest is zero too)


Yes, the magnitude of the second vector is roughly two times bigger, but that isn't a critical issue, because it then goes into a linear layer. And the relative attention on o2 is only about 7% higher than it would have been with masking (in fact the softmax preserves the exact ratio between the real positions; the small difference here comes from rounding).

What this means is that even if the attention weights don't do a good job of learning to ignore zero outputs, the end effect on the output vector is still good enough for the decoder to take the right outputs into account, in this case to concentrate on o2.

Hope this convinces you that re-normalization isn't that critical, though it will probably speed up learning if actually applied.
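The following NumPy sketch is not the tf-rnn-attention code, just a toy illustration of the two points above: an additive (linear + tanh + linear) scorer whose bias can push all-zero padded steps to negative scores, and a check that post-softmax masking changes the attended output mostly in magnitude, not in which encoder output it emphasizes. All weights and sizes here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# (1) Additive scorer: score(h) = u_omega . tanh(W h + b).
# With a non-zero (here hypothetically negative) bias, even an all-zero
# padded step gets a non-zero, possibly strongly negative, score.
hidden_size, attn_size = 4, 3
W = rng.normal(size=(hidden_size, attn_size))
b = np.full(attn_size, -1.0)          # hypothetical learned bias
u_omega = rng.normal(size=attn_size)

def score(h):
    return np.tanh(h @ W + b) @ u_omega

print(score(np.zeros(hidden_size)))   # non-zero even though the input is all zeros

# (2) Effect of masking on the attended output, using the weights from the question.
o1, o2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # toy encoder outputs
unmasked = 0.20 * o1 + 0.23 * o2
masked = 0.48 * o1 + 0.52 * o2
print(unmasked, masked)               # same direction, roughly 2x difference in magnitude
print((0.23 / 0.20) / (0.52 / 0.48))  # ~1.06: o2's relative weight barely changes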

            Source https://stackoverflow.com/questions/49522673

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install hierarchical-attention-networks

            You can download it from GitHub.
            You can use hierarchical-attention-networks like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check for and ask them on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/ematvey/hierarchical-attention-networks.git

          • CLI

            gh repo clone ematvey/hierarchical-attention-networks

• SSH

            git@github.com:ematvey/hierarchical-attention-networks.git


Consider Popular Natural Language Processing Libraries

• transformers by huggingface
• funNLP by fighting41love
• bert by google-research
• jieba by fxsjy
• Python by geekcomputers

Try Top Libraries by ematvey

• pybacktest (Python)
• gostat (Go)
• ai-copywriter (Python)
• go-fn (Go)
• pytorch-rnn (Python)