hierarchical-attention-networks | TensorFlow implementation of the paper | Natural Language Processing library
kandi X-RAY | hierarchical-attention-networks Summary
This is an implementation of the paper Hierarchical Attention Networks for Document Classification, NAACL 2016.
Top functions reviewed by kandi - BETA
- Attention layer
- Get the shape of a tensor
- Create a sequence masking
- Create a train function
- Count the number of trainable parameters
- Generate a dictionary of feed data
- Normalize a batch of documents
- Reads a train set
- Create an iterator for a batch of documents
- Builds the vocabulary
- Read data from a data file
- Reads the test set
- Evaluate the function
- Calculate softmax cross entropy
- Read a vocabulary from file
- Process data and save to disk
- Load a GloVe word embedding file
- Reads the validation set
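To give a rough sense of what functions like "Attention layer" and "Create a sequence masking" above do, here is a minimal sketch of additive attention pooling with masking in the spirit of the paper. It is an illustration only, written against the TensorFlow 2.x Keras API with made-up names (AttentionPooling, attention_dim), not the repository's actual code.

# Sketch of additive attention pooling over RNN hidden states, in the spirit
# of the HAN word- and sentence-level attention layers. Hypothetical names,
# not taken from this repository.
import tensorflow as tf

class AttentionPooling(tf.keras.layers.Layer):
    def __init__(self, attention_dim=100):
        super().__init__()
        self.proj = tf.keras.layers.Dense(attention_dim, activation="tanh")
        self.context = tf.keras.layers.Dense(1, use_bias=False)

    def call(self, hidden_states, mask=None):
        # hidden_states: [batch, time, hidden]; mask: [batch, time] with 1 for
        # real tokens and 0 for padding.
        scores = tf.squeeze(self.context(self.proj(hidden_states)), axis=-1)
        if mask is not None:
            # Push padded positions to a large negative value so that softmax
            # assigns them (near-)zero weight.
            scores += (tf.cast(mask, scores.dtype) - 1.0) * 1e9
        weights = tf.nn.softmax(scores, axis=-1)               # [batch, time]
        return tf.reduce_sum(hidden_states * weights[:, :, None], axis=1)

Applied to the word-level RNN outputs this produces one vector per sentence; applied again at the sentence level it produces one vector per document, which is the hierarchy the paper describes.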
Community Discussions
Trending Discussions on hierarchical-attention-networks
QUESTION
To be clear, I am referring to "self-attention" of the type described in Hierarchical Attention Networks for Document Classification and implemented in many places, for example: here. I am not referring to the seq2seq type of attention used in encoder-decoder models (i.e. Bahdanau), although my question might apply to that as well... I am just not as familiar with it.
Self-attention basically just computes a weighted average of RNN hidden states (a generalization of mean-pooling, i.e. un-weighted average). When there are variable length sequences in the same batch, they will typically be zero-padded to the length of the longest sequence in the batch (if using dynamic RNN). When the attention weights are computed for each sequence, the final step is a softmax, so the attention weights sum to 1.
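For concreteness, here is a tiny sketch of what I mean (plain NumPy, my own illustration, not taken from any particular implementation). With identical scores the softmax weights are uniform and the pooling reduces exactly to mean-pooling:

# Self-attention pooling: a softmax-weighted average of the RNN hidden states.
# With identical scores the weights are uniform and this is mean-pooling.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

hidden_states = np.random.randn(5, 8)     # 5 time steps, hidden size 8
scores = np.full(5, 0.3)                  # identical score for every step

weights = softmax(scores)                 # [0.2, 0.2, 0.2, 0.2, 0.2]
pooled = weights @ hidden_states          # weighted average over time

assert np.allclose(pooled, hidden_states.mean(axis=0))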
However, in every attention implementation I have seen, there is no care taken to mask out, or otherwise cancel, the effects of the zero-padding on the attention weights. This seems wrong to me, but I fear maybe I am missing something since nobody else seems bothered by this.
For example, consider a sequence of length 2, zero-padded to length 5. Ultimately this leads to the attention weights being computed as the softmax of a similarly 0-padded vector, e.g.:
weights = softmax([0.1, 0.2, 0, 0, 0]) = [0.20, 0.23, 0.19, 0.19, 0.19]
and because exp(0)=1, the zero-padding in effect "waters down" the attention weights. This can be easily fixed, after the softmax operation, by multiplying the weights with a binary mask, i.e.
mask = [1, 1, 0, 0, 0]
and then re-normalizing the weights to sum to 1. Which would result in:
weights = [0.48, 0.52, 0, 0, 0]
When I do this, I almost always see a performance boost (in the accuracy of my models - I am doing document classification/regression). So why does nobody do this?
For a while I considered that maybe all that matters is the relative values of the attention weights (i.e., ratios), since the gradient doesn't pass through the zero-padding anyway. But then why would we use softmax at all, as opposed to just exp(.), if normalization doesn't matter? (plus, that wouldn't explain the performance boost...)
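Here is the toy example above spelled out as a plain NumPy sketch (my own illustration), including the post-softmax mask-and-renormalize fix:

# Softmax over a zero-padded score vector, then the mask + re-normalize fix.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([0.1, 0.2, 0.0, 0.0, 0.0])   # length-2 sequence padded to 5
mask = np.array([1.0, 1.0, 0.0, 0.0, 0.0])

weights = softmax(scores)
print(weights.round(2))                        # [0.21 0.23 0.19 0.19 0.19]

masked = weights * mask
masked = masked / masked.sum()                 # re-normalize to sum to 1
print(masked.round(2))                         # [0.48 0.52 0.   0.   0.  ]

Note that masking and re-normalizing after the softmax gives the same weights as computing the softmax over the two real positions only.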
ANSWER
Answered 2018-Apr-17 at 21:25
Great question! I believe your concern is valid, and zero attention scores for the padded encoder outputs do affect the attention. However, there are a few aspects that you have to keep in mind:
- There are different score functions; the one in tf-rnn-attention uses a simple linear + tanh + linear transformation. Even this score function can learn to output negative scores: if you look at the code and imagine that inputs consists of zeros, the vector v is not necessarily zero due to the bias, and the dot product with u_omega can push it further toward low negative numbers (in other words, a plain simple NN with a non-linearity can make both positive and negative predictions). Low negative scores don't water down the high scores in softmax.
- Due to the bucketing technique, the sequences within a bucket usually have roughly the same length, so it's unlikely that half of an input sequence is padded with zeros. Of course, this doesn't fix anything; it just means that in real applications the negative effect of the padding is naturally limited.
- You mentioned it in the end, but I'd like to stress it too: the final attended output is the weighted sum of encoder outputs, i.e. the relative values actually matter. Take your own example and compute the weighted sum in both cases:
  - the first one is 0.2 * o1 + 0.23 * o2 (the rest is zero)
  - the second one is 0.48 * o1 + 0.52 * o2 (the rest is zero too)
  Yes, the magnitude of the second vector is about two times bigger, but that isn't a critical issue, because it then goes to the linear layer. But the relative attention on o2 is only about 7% higher than it would have been with masking (that difference comes from the rounding; the re-normalization itself leaves the o1:o2 ratio unchanged and only rescales the weights). What this means is that even if the attention weights don't do a good job of learning to ignore zero outputs, the end effect on the output vector is still good enough for the decoder to take the right outputs into account, in this case to concentrate on o2.
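As a small numeric check of the first and third points above (a toy sketch of my own, not code from tf-rnn-attention):

# First point: an all-zero (padded) time step still gets a non-zero score from
# a linear + tanh + linear score function, because of the bias term.
# Third point: the post-softmax re-normalization only rescales the weights on
# o1 and o2; their ratio is unchanged.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                       # hidden size (arbitrary)
W = rng.normal(scale=0.1, size=(d, d))
b = rng.normal(scale=0.1, size=d)
u_omega = rng.normal(scale=0.1, size=d)

zero_step = np.zeros(d)                     # a padded time step
score = np.tanh(zero_step @ W + b) @ u_omega
print(score)                                # non-zero; sign depends on b and u_omega

unmasked = np.array([0.2075, 0.2293])       # weights on o1, o2 from softmax([0.1, 0.2, 0, 0, 0])
masked = unmasked / unmasked.sum()          # [0.475, 0.525]
print(unmasked[1] / unmasked[0])            # ~1.105
print(masked[1] / masked[0])                # ~1.105: same ratio, only the scale differs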
Hope this convinces you that re-normalization isn't that critical, though it will probably speed up learning if actually applied.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install hierarchical-attention-networks
You can use hierarchical-attention-networks like any standard Python library. You will need a development environment with a Python distribution (including header files), a compiler, pip, and git installed. Make sure that pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid making changes to the system.