AutoEncoder-with-pytorch
kandi X-RAY | AutoEncoder-with-pytorch Summary
Top functions reviewed by kandi - BETA
- Forward function
- Convert a NumPy array to a bit array
- Mean squared error
Community Discussions
Trending Discussions on AutoEncoder-with-pytorch
QUESTION
Specifically, what spurred this question is the return_sequences argument of TensorFlow's version of an LSTM layer.
The docs say:
Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.
I've seen some implementations, especially autoencoders that use this argument to strip everything but the last element in the output sequence as the output of the 'encoder' half of the autoencoder.
Below are three different implementations. I'd like to understand the reasons behind the differences, as they seem like very large differences but all call themselves the same thing.
Example 1 (TensorFlow): This implementation strips away all outputs of the LSTM except the last element of the sequence, and then repeats that element some number of times to reconstruct the sequence:
...

ANSWER
Answered 2020-Dec-08 at 15:43

There is no official or correct way of designing the architecture of an LSTM based autoencoder... The only specifics the name provides is that the model should be an Autoencoder and that it should use an LSTM layer somewhere.
The implementations you found are each different and unique on their own even though they could be used for the same task.
Let's describe them:
TF implementation:
- It assumes the input has only one channel, meaning that each element in the sequence is just a number and that the data is already preprocessed.
- The default behaviour of the LSTM layer in Keras/TF is to output only the last step of the sequence; you can make it output every step with the return_sequences parameter.
- In this case the input data has been shrunk to (batch_size, LSTM_units).
- Consider that the last output of an LSTM is of course a function of the previous outputs (especially if it is a stateful LSTM).
- It applies a Dense(1) as the last layer in order to get the same shape as the input.
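Since the original snippets are elided above, here is a NumPy shape walk-through of the data flow those bullet points describe. The dimension sizes are made-up assumptions for illustration, not values from the repository:

```python
import numpy as np

# Illustrative sizes — not taken from the original code.
batch_size, seq_len, lstm_units = 4, 10, 32

# Encoder: with return_sequences=False the LSTM keeps only the last
# step, collapsing (batch_size, seq_len, 1) to (batch_size, lstm_units).
encoded = np.random.randn(batch_size, lstm_units)

# RepeatVector: tile the bottleneck back out to seq_len time steps.
repeated = np.repeat(encoded[:, None, :], seq_len, axis=1)

# Dense(1) applied per time step projects back to one channel,
# recovering the input shape (batch_size, seq_len, 1).
w = np.random.randn(lstm_units, 1)
reconstruction = repeated @ w

print(reconstruction.shape)  # (4, 10, 1)
```

This makes the "shrunk to (batch_size, LSTM_units)" point concrete: the whole sequence has to be reconstructed from that single bottleneck vector.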
PyTorch 1:
- They apply an embedding to the input before it is fed to the LSTM.
- This is standard practice, and it helps, for example, to transform each input element into vector form (see word2vec, where each word in a text sequence is mapped into a vector space). It is only a preprocessing step that gives the data a more meaningful form.
- This does not defeat the idea of the LSTM autoencoder, because the embedding is applied independently to each element of the input sequence, so the sequence is not yet encoded when it enters the LSTM layer.
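The "applied independently to each element" point can be shown with a plain lookup table, which is all an embedding layer is. Sizes here are illustrative assumptions:

```python
import numpy as np

# Illustrative sizes — not from the repository.
vocab_size, emb_dim = 100, 16

# An embedding is just a table with one learned vector per token id.
embedding = np.random.randn(vocab_size, emb_dim)

tokens = np.array([3, 17, 42, 7, 99, 0])  # one input sequence of token ids

# Each token is mapped to its vector independently of its neighbours,
# so no sequence information is mixed in before the LSTM sees it.
vectors = embedding[tokens]

print(vectors.shape)  # (6, 16)
```

Note that vectors[0] depends only on tokens[0], never on the surrounding tokens, which is why this step does not count as "encoding" the sequence.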
PyTorch 2:
- In this case the input shape is not (seq_len, 1) as in the first TF example, so the decoder doesn't need a Dense layer afterwards. The author used a number of units in the LSTM layer equal to the input feature size.
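Why no Dense layer is needed in that case comes down to shapes. A minimal NumPy sketch, with made-up sizes:

```python
import numpy as np

# Assumed sizes for illustration only.
batch_size, seq_len, n_features = 4, 10, 8
x = np.random.randn(batch_size, seq_len, n_features)

# If the decoder LSTM is built with hidden size equal to n_features and
# emits one output per time step, its output already matches the input
# shape, so no Dense(1)-style projection is required afterwards.
decoder_units = n_features
decoder_out = np.random.randn(batch_size, seq_len, decoder_units)

print(decoder_out.shape == x.shape)  # True
```

In the TF example above, by contrast, the decoder's per-step output has lstm_units channels while the input has 1, which is exactly what the Dense(1) fixes.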
In the end you choose the architecture of your model depending on the data you want to train on, specifically: the nature (text, audio, images), the input shape, the amount of data you have and so on...
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install AutoEncoder-with-pytorch
You can use AutoEncoder-with-pytorch like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
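A minimal setup sketch of the steps described above. The repository's install source is not given here, so only the generic environment steps are shown:

```shell
# Create an isolated virtual environment so system packages stay untouched
python3 -m venv .venv
. .venv/bin/activate

# Keep the packaging toolchain current, as recommended above
pip install --upgrade pip setuptools wheel
```

From there, install the library itself from wherever you obtained it (PyPI or a git clone), along with torch, inside the activated environment.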