LSTM-Neural-Network-for-Time-Series-Prediction | LSTM built using the Keras Python package to predict time series steps and sequences | Machine Learning library
kandi X-RAY | LSTM-Neural-Network-for-Time-Series-Prediction Summary
LSTM built using the Keras Python package to predict time series steps and sequences. Includes sine wave and stock market data.
Top functions reviewed by kandi - BETA
- Builds the model
- Stop the timer
- Start the timer
- Get test data
- Normalise a list of windows
- Generate training data
- Get next window
- Train the model
- Train the generator
- Get training data
- Predict sequences with multiple steps
- Plot the results of multiple predictions
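Several of the routines in that list ("Get next window", "Normalise a list of windows") follow the pattern from Jakob Aungiers' accompanying guide. A minimal sketch of the two (the per-window formula n_i = p_i / p_0 - 1 is as described in the guide; function names are illustrative and the repository's own code may differ in detail):

```python
import numpy as np

def make_windows(series, seq_len=30):
    """Slide a window of length seq_len over a 1-D series.

    Returns shape (num_windows, seq_len); Keras LSTMs expect a trailing
    feature axis, so reshape to (num_windows, seq_len, 1) before training.
    """
    series = np.asarray(series, dtype=float)
    return np.array([series[i:i + seq_len]
                     for i in range(len(series) - seq_len + 1)])

def normalise_windows(window_data):
    """Normalise each window relative to its first value: n_i = p_i / p_0 - 1."""
    out = []
    for w in window_data:
        w = np.asarray(w, dtype=float)
        out.append(w / w[0] - 1.0)
    return np.array(out)

windows = make_windows(np.arange(10, 20), seq_len=4)
print(windows.shape)                 # (7, 4)
print(normalise_windows(windows)[0])
```

Normalising each window against its own first value lets the model learn relative changes rather than absolute prices, which is why it works across sine waves and stock data alike.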
LSTM-Neural-Network-for-Time-Series-Prediction Key Features
LSTM-Neural-Network-for-Time-Series-Prediction Examples and Code Snippets
Community Discussions
Trending Discussions on LSTM-Neural-Network-for-Time-Series-Prediction
QUESTION
My question is very similar to what it seems this post is asking, although that post doesn't pose a satisfactory solution. To elaborate: I am currently using Keras with the TensorFlow backend and a sequential LSTM model. The end goal is that I have n time-dependent sequences with equal time steps (the same number of points in each sequence, with the points all the same time apart), and I would like to feed all n sequences into the same network so it can use correlations between the sequences to better predict the next step for each one. My ideal output would be an n-element 1-D array, with array[0] holding the next-step prediction for sequence_1, array[1] for sequence_2, and so on.
My inputs are sequences of single values, so each of n inputs can be parsed into a 1-D array.
I was able to get a working model for each sequence independently using the code at the end of this guide by Jakob Aungiers; my difficulty is adapting it to accept multiple sequences at once and correlate between them (i.e. analyse them in parallel). I believe the issue is related to the shape of my input data, which is currently a 4-D numpy array because of how Jakob's guide splits the inputs into sub-sequences of 30 elements each to analyse incrementally, although I could also be completely missing the target here. My code (which is mostly Jakob's; I'm not trying to take credit for anything that isn't mine) presently looks like this:
As-is, this complains with "ValueError: Error when checking target: expected activation_1 to have shape (None, 4) but got array with shape (4, 490)". I'm sure there are plenty of other issues, but I'd love some direction on how to achieve what I'm describing. Does anything stick out immediately to anyone? Any help you could give will be greatly appreciated.
Thanks!
-Eric
...ANSWER
Answered 2017-Oct-23 at 19:45
Keras is already prepared to work with batches containing many sequences; there is no secret at all.
There are two possible approaches, though:
- You input your entire sequences (all steps at once) and predict n results
- You input only one step of all sequences and predict the next step in a loop
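Either way, Keras treats the n sequences as the feature axis of one multivariate series. A shape sketch of both approaches (all names and sizes here are illustrative, taken from the numbers in the question):

```python
import numpy as np

n_sequences = 4   # the question's n parallel, correlated series
n_samples = 490   # training windows
timesteps = 30    # window length fed to the LSTM

# Approach 1: feed whole windows of all sequences at once.
# Keras LSTM input shape is (samples, timesteps, features); the n
# sequences become the feature axis of a single multivariate series.
x = np.random.rand(n_samples, timesteps, n_sequences)

# A Dense(n_sequences) output layer then predicts the next step of every
# series, so targets must be (samples, n_sequences).  The ValueError in
# the question (expected (None, 4), got (4, 490)) suggests the targets
# are laid out the other way round; a transpose fixes that layout:
y_wrong = np.zeros((n_sequences, n_samples))   # (4, 490)
y = y_wrong.T                                  # (490, 4)

# Approach 2: feed one step of all sequences at a time and predict in a
# loop (a stateful LSTM in Keras); each input is (samples, 1, features).
x_step = x[:, :1, :]

print(x.shape, y.shape, x_step.shape)
```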
QUESTION
Most of my code is based on this article and the issue I'm asking about is evident there, but also in my own testing. It is a sequential model with LSTM layers.
Here is a plotted prediction over real data from a model that was trained with around 20 small data sets for one epoch.
Here is another plot but this time with a model trained on more data for 10 epochs.
What causes this and how can I fix it? Also, that first link I sent shows the same result at the bottom: 1 epoch does great and 3500 epochs is terrible.
Furthermore, when I run a training session for the higher data count but with only 1 epoch, I get identical results to the second plot.
What could be causing this issue?
...ANSWER
Answered 2018-Jul-16 at 16:24
A few questions:
- Is this graph for training data or validation data?
- Do you consider it better because:
- The graph seems cool?
- You actually have a better "loss" value?
- If so, was it training loss?
- Or validation loss?
The early graph seems interesting, indeed, but take a close look at it:
I clearly see huge predicted valleys where the expected data should be a peak
Is this really better? It looks like a random wave that is completely out of phase, meaning that a straight line would indeed represent a better loss than this.
Take a look at the "training loss": this is what can surely tell you whether your model is better or not.
If this is the case and your model isn't reaching the desired output, then you should probably make a more capable model (more layers, more units, a different method, etc.). But be aware that many datasets are simply too random to be learned, no matter how good the model.
Overfitting: training loss gets better, but validation loss gets worse
Suppose you actually do have a better training loss. OK, so your model is indeed getting better.
- Are you plotting training data? - Then this straight line is actually better than a wave out of phase
- Are you plotting validation data?
- What is happening with the validation loss? Better or worse?
If your "validation" loss is getting worse, your model is overfitting. It's memorizing the training data instead of learning generally. You need a less capable model, or a lot of "dropout".
Often, there is an optimal point where the validation loss stops going down while the training loss keeps going down. This is the point to stop training if you're overfitting. Read about the EarlyStopping callback in the Keras documentation.
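The idea behind EarlyStopping can be sketched without Keras at all: stop once the validation loss has gone `patience` consecutive epochs without improving on its best value so far (a toy illustration of the concept, not the Keras implementation):

```python
def early_stopping_epoch(val_losses, patience=2):
    """Return the epoch index at which training would stop, or None.

    Training stops once `patience` consecutive epochs pass without the
    validation loss improving on its best value so far.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# validation loss bottoms out at epoch 2, then rises: stop at epoch 4
print(early_stopping_epoch([0.9, 0.7, 0.6, 0.65, 0.7, 0.8], patience=2))
```

The Keras callback adds refinements (restoring the best weights, a minimum-improvement delta), but the patience logic is the core of it.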
If your training loss is going up, then you've got a real problem there: either a bug, a badly prepared calculation somewhere (if you're using custom layers), or simply a learning rate that is too big.
Reduce the learning rate (divide it by 10, or 100), create and compile a "new" model and restart training.
Another problem?
Then you need to detail your question properly.
QUESTION
I've been following the tutorial here, I have my data loaded in, and I want to predict future data from everything that I currently have in the test set.
Here is the code I have now. I am completely new to ML and python (I usually do Java) so this is like reading Chinese, but I've copied and pasted it. Currently, it predicts from the start of the data, but I want it to start at the end.
...ANSWER
Answered 2018-Mar-26 at 02:05
What is currently going on is that, at each iteration of the loop, the current frame is set to all of the remaining data except the current point. We can keep the same structure and just flip the order of the rows.
At the beginning of your function, you can reverse the data so the predictions start from the end and work towards the beginning by adding the line data = np.flip(data, axis=0), which reverses the order of the rows of data.
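A minimal illustration of what that flip does. Note that flipping along axis 0 reverses the order of the rows (the samples), not the values within each row:

```python
import numpy as np

data = np.array([[1, 2],
                 [3, 4],
                 [5, 6]])

# reverse the sample order so iteration starts from the latest row
flipped = np.flip(data, axis=0)
print(flipped)
# [[5 6]
#  [3 4]
#  [1 2]]
```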
QUESTION
I have the following code that I am hoping to get a forward pass from a 2 layer LSTM:
...ANSWER
Answered 2017-Dec-13 at 16:40
I realised what the problem was. I was trying to extract my model weights using a TensorFlow session (after model fitting), rather than via Keras methods directly (e.g. model.get_weights()). This resulted in weight matrices that made perfect sense dimension-wise but contained the values from the initialization step.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install LSTM-Neural-Network-for-Time-Series-Prediction
You can use LSTM-Neural-Network-for-Time-Series-Prediction like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
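A typical setup along those lines, sketched below. The repository URL is the upstream GitHub project; the environment name is arbitrary, and the clone/install steps are left as comments since they need network access:

```shell
# create and activate an isolated virtual environment
python3 -m venv lstm-env
. lstm-env/bin/activate

# inside the environment, bring the packaging tools up to date:
# pip install --upgrade pip setuptools wheel
# then clone the project and install its requirements:
# git clone https://github.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction.git
# cd LSTM-Neural-Network-for-Time-Series-Prediction && pip install -r requirements.txt

python -c "import sys; print(sys.prefix)"  # confirms the venv's Python is active
```

Keeping the install inside a virtual environment matters here because the project pins TensorFlow/Keras versions that may conflict with system-wide packages.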