my-neural-net | Initial implementation of a combination of HTM and Reinforcement Learning | Reinforcement Learning library
kandi X-RAY | my-neural-net Summary
Initial implementation of a combination from HTM and RL for a Software Agent in NUPIC.
Top functions reviewed by kandi - BETA
- Compute predictions for a given timestep
- Calculate learning rate based on the given segments
- Returns a subset of the cells with the minimum number of segments tied to the minicolumn
- Choose the best pair of matching segments
- Returns the Spec object
- Get additional parameters for a spatial pooler
- Builds the arguments for the given f
- Return default spec for TemporalPoolerRegion
- Performs the proximal dendrite computation
- Decrement the pooling activation
- Add to the pooling activation (both pooling-activation operations are sketched after this list)
- Adapts the permanences of synapses
- Compute next record
- Apply filter to image data
- Gets the next record from the data source
- Read from protobuf
- Create a Spatial Pooler instance from a proto
- Returns the pooling implementation
- Calculates the number of connections for each input to column
- Generate a random potential pool (see the sketch after this list)
- Returns the number of cells in the table
- Returns the number of columns in the table
- Writes the table to the given proto
- Write the UnionTemporalPooler to the given proto
- Project a matrix onto a subspace
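Several of the functions above refer to spatial-pooler and union-pooler mechanics. Below is a minimal NumPy sketch of two of those ideas: generating a random potential pool, and adding to and decrementing a pooling activation. The function names and signatures are illustrative assumptions, not this library's or NUPIC's actual API.

```python
# Illustrative sketch only: names and signatures are assumptions,
# not the actual my-neural-net or NUPIC API.
import numpy as np

def random_potential_pool(num_columns, num_inputs, potential_pct, seed=None):
    """Boolean mask (num_columns, num_inputs) of potential synapses.

    Each column may only ever form synapses with the inputs selected here;
    potential_pct controls the fraction of inputs in each column's pool.
    """
    rng = np.random.default_rng(seed)
    pool_size = int(round(potential_pct * num_inputs))
    mask = np.zeros((num_columns, num_inputs), dtype=bool)
    for col in range(num_columns):
        # Sample without replacement so a column never picks an input twice.
        mask[col, rng.choice(num_inputs, size=pool_size, replace=False)] = True
    return mask

def add_pooling_activation(activation, active_cells, amount=1.0):
    """Raise the pooling activation of the currently active cells."""
    activation[active_cells] += amount
    return activation

def decrement_pooling_activation(activation, decay=0.1):
    """Decay every cell's pooling activation toward zero, clipping at zero."""
    return np.maximum(activation - decay, 0.0)

# Usage: cells keep a persistent activation that active input boosts
# and time steadily erodes; that persistence is the core of union pooling.
pool = random_potential_pool(num_columns=2048, num_inputs=1024, potential_pct=0.5, seed=42)
act = np.zeros(8)
act = add_pooling_activation(act, active_cells=[1, 4])
act = decrement_pooling_activation(act)
print(pool.sum(axis=1)[0], act)  # 512 potential inputs per column; decayed activations
```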
Community Discussions
Trending Discussions on my-neural-net
QUESTION
I trained a model and got decent results, but then I got greedy: I wanted even more accuracy, so I trained the model for longer, and longer, and longer, but to no avail; nothing happens! According to theory, at some point the validation accuracy must start to decrease after too much training (and the loss must start to INCREASE), but this never seems to happen. So I figured maybe the NN was too simple to ever overfit, so I increased its capacity until I had millions of parameters, trained it for 10,000 epochs, and still no overfitting happened.
The same question was asked here, but the answers there are anything but satisfying.
What does that mean?
ANSWER
Answered 2019-Apr-22 at 13:10

This is a known phenomenon with high-capacity models. They are surprisingly resistant to overfitting, which contradicts classical statistical learning theory, according to which you are bound to overfit without explicit regularization. For example, this paper says
most of deep neural networks with learned parameters often generalize very well empirically, even equipped with much more effective parameters than the number of training samples, i.e. high capacity... Thus, statistical learning theory cannot explain the generalization ability of deep learning models.
This paper and this one also discuss it; you can keep following the references in those papers to read more.
Personally, I have never seen a high-capacity model overfit, even after training for tens of thousands of epochs. If you want an example that does overfit: take LeNet-5 on CIFAR-10 with ReLU activations and without dropout, and train it using SGD with learning rate 0.01. The model has ~60,000 trainable parameters, which is the same as the number of samples in CIFAR-10 (a low-capacity model). After at most 500-1000 epochs you are going to see very clear overfitting, with loss and error increasing over time.
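The experiment above is concrete enough to sketch. Below is a minimal PyTorch version (the framework and all names are assumptions; the answer does not specify an implementation): LeNet-5 with ReLU activations and no dropout, trained on CIFAR-10 with plain SGD at learning rate 0.01, printing the test loss each epoch so the eventual increase is visible.

```python
# Sketch of the overfitting experiment described above, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class LeNet5(nn.Module):
    """Classic LeNet-5 with ReLU activations and no dropout (~62k parameters)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, kernel_size=5)   # CIFAR-10 images are 3x32x32
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)    # 32x32 -> 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)    # 14x14 -> 10x10 -> 5x5
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

def main():
    tfm = transforms.ToTensor()
    train = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
    test = datasets.CIFAR10("data", train=False, download=True, transform=tfm)
    train_dl = DataLoader(train, batch_size=128, shuffle=True)
    test_dl = DataLoader(test, batch_size=256)

    model = LeNet5()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)  # plain SGD, lr from the answer
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(1000):
        model.train()
        for x, y in train_dl:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # The test loss is the overfitting signal: per the answer above, it
        # should bottom out and then climb within roughly 500-1000 epochs.
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for x, y in test_dl:
                total += loss_fn(model(x), y).item() * y.size(0)
                n += y.size(0)
        print(f"epoch {epoch}: test loss {total / n:.4f}")

if __name__ == "__main__":
    main()
```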
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install my-neural-net
- Parameterize verbosity level of debug print-out (e.g. Indices)
- Refactor code and documentation (simplify some components that are based on NUPIC components)
- Support/optimize parallel training of multiple agents in the cloud
- Finish serialization implementation (SparseMatrixConnections from NUPIC Core missing)
- Add support for player-guided exploration
- Advanced visualization and debugging tools