BackPropNetwork | Backpropagation using Stochastic gradient descent
kandi X-RAY | BackPropNetwork Summary
Backpropagation using Stochastic gradient descent
Community Discussions
Trending Discussions on BackPropNetwork
QUESTION
So I am hitting a wall with my C# machine learning project. I am attempting to train an algorithm to recognize numbers. Since this is only an exercise, I have an image set of 200 numbers (20 each for 0 to 9). Obviously, if I wanted a properly trained algorithm I would use a more robust training set, but this is just an exercise to see if I can get it working in the first place. I can get it up to 60% accuracy, but not past that. I have been doing some research into activation functions, and from what I understand, LeakyReLU is the function I should be using. However, if I use the LeakyReLU function across the board then it doesn't learn anything, and I'm not sure how to use LeakyReLU as an output activation function. Using sigmoid or tanh as an output activation function makes more sense to me. Here is a block of code that creates the array that feeds the backpropagation:
...

ANSWER
Answered 2020-Jun-02 at 09:04

Added the answer for future visitors.
Try converting the grayscale values from the 0-255 interval to the 0-1 interval: just divide each pixel value by 255. The fact that LeakyReLU performed better than sigmoid or tanh is because the input values are too large; large in the sense that they saturate tanh and sigmoid, so their outputs get rounded by the computer to integers (essentially 0 or 1).
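A minimal sketch of that preprocessing step in C# (the method and array names are illustrative, not taken from the asker's code):

```csharp
// Sketch: scale 8-bit grayscale pixels (0-255) into the 0-1 interval before
// feeding them to the network. "rawPixels" is a hypothetical name for the
// flattened image data that gets passed to backpropagation.
static double[] ScalePixels(byte[] rawPixels)
{
    var inputs = new double[rawPixels.Length];
    for (int i = 0; i < rawPixels.Length; i++)
    {
        inputs[i] = rawPixels[i] / 255.0; // 0 -> 0.0, 255 -> 1.0
    }
    return inputs;
}
```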
Look carefully at how the neural network weights are initialised if you intend to use tanh or sigmoid.
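For example (a sketch, not code from the repository), Xavier/Glorot-style initialisation keeps the pre-activations small enough that sigmoid and tanh do not start out saturated:

```csharp
using System;

// Sketch of Xavier/Glorot uniform initialisation, a common choice for
// sigmoid/tanh layers: draw weights from U(-limit, +limit) with
// limit = sqrt(6 / (fanIn + fanOut)).
static double[,] InitWeights(int fanIn, int fanOut, Random rng)
{
    double limit = Math.Sqrt(6.0 / (fanIn + fanOut));
    var weights = new double[fanIn, fanOut];
    for (int i = 0; i < fanIn; i++)
        for (int j = 0; j < fanOut; j++)
            weights[i, j] = (rng.NextDouble() * 2.0 - 1.0) * limit; // uniform in [-limit, +limit]
    return weights;
}
```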
Since this is a classification problem, I recommend you use a softmax activation function in your output layer.
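A numerically stable softmax could look like this (again just a sketch; the logits would come from whatever the output layer currently produces):

```csharp
using System;
using System.Linq;

// Numerically stable softmax: turns raw output-layer scores ("logits") into
// class probabilities that sum to 1. Subtracting the max score before
// exponentiating avoids overflow in Math.Exp.
static double[] Softmax(double[] logits)
{
    double max = logits.Max();
    double[] exps = logits.Select(z => Math.Exp(z - max)).ToArray();
    double sum = exps.Sum();
    return exps.Select(e => e / sum).ToArray();
}
```

Paired with a cross-entropy loss, this also keeps the backpropagation simple: the output-layer error term becomes the predicted probability minus the one-hot target.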
After preprocessing the data, @JMC0352 got only 88% accuracy.
The reason you're only getting 88% is that a neural network (alone) is not well suited to image recognition; convolutional neural networks are used for that. To understand the problem intuitively, you can picture a raw neural network as trying to make sense of all the pixels together, whereas a conv net makes sense of relatively close pixels.
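To make the "relatively close pixels" point concrete, here is a toy convolution step (illustrative only, not part of BackPropNetwork): each output value is computed from a small neighbourhood of the image rather than from every pixel at once.

```csharp
// Toy 2D convolution (valid padding, stride 1): every output cell depends only
// on a k x k neighbourhood of the input, unlike a fully connected layer that
// mixes all pixels together.
static double[,] Convolve(double[,] image, double[,] kernel)
{
    int kh = kernel.GetLength(0), kw = kernel.GetLength(1);
    int outH = image.GetLength(0) - kh + 1;
    int outW = image.GetLength(1) - kw + 1;
    var output = new double[outH, outW];
    for (int y = 0; y < outH; y++)
        for (int x = 0; x < outW; x++)
        {
            double sum = 0.0;
            for (int i = 0; i < kh; i++)
                for (int j = 0; j < kw; j++)
                    sum += image[y + i, x + j] * kernel[i, j];
            output[y, x] = sum;
        }
    return output;
}
```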
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported