levenberg-marquardt | Curve fitting method in JavaScript
kandi X-RAY | levenberg-marquardt Summary
Curve fitting method in JavaScript
Top functions reviewed by kandi - BETA
- Run the Levenberg-Marquardt algorithm.
- Matrix function.
levenberg-marquardt Key Features
levenberg-marquardt Examples and Code Snippets
Community Discussions
Trending Discussions on levenberg-marquardt
QUESTION
I am trying to fit a Gaussian model onto Gaussian-distributed data (x, y) using scipy's curve_fit. I am trying to tweak the parameters of the fitting in order to get a better fit. I saw that curve_fit calls scipy.optimize.leastsq with the LM (Levenberg-Marquardt) method. It seems to me that it constructs a function that evaluates the least-squares criterion at each data point. In my example, I have 8 data points. In my comprehension, and according to scipy's documentation, gtol is "Orthogonality desired between the function vector and the columns of the Jacobian."
...ANSWER
Answered 2021-Dec-23 at 11:46: From scipy/optimize/minpack/lmder.f, we find a more detailed description.
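As a hedged sketch of the setup being described (the 8 data points, model, and starting values below are illustrative, not the asker's), gtol can be set explicitly through curve_fit, which forwards extra keyword arguments to the underlying Levenberg-Marquardt solver:

```python
# Hedged sketch: Gaussian fit with curve_fit, passing gtol to the LM solver.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

x = np.linspace(-3, 3, 8)                      # 8 data points, as in the question
rng = np.random.default_rng(0)
y = gaussian(x, 2.0, 0.3, 1.1) + rng.normal(0, 0.05, x.size)

# Extra keyword arguments are forwarded to the LM backend (leastsq), so gtol --
# the desired orthogonality between the residual vector and the Jacobian
# columns -- can be tightened or loosened here.
popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0], gtol=1e-12)
print(popt)
```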
QUESTION
I'm working on some code that uses the GNU Scientific Library (GSL)'s nonlinear least-squares algorithm for curve fitting.
I have been successful in obtaining working code that estimates the right parameters from the fitting analysis, using a C++ wrapper from https://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp.
Now I would like to fix some of the parameters of the function being fit, and I would like to modify the function so that I can pass in the value of the parameter to be fixed.
Any idea on how to do this? I'm showing the full code here.
This is the code for performing nonlinear least-squares fitting:
...ANSWER
Answered 2021-Sep-29 at 16:32: OK. Here's the answer, based on the code linked at http://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp. However, this is not the code posted in the question: you should update your question accordingly so that others may benefit from both the question and the answer.
So, basically, the main problem is that GSL is a library written in pure C, whereas you use a high-level wrapper written in C++, published at the aforementioned link. While the wrapper is written pretty well in modern C++, it has one basic problem: it is "stiff" - it can be used only for a subclass of the problems it was designed for, and this subclass is a rather narrow subset of the capabilities offered by the original C code.
Let's try to improve it a bit and start from how the wrapper is supposed to be used:
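The answer goes on to rework the C++ wrapper itself; as a language-agnostic illustration of the underlying idea, here is a hedged Python sketch (the model and values are illustrative) showing how a parameter can be fixed by binding its value into a closure, so the optimizer only sees the free parameters:

```python
# Hedged sketch (Python, illustrative model): fix one parameter by closing over it,
# so the fit only adjusts the remaining free parameters.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 4, 30)
rng = np.random.default_rng(1)
y = model(x, 5.0, 1.3, 0.5) + rng.normal(0, 0.05, x.size)

b_fixed = 1.3                                  # the parameter value we want to freeze

def model_fixed_b(x, a, c):
    return model(x, a, b_fixed, c)             # only a and c remain free

popt, pcov = curve_fit(model_fixed_b, x, y, p0=[1.0, 0.0])
print(popt)                                    # estimates for a and c with b held fixed
```

In C++, the analogous approach is to let the function object capture the fixed values so that only the free parameters are exposed to GSL.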
QUESTION
I am using lmfit to solve a non-linear optimization problem. It works fine up to the point where I try to implement a measurement error as the standard deviation of the dependent variable y (sigma_y). My problem is that I cannot interpret the information criteria properly. When implementing the weighted residual return (model - y)/(sigma_y), they jump from really low to very high values.
i.e. (left: return (model - y); right: return (model - y)/(sigma_y)):
- chi-square: 0.00159805 -> 47.3184972
- reduced chi-square: 1.7756e-04 -> 5.25761080 (expectation value is 1; see the SO discussion)
- Akaike info crit: -93.2055413 -> 20.0490661 (the more negative, the better)
- Bayesian info crit: -92.4097507 -> 20.8448566 (the more negative, the better)
My guess is that this is somehow connected to bad usage of lmfit (wrong calculation of the information criteria, bad error scaling) or to a general lack of understanding of these criteria (to me, a reduced chi-square of 5.258 (under-estimated) or 1.776e-4 (over-estimated) sounds like a really poor fit, but the plot of the residuals etc. looks okay to me...).
Here is my example code that reproduces the problem:
...ANSWER
Answered 2021-Jun-12 at 02:48: Well, in order for the magnitude of chi-square to be meaningful (for example, for it to be around (N_data - N_varys)), the scale of the uncertainties has to be correct -- giving the 1-sigma standard deviation for each observation.
It's not really possible for lmfit to detect whether the sigma you use has the right scale.
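As a hedged sketch of the weighting being discussed (the model, data, and sigma_y below are illustrative), dividing the residual by the per-point 1-sigma uncertainty is what makes chi-square, reduced chi-square, AIC, and BIC comparable across fits:

```python
# Hedged sketch: weight the lmfit residual by the 1-sigma measurement error,
# so reduced chi-square approaches 1 for a good fit (illustrative model and data).
import numpy as np
import lmfit

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
sigma_y = 0.1                                  # assumed 1-sigma measurement error
y = 2.5 * np.exp(-0.4 * x) + rng.normal(0, sigma_y, x.size)

def residual(params, x, y, sigma_y):
    model = params["amp"] * np.exp(-params["decay"] * x)
    return (model - y) / sigma_y               # weighted residual

params = lmfit.Parameters()
params.add("amp", value=1.0)
params.add("decay", value=0.1)

result = lmfit.minimize(residual, params, args=(x, y, sigma_y))
print(lmfit.fit_report(result))                # chi-square, AIC, BIC reflect the weighting
```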
QUESTION
I'm using Eigen's Levenberg-Marquardt implementation and wondering how to set some boundaries on the parameters which should be optimized.
As I'm migrating some GNU Octave programs to Eigen, I expected that there might be some bounds that can easily be provided as parameters to the module.
The layout of my implementation is nearly the same as in this example. I'm not providing the df() implementation, but rather use Eigen::NumericalDiff in order to approximate it.
So how do I enforce some boundaries on the parameters which are supplied to minimize()? I thought about setting the errors (fvec) in operator() to some high values when the parameters leave my expected ranges, but in some small tests this produced strange results.
...ANSWER
Answered 2021-Jun-04 at 08:42: I found a solution that is at least working for me.
The idea is to increase the error vector once the parameters leave their sanity bounds.
This can be achieved by the following function:
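As a hedged illustration of that idea in Python rather than Eigen/C++ (the model, bounds, and penalty factor are illustrative), the residual vector is simply scaled up whenever a parameter leaves its allowed range:

```python
# Hedged sketch: penalize the residual vector when parameters leave their bounds,
# mirroring the "increase the error vector" idea from the answer (illustrative model).
import numpy as np
from scipy.optimize import leastsq

x = np.linspace(0, 5, 40)
y = 1.7 * np.exp(-0.8 * x)                     # illustrative data

LOWER = np.array([0.0, 0.0])                   # assumed lower bounds for (a, k)
UPPER = np.array([10.0, 5.0])                  # assumed upper bounds for (a, k)

def residuals(p, x, y):
    a, k = p
    res = a * np.exp(-k * x) - y
    # scale the residuals up in proportion to how far p lies outside the bounds
    violation = np.maximum(LOWER - p, 0.0) + np.maximum(p - UPPER, 0.0)
    return res * (1.0 + 1e3 * violation.sum())

p_opt, ier = leastsq(residuals, [1.0, 1.0], args=(x, y))
print(p_opt)
```

Note that scipy.optimize.least_squares accepts a bounds argument natively, which is usually preferable to a hand-rolled penalty; the sketch above only mirrors the Eigen workaround.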
QUESTION
My code is very simple for one layer of 20 neurons:
...ANSWER
Answered 2021-Apr-20 at 11:24: According to the documentation for feedforwardnet, the default setting for this function is to train with Levenberg-Marquardt backpropagation, a.k.a. damped least-squares -- the feedforwardnet(20, 'trainlm') option.
As for the data split, the default seems to be a random 0.7-0.15-0.15 train-validation-test split, using the dividerand function.
From the trainlm documentation page:
trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. trainlm is often the fastest backpropagation algorithm in the toolbox, and is highly recommended as a first-choice supervised algorithm, although it does require more memory than other algorithms.
Training occurs according to trainlm training parameters, shown here with their default values:
- net.trainParam.epochs: Maximum number of epochs to train. The default value is 1000.
- net.trainParam.goal: Performance goal. The default value is 0.
- net.trainParam.max_fail: Maximum validation failures. The default value is 6.
- net.trainParam.min_grad: Minimum performance gradient. The default value is 1e-7.
- net.trainParam.mu: Initial mu. The default value is 0.001.
- net.trainParam.mu_dec: Decrease factor for mu. The default value is 0.1.
- net.trainParam.mu_inc: Increase factor for mu. The default value is 10.
- net.trainParam.mu_max: Maximum value for mu. The default value is 1e10.
- net.trainParam.show: Epochs between displays (NaN for no displays). The default value is 25.
- net.trainParam.showCommandLine: Generate command-line output. The default value is false.
- net.trainParam.showWindow: Show training GUI. The default value is true.
- net.trainParam.time: Maximum time to train in seconds. The default value is inf.
Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row. Test vectors are used as a further check that the network is generalizing well, but do not have any effect on training.
From Divide Data for Optimal Neural Network Training: MATLAB provides 4 built-in functions for splitting data:
- Divide the data randomly (default): dividerand
- Divide the data into contiguous blocks: divideblock
- Divide the data using an interleaved selection: divideint
- Divide the data by index: divideind
You can access or change the division function for your network with the property net.divideFcn.
Each of the division functions takes parameters that customize its behavior. These values are stored and can be changed with the network property net.divideParam.
QUESTION
I have implemented a 3D Gaussian fit using scipy.optimize.leastsq, and now I would like to tweak the arguments ftol and xtol to optimize the performance. However, I don't understand the "units" of these two parameters well enough to make a proper choice. Is it possible to calculate these two parameters from the results? That would give me an understanding of how to choose them. My data are numpy arrays of np.uint8. I tried to read the FORTRAN source code of MINPACK, but my FORTRAN knowledge is zero. I also read up on the Levenberg-Marquardt algorithm, but I could not really get a number that was below the ftol, for example.
Here is a minimal example of what I do:
...ANSWER
Answered 2021-Mar-10 at 11:30: Since you are giving a function without the gradient, the method called is lmdif. Instead of gradients, it will use a forward-difference gradient estimate, f(x + delta) - f(x) ~ delta * df(x)/dx (writing x for the parameter).
In scipy/optimize/minpack/lmdif.f you find the following description.
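As a hedged sketch (a 1-D Gaussian rather than the asker's 3-D data; all values are illustrative), ftol and xtol are passed directly to leastsq: ftol is a relative tolerance on the reduction of the sum of squares, and xtol is a relative tolerance on the change of the parameter vector:

```python
# Hedged sketch: explicit ftol/xtol on a simple Gaussian fit (illustrative data).
import numpy as np
from scipy.optimize import leastsq

x = np.linspace(-5, 5, 100)
rng = np.random.default_rng(2)
y = 3.0 * np.exp(-(x - 0.5) ** 2 / (2 * 1.2 ** 2)) + rng.normal(0, 0.05, x.size)

def residuals(p, x, y):
    amp, mu, sigma = p
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) - y

# ftol: stop when the relative reduction of the sum of squares falls below this value
# xtol: stop when the relative change of the parameter vector falls below this value
p_opt, ier = leastsq(residuals, [1.0, 0.0, 1.0], args=(x, y),
                     ftol=1e-10, xtol=1e-10)
print(p_opt, ier)
```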
QUESTION
After a lot of research and experimentation I've learned a few things which have led me to re-frame the question. Rather than trying to find an "exponential regression", I'm really trying to optimize a non-linear error function with bounded input and potentially unbounded output.
So long as a function is linear, there exists a way to directly compute the optimal parameters that minimize the squared error terms (by taking the derivative of the function, locating the point where the derivative equals zero, and then using this local minimum as your solution). Many times when people say "exponential regression" they're referring to an equation of the form a * e^(b*x). The reason, described below, is that by taking the natural log of both sides, this maps perfectly onto a linear equation and so can be computed directly in a single step using the same method.
However, in my case the equation a * b^x does not map onto a linear equation, and so there is no direct solution. Instead, a solution must be determined iteratively.
There are a few non-linear curve fitting algorithms out there. Notably Levenberg-Marquardt. I found a handful of implementations of this algorithm:
- C++: https://www.gnu.org/software/gsl/doc/html/nls.html
- Python: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
- JavaScript: https://github.com/mljs/levenberg-marquardt
Unfortunately, I tried all three of these implementations and the curve fitting was just atrocious. I have some sample data with 11,000 points for which I know the optimal parameters are a = 0.08 and b = 1.19; however, these algorithms often returned bizarre results like a = 117, b = 0.000001 or a = 0, b = 3243224.
Next, I tried verifying my understanding of the problem by using Excel. An error function can be defined as sum((y - y')^2), where y' is the estimated value given your parameters and an input x. The problem then reduces to minimizing this error function. I opened up my data (CSV), added a column for the "computed" values, added another column for the squared error terms, and finally used Solver to minimize the sum of the error terms. This worked beautifully! I got back a = 0.0796, b = 1.1897. Plotting two lines on the same graph (original data and estimated data) showed a really good fit.
I tried doing the same using OpenOffice at first; however, the solver built into OpenOffice was just as bad as the Levenberg-Marquardt experiments I did, and repeatedly gave worthless solutions. Even when I set initial values it would "optimize" the problem and come up with something far worse than it started with.
Having proven my concept in Excel, I then tried using optimization-js. I tried both their genetic optimization and Powell optimization (because I don't have a gradient function), and in both cases it produced awful results.
I did find a question regarding how Excel's Solver works which linked to an ugly PDF. I haven't taken the time to read the PDF yet, but it may provide hints for solving the problem manually. I also found a Python example that reportedly implements Generalized Gradient Descent (the same algorithm as Excel), so if I can make sense of it and rewrite it to accept a generic function as input then I may be able to use that.
New Question (given all that): How, preferably in JavaScript (though other languages are acceptable so long as they can be run on AWS Lambda), can I optimize the parameters to the following function to minimize its output?
...ANSWER
Answered 2020-Oct-04 at 05:29:
- My equation is differentiable, I just wasn't sure how to differentiate it. The error function is a sum, so the partial derivatives are just the derivatives of the inside of the sum, which are computed using the chain rule. This means any nonlinear optimizer algorithm that requires a gradient was available to me.
- Many nonlinear optimizers have trouble when very small changes in inputs lead to massive changes in outputs, which can be the case with exponential functions. Tuning the damping parameters or convergence parameters can help with this.
- I was able to get a version of gradient descent to compute the same answer as Excel after some work, but it took 15 seconds to run (vs. Excel running Solver in ~2 seconds) -- clearly my implementation was bad.
More importantly, see https://math.stackexchange.com/a/3850781/209313.
There is no meaningful difference between e^(b*x) and b^x, because b^x == e^(log(b)*x). So we can use a linear regression model and then compute b by taking e to the power of whatever the model spits out.
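As a hedged NumPy sketch of that log-linearization (the original answer uses regression-js; the data here are illustrative): fit log(y) against x with an ordinary linear regression, then recover a and b by exponentiating the intercept and slope.

```python
# Hedged sketch: y = a * b**x  =>  log(y) = log(a) + log(b) * x, which is linear in x.
import numpy as np

x = np.arange(0, 50, dtype=float)
y = 0.08 * 1.19 ** x                           # illustrative data with known a and b

slope, intercept = np.polyfit(x, np.log(y), 1) # ordinary least squares on (x, log y)

a = np.exp(intercept)                          # a = e^intercept
b = np.exp(slope)                              # b = e^slope
print(a, b)                                    # ~0.08 and ~1.19
```

One caveat: this minimizes squared error in log space rather than in the original y space, so points are weighted differently than with sum((y - y')^2); if that matters, the linear fit still makes a good starting point for an iterative refinement.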
Using regression-js:
QUESTION
Some weeks ago I started coding the Levenberg-Marquardt algorithm from scratch in MATLAB. I'm interested in polynomial fitting of the data, but I haven't been able to achieve the level of accuracy I would like. I'm using a fifth-order polynomial, which seemed to be the best option after I tried others. The algorithm always converges to the same minimum no matter what improvements I try to implement. So far, I have unsuccessfully added the following features:
- Geodesic acceleration term as a second order correction
- Delayed gratification for updating the damping parameter
- Gain factor to get closer to the Gauss-Newton direction or the steepest descent direction depending on the iteration.
- Central differences and forward differences for the finite difference method
I don't have experience with nonlinear least squares, so I don't know if there is a way to minimize the residual even more or if there isn't more room for improvement with this method. I attach below an image of the behavior of the polynomial over the last iterations. If I run the code for more iterations, the curve ends up not changing from iteration to iteration. As can be observed, there is a good fit from time = 0 to time = 12, but I'm not able to fix the behavior of the function from time = 12 to time = 20. Any help will be much appreciated.
...ANSWER
Answered 2020-Jun-09 at 15:38: Fitting a polynomial does not seem to be the best idea. Your data set looks like an exponential transient with a horizontal asymptote, and forcing a polynomial onto that will work very poorly.
I'd rather try with a simple model, such as
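As a hedged guess at the kind of model meant (the answer's exact formula is cut off here), an exponential transient approaching a horizontal asymptote can be written y(t) = c + a*(1 - exp(-t/tau)); a minimal scipy sketch with illustrative values:

```python
# Hedged sketch: fit an exponential transient with a horizontal asymptote
# using scipy's LM-based curve_fit (data and parameter values are illustrative).
import numpy as np
from scipy.optimize import curve_fit

def transient(t, c, a, tau):
    return c + a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(3)
t = np.linspace(0, 20, 200)
y = transient(t, 0.5, 3.0, 4.0) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(transient, t, y, p0=[0.0, 1.0, 1.0])
print(popt)                                    # recovered c, a, tau
```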
QUESTION
I would like to minimize a function f over x and y using least squares (Levenberg-Marquardt). In Python I can use lmfit as follows:
ANSWER
Answered 2020-Mar-11 at 14:53: Does it have to be Levenberg-Marquardt? If not, you can get what you want using Optim.jl:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install levenberg-marquardt