levenberg-marquardt | Curve fitting method in JavaScript

by mljs | JavaScript | Version: v4.1.0 | License: MIT

kandi X-RAY | levenberg-marquardt Summary

levenberg-marquardt is a JavaScript library. It has no reported vulnerabilities, a permissive license, and low support activity; however, it has 1 reported bug. You can install it with 'npm i ml-levenberg-marquardt' or download it from GitHub or npm.

Curve fitting method in JavaScript

Support

levenberg-marquardt has a low-activity ecosystem.
It has 65 stars, 14 forks, and 17 watchers.
It had no major release in the last 12 months.
There are 14 open issues and 16 closed issues; on average, issues are closed in 65 days. There are 2 open pull requests and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of levenberg-marquardt is v4.1.0.

Quality

levenberg-marquardt has 1 bug (0 blocker, 0 critical, 0 major, 1 minor) and 2 code smells.

Security

              levenberg-marquardt has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              levenberg-marquardt code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              levenberg-marquardt is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

levenberg-marquardt releases are available to install and integrate.
A deployable package is available on npm.
Installation instructions, examples, and code snippets are available.
It has 877 lines of code, 0 functions, and 20 files.
It has low code complexity; code complexity directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed levenberg-marquardt and discovered the below as its top functions. This is intended to give you an instant insight into the functionality levenberg-marquardt implements, and to help you decide whether it suits your requirements.
• Run the Levenberg-Marquardt algorithm.
• Matrix function.

            levenberg-marquardt Key Features

            No Key Features are available at this moment for levenberg-marquardt.

            levenberg-marquardt Examples and Code Snippets

            No Code Snippets are available at this moment for levenberg-marquardt.

            Community Discussions

            QUESTION

            How does gtol parameter work in scipy.optimize.curve_fit
            Asked 2022-Jan-03 at 12:12

I am trying to fit a Gaussian model onto Gaussian-distributed data (x, y) using scipy's curve_fit. I am trying to tweak the parameters of the fitting in order to get a better fit. I saw that curve_fit calls scipy.optimize.leastsq with the method LM (Levenberg-Marquardt method). It seems to me that it constructs a function that evaluates the least-squares criterion at each data point. In my example, I have 8 data points. In my understanding, and according to scipy's documentation, gtol is the "Orthogonality desired between the function vector and the columns of the Jacobian."

            ...

            ANSWER

            Answered 2021-Dec-23 at 11:46

            From scipy/optimize/minpack/lmder.f, we find a more detailed description

            Source https://stackoverflow.com/questions/70461080

            QUESTION

            Fixing parameters of a fitting function in Nonlinear Least-Square GSL
            Asked 2021-Sep-29 at 16:32

I'm writing some code that uses the GNU Scientific Library (GSL)'s nonlinear least-squares algorithm for curve fitting.

I have been successful in obtaining working code that estimates the right parameters from the fitting analysis, using a C++ wrapper from https://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp.

Now, I would like to fix some of the parameters of the function being fit, and I would like to modify the function so that I can pass in the value of the parameter to be fixed.

Any idea on how to do this? I'm showing the full code here.

            This is the code for performing nonlinear least-squares fitting:

            ...

            ANSWER

            Answered 2021-Sep-29 at 16:32

Ok. Here's the answer, based on the code linked in http://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp. However, this is not the code posted in the question: you should update your question accordingly so that others may benefit from both the question and the answer.

So, basically, the main problem is that GSL is a library written in pure C, whereas you use a high-level wrapper written in C++, published at the aforementioned link. While the wrapper is written pretty well in modern C++, it has one basic problem: it is "stiff" - it can be used only for the subclass of problems it was designed for, and this subclass is a rather narrow subset of the capabilities offered by the original C code.

            Let's try to improve it a bit and start from how the wrapper is supposed to be used:

            Source https://stackoverflow.com/questions/69226371

            QUESTION

Scaling error and information criteria in lmfit
            Asked 2021-Jun-12 at 02:48

I am using lmfit for solving a non-linear optimization problem. It works fine up to the point where I try to implement a measurement error as the standard deviation of the dependent variable y (sigma_y). My problem is that I cannot interpret the information criteria properly. When implementing return (model - y)/(sigma_y), they jump from really low to very high values.

            i.e. [left: return (model - y) -weighting-> right: return (model - y)/(sigma_y)]:

My guess is that this is somehow connected to bad usage of lmfit (wrong calculation of the information criteria, bad error scaling) or to a general lack of understanding of these criteria (to me, a reduced chi-square of 5.258 (under-estimated) or 1.776e-4 (over-estimated) sounds like a really poor fit, but the plot of residuals etc. looks okay to me...)

            Here is my example code that reproduces the problem:

            ...

            ANSWER

            Answered 2021-Jun-12 at 02:48

Well, in order for the magnitude of chi-square to be meaningful (for example, for it to be around (N_data - N_varys)), the scale of the uncertainties has to be correct -- giving the 1-sigma standard deviation for each observation.

            It's not really possible for lmfit to detect whether the sigma you use has the right scale.

            Source https://stackoverflow.com/questions/67936669
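For reference, the reduced chi-square discussed above is just the weighted sum of squared residuals divided by the degrees of freedom. A small JavaScript sketch (a generic helper written for illustration, not part of lmfit or ml-levenberg-marquardt):

```javascript
// Reduced chi-square: sum(((y - model) / sigma)^2) / (N_data - N_varys).
// Values near 1 mean the residuals are consistent with the stated 1-sigma
// uncertainties; much larger values suggest underestimated errors (or a poor
// model), much smaller values suggest overestimated errors.
function reducedChiSquare(y, model, sigma, nVarys) {
  let chi2 = 0;
  for (let i = 0; i < y.length; i++) {
    const r = (y[i] - model[i]) / sigma[i];
    chi2 += r * r;
  }
  return chi2 / (y.length - nVarys);
}
```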

            QUESTION

            Parameter boundaries using Eigen's Levenberg-Marquardt
            Asked 2021-Jun-04 at 08:42

            I'm using Eigen's Levenberg-Marquardt implementation and wondering how to set some boundaries on the parameters which should be optimized.

As I'm migrating some GNU Octave programs to Eigen, I expected that there might be some boundaries which could easily be provided as parameters to the module.

The layout of my implementation is nearly the same as in this example. I'm not providing the df() implementation, but rather use Eigen::NumericalDiff in order to approximate it.

So how do I enforce some boundaries on the parameters which are supplied to minimize()? I thought about setting the errors (fvec) in operator() to some high values when the parameters leave my expected ranges, but in some small tests this led to strange results.

            ...

            ANSWER

            Answered 2021-Jun-04 at 08:42

            I found a solution which is at least working for me.

The idea is to increase the error vector once the parameters leave their sanity bounds.

            This can be achieved by the following function:

            Source https://stackoverflow.com/questions/67734419
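The same penalty idea can be sketched outside of Eigen. Below is a hypothetical JavaScript wrapper (not the function from the answer) written for a model builder in the style of ml-levenberg-marquardt, i.e. an array of parameters mapped to a function of x; whenever a parameter leaves its allowed range, the model returns huge values so the squared error explodes and the optimizer is pushed back inside the box:

```javascript
// bounds is an array of { min, max } objects, one per parameter.
// modelBuilder takes a parameter array and returns a function of x.
function withBounds(modelBuilder, bounds) {
  return (params) => {
    const outOfBounds = params.some(
      (p, i) => p < bounds[i].min || p > bounds[i].max
    );
    // A large constant output acts as a soft penalty on out-of-range parameters.
    return outOfBounds ? () => 1e12 : modelBuilder(params);
  };
}
```

As with the Eigen case, this is a soft constraint: the optimizer can still probe outside the box, it just gets heavily penalized for staying there.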

            QUESTION

            What is the default settings (e.g. hyperparameters) for MatLab's feedforwardnet?
            Asked 2021-Apr-20 at 11:24

            My code is very simple for one layer of 20 neurons:

            ...

            ANSWER

            Answered 2021-Apr-20 at 11:24

According to the documentation for feedforwardnet, the default setting for this function is to train with Levenberg-Marquardt backpropagation, a.k.a. damped least squares -- the feedforwardnet(20, 'trainlm') option.

            As for the data split, the default seems to be a random 0.7-0.15-0.15 train-validation-test split, using the dividerand function.

            From the trainlm page:

            trainlm is a network training function that updates weight and bias values according to Levenberg-Marquardt optimization. trainlm is often the fastest backpropagation algorithm in the toolbox, and is highly recommended as a first-choice supervised algorithm, although it does require more memory than other algorithms. Training occurs according to trainlm training parameters, shown here with their default values:

            • net.trainParam.epochs — Maximum number of epochs to train. The default value is 1000.
            • net.trainParam.goal — Performance goal. The default value is 0.
            • net.trainParam.max_fail — Maximum validation failures. The default value is 6.
            • net.trainParam.min_grad — Minimum performance gradient. The default value is 1e-7.
            • net.trainParam.mu — Initial mu. The default value is 0.001.
            • net.trainParam.mu_dec — Decrease factor for mu. The default value is 0.1.
            • net.trainParam.mu_inc — Increase factor for mu. The default value is 10.
            • net.trainParam.mu_max — Maximum value for mu. The default value is 1e10.
            • net.trainParam.show — Epochs between displays (NaN for no displays). The default value is 25.
            • net.trainParam.showCommandLine — Generate command-line output. The default value is false.
            • net.trainParam.showWindow — Show training GUI. The default value is true.
            • net.trainParam.time — Maximum time to train in seconds. The default value is inf.

            Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve or remains the same for max_fail epochs in a row. Test vectors are used as a further check that the network is generalizing well, but do not have any effect on training.

            From Divide Data for Optimal Neural Network Training:

            MATLAB provides 4 built-in functions for splitting data:

            1. Divide the data randomly (default) - dividerand
            2. Divide the data into contiguous blocks - divideblock
            3. Divide the data using an interleaved selection - divideint
            4. Divide the data by index - divideind

            You can access or change the division function for your network with this property:

            net.divideFcn

            Each of the division functions takes parameters that customize its behavior. These values are stored and can be changed with the following network property:

            net.divideParam

            Source https://stackoverflow.com/questions/67176333

            QUESTION

            How to calculate "relative error in the sum of squares" and "relative error in the approximate solution" from least squares method?
            Asked 2021-Mar-10 at 11:30

I have implemented a 3D Gaussian fit using scipy.optimize.leastsq and now I would like to tweak the arguments ftol and xtol to optimize performance. However, I don't understand the "units" of these two parameters in order to make a proper choice. Is it possible to calculate these two parameters from the results? That would give me an understanding of how to choose them. My data consists of numpy arrays of np.uint8. I tried to read the FORTRAN source code of MINPACK, but my FORTRAN knowledge is zero. I also checked the Levenberg-Marquardt algorithm, but I could not really work out a number that would fall below ftol, for example.

            Here is a minimal example of what I do:

            ...

            ANSWER

            Answered 2021-Mar-10 at 11:30

Since you are giving a function without the gradient, the method called is lmdif. Instead of gradients, it will use a forward-difference gradient estimate, f(x + delta) - f(x) ≈ delta * df(x)/dx (written as if the parameter were a scalar).

            There you find the following description

            Source https://stackoverflow.com/questions/66494932
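The forward-difference estimate mentioned above can be sketched in a couple of lines of JavaScript (a generic illustration, not scipy's lmdif code):

```javascript
// Forward difference: df/dx at x is approximated by
// (f(x + delta) - f(x)) / delta for a small step delta.
function forwardDifference(f, x, delta = 1e-6) {
  return (f(x + delta) - f(x)) / delta;
}

// Example: the derivative of x^2 at x = 3 is 6.
console.log(forwardDifference((x) => x * x, 3)); // ~6.000001
```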

            QUESTION

            Performing an exponential regression in JavaScript
            Asked 2020-Oct-04 at 05:29
            Important Update:

            After a lot of research and experimentation I've learned a few things which have led me to re-frame the question. Rather than trying to find an "exponential regression", I'm really trying to optimize a non-linear error function with bounded input and potentially unbounded output.

So long as a function is linear, there is a way to directly compute the optimal parameters that minimize the squared error terms (by taking the derivative of the function, locating the point where the derivative equals zero, and using this local minimum as your solution). Many times when people say "exponential regression" they're referring to an equation of the form a * e^(b*x). The reason, described below, is that by taking the natural log of both sides this maps perfectly onto a linear equation, and so it can be computed directly in a single step using the same method.

            However in my case the equation a * b^x does not map onto a linear equation, and so there is no direct solution. Instead a solution must be determined iteratively.

            There are a few non-linear curve fitting algorithms out there. Notably Levenberg-Marquardt. I found a handful of implementations of this algorithm:

Unfortunately I tried all three of these implementations and the curve fitting was just atrocious. I have some sample data with 11,000 points for which I know the optimal parameters are a = 0.08 and b = 1.19; however, these algorithms often returned bizarre results like a = 117, b = 0.000001 or a = 0, b = 3243224.

Next I tried verifying my understanding of the problem by using Excel. An error function can be defined as sum((y - y')^2), where y' is the estimated value given your parameters and an input x. The problem then falls to minimizing this error function. I opened up my data (CSV), added a column for the "computed" values, added another column for the squared error terms, and finally used Solver to optimize the sum of the error terms. This worked beautifully! I got back a = 0.0796, b = 1.1897. Plotting two lines on the same graph (original data and estimated data) showed a really good fit.

I tried doing the same in OpenOffice at first, but the solver built into OpenOffice was just as bad as the Levenberg-Marquardt experiments I did, and repeatedly gave worthless solutions. Even when I set initial values, it would "optimize" the problem and come up with something far worse than where it started.

Having proven my concept in Excel, I then tried using optimization-js. I tried both their genetic optimization and Powell optimization (because I don't have a gradient function), and in both cases it produced awful results.

            I did find a question regarding how Excel's Solver works which linked to an ugly PDF. I haven't taken the time to read the PDF yet, but it may provide hints for solving the problem manually. I also found a Python example that reportedly implements Generalized Gradient Descent (the same algorithm as Excel), so if I can make sense of it and rewrite it to accept a generic function as input then I may be able to use that.

            New Question (given all that):

            How, preferably in JavaScript (though other languages are acceptable so long as they can be run on AWS Lambda), can I optimize the parameters to the following function to minimize its output?

            ...

            ANSWER

            Answered 2020-Oct-04 at 05:29
            First and foremost:
• My equation is differentiable, I just wasn't sure how to differentiate it. The error function is a sum, so the partial derivatives are just the derivatives of the inside of the sum, which are computed using the chain rule. This means any nonlinear optimizer algorithm that requires a gradient was available to me.
• Many nonlinear optimizers have trouble when very small changes in inputs lead to massive changes in outputs, which can be the case with exponential functions. Tuning the damping parameters or convergence parameters can help with this.

I was able to get a version of gradient descent to compute the same answer as Excel after some work, but it took 15 seconds to run (vs. Excel running Solver in ~2 seconds) -- clearly my implementation was bad.

            More Importantly

            See: https://math.stackexchange.com/a/3850781/209313

There is no meaningful difference between e^(bx) and b^x, because b^x == e^(log(b)*x). So we can use a linear regression model and then compute b by taking e to the power of whatever coefficient the model spits out.

            Using regression-js:

            Source https://stackoverflow.com/questions/64141182
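The linearization itself fits in a few lines of plain JavaScript; this is a sketch of the idea rather than the regression-js code referenced in the answer:

```javascript
// Fit y = a * b^x by linearizing: ln(y) = ln(a) + ln(b) * x, then doing an
// ordinary linear least-squares fit of ln(y) against x. Requires all y > 0.
function fitExponential(xs, ys) {
  const n = xs.length;
  const ly = ys.map(Math.log);
  const sumX = xs.reduce((s, v) => s + v, 0);
  const sumY = ly.reduce((s, v) => s + v, 0);
  const sumXX = xs.reduce((s, v) => s + v * v, 0);
  const sumXY = xs.reduce((s, v, i) => s + v * ly[i], 0);
  const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  const intercept = (sumY - slope * sumX) / n;
  return { a: Math.exp(intercept), b: Math.exp(slope) };
}
```

One caveat: least squares in log space weights the points differently from least squares on the raw data, which is why an iterative fit such as Levenberg-Marquardt can still return slightly different parameters.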

            QUESTION

            How to improve Levenberg-Marquardt's method for polynomial curve fitting?
            Asked 2020-Jun-26 at 10:41

Some weeks ago I started coding the Levenberg-Marquardt algorithm from scratch in Matlab. I'm interested in polynomial fitting of the data, but I haven't been able to achieve the level of accuracy I would like. I'm using a fifth-order polynomial; after trying other polynomials, it seemed to be the best option. The algorithm always converges to the same minimum no matter what improvements I try to implement. So far, I have unsuccessfully added the following features:

            • Geodesic acceleration term as a second order correction
            • Delayed gratification for updating the damping parameter
            • Gain factor to get closer to the Gauss-Newton direction or the steepest descent direction depending on the iteration.
            • Central differences and forward differences for the finite difference method

I don't have experience in nonlinear least squares, so I don't know if there is a way to minimize the residual even more or if there isn't more room for improvement with this method. I attach below an image of the behavior of the polynomial for the last iterations. If I run the code for more iterations, the curve ends up not changing from iteration to iteration. As can be observed, there is a good fit from time = 0 to time = 12, but I'm not able to fix the behavior of the function from time = 12 to time = 20. Any help will be greatly appreciated.

            ...

            ANSWER

            Answered 2020-Jun-09 at 15:38

Fitting a polynomial does not seem to be the best idea. Your data set looks like an exponential transient with a horizontal asymptote. Forcing a polynomial onto that will work very poorly.

            I'd rather try with a simple model, such as

            Source https://stackoverflow.com/questions/62231658
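As an illustration of that suggestion, a saturating exponential needs only three parameters instead of a fifth-order polynomial's six. The specific form below, y = a * (1 - e^(-b*t)) + c, is one plausible choice rather than the answer's exact model, written as a model builder that an iterative fitter such as ml-levenberg-marquardt can consume:

```javascript
// A simple exponential-transient model with a horizontal asymptote at a + c.
function transientModel([a, b, c]) {
  return (t) => a * (1 - Math.exp(-b * t)) + c;
}
```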

            QUESTION

            Julia - Equivalent of Python lmfit
            Asked 2020-Mar-11 at 14:53

I would like to minimize x and y in the function f using least squares (Levenberg-Marquardt). In Python I can use lmfit as follows:

            ...

            ANSWER

            Answered 2020-Mar-11 at 14:53

            Does it have to be Levenberg-Marquardt? If not, you can get what you want using Optim.jl:

            Source https://stackoverflow.com/questions/60634803

Community discussions and code snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install levenberg-marquardt

You can install it with 'npm i ml-levenberg-marquardt' or download it from GitHub or npm.
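A minimal usage sketch, following the project's README on GitHub; the exact option names and whether the function is a named or default export have varied between major versions, so check the documentation of the installed version:

```javascript
import { levenbergMarquardt } from 'ml-levenberg-marquardt';

// The model is built from an array of parameters and returns a function of x.
function sineModel([a, b]) {
  return (x) => a * Math.sin(b * x);
}

// Noisy samples of roughly 2 * sin(x).
const data = {
  x: [0, 1, 2, 3, 4, 5, 6],
  y: [0, 1.7, 1.8, 0.3, -1.5, -1.9, -0.6],
};

const options = {
  damping: 1.5,          // initial Marquardt damping factor
  initialValues: [1, 1], // starting guess for [a, b]
  maxIterations: 100,
  errorTolerance: 1e-3,
};

const { parameterValues, parameterError, iterations } =
  levenbergMarquardt(data, sineModel, options);

console.log(parameterValues); // should end up close to [2, 1]
```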

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have questions, check and ask on the Stack Overflow community page.


Consider Popular JavaScript Libraries

• freeCodeCamp by freeCodeCamp
• vue by vuejs
• react by facebook
• bootstrap by twbs

Try Top Libraries by mljs

• ml (JavaScript)
• matrix (JavaScript)
• knn (JavaScript)
• pca (JavaScript)
• libsvm (JavaScript)