py-optim | Gradient-based optimization algorithms in Python | Computer Vision library

 by schaul | Python Version: Current | License: No License

kandi X-RAY | py-optim Summary

py-optim is a Python library typically used in Artificial Intelligence and Computer Vision applications. py-optim has no reported bugs or vulnerabilities, but it has low support and no build file is available. You can download it from GitHub.

A collection of (stochastic) gradient descent algorithms with a unified interface.
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              py-optim has a low active ecosystem.
              It has 51 star(s) with 19 fork(s). There are 5 watchers for this library.
              It had no major release in the last 6 months.
              There is 1 open issue and 0 closed issues. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of py-optim is current.

            kandi-Quality Quality

              py-optim has 0 bugs and 0 code smells.

            kandi-Security Security

              py-optim has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              py-optim code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              py-optim does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              py-optim releases are not available. You will need to build from source code and install.
              py-optim has no build file. You will need to create the build yourself to build the component from source.
              py-optim saves you 606 person hours of effort in developing the same functionality from scratch.
              It has 1412 lines of code, 175 functions and 31 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed py-optim and discovered the below as its top functions. This is intended to give you an instant insight into py-optim implemented functionality, and help decide if they suit your requirements.
            • Provide a new sampling
            • Return a new Sample
            • Provide the estimator
            • Get the index of the dataset
            • Calculate the current loss function
            • Forward backward pass
            • Reset the dataset
            • Calculate loss function
            • Calculate the gradient of the current gradients
            • Returns the gradient of the gradients
            • Provide samples
            • Registers oracle
            • Calculate the noise level
            • Runs a random jump
            • Initialize the parametric initialization
            • Initialize self acc_grad_var
            • Compute the statistics
            • Compute statistics
            • Returns the current gradient of the current gradient function
            • Calculates the current loss function

            py-optim Key Features

            No Key Features are available at this moment for py-optim.

            py-optim Examples and Code Snippets

            No Code Snippets are available at this moment for py-optim.

            Community Discussions

            QUESTION

            Scipy curve fit (optimization) - vectorizing a conditional to identify threshold using a custom function
            Asked 2021-May-09 at 18:40

            I'm trying to use scipy curve_fit to capture the value of a0 parameter. As of now, it is not changing (always comes out as 1):

            ...

            ANSWER

            Answered 2021-Apr-26 at 04:27

            Updated!

            This should work. Note that the variable a in this example is a vector of length 3, because it is computed by the element-wise multiplication of the first and second rows of X, which is a 2x3 matrix. Therefore a0 can be either a scalar or a vector of length 3, and c can likewise be a scalar or a vector of length 3.
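The original snippet is elided above; a minimal reconstruction of the setup it describes (the model body, the data values, and the starting guesses are assumptions inferred from the description, not the original code) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, a0, c):
    # a is the element-wise product of the first and second rows of X,
    # so it has length 3 when X is a 2x3 matrix; a0 and c may be
    # scalars or length-3 vectors
    a = X[0] * X[1]
    return a0 * a + c

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
y = model(X, 2.5, 1.0)                 # synthetic data with a0=2.5, c=1.0

# an explicit p0 ensures a0 is actually varied instead of sitting at 1
popt, _ = curve_fit(model, X, y, p0=[1.0, 0.0])
```

With an explicit p0 and a model that really depends on a0, the fitted a0 moves away from its initial value of 1.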

            Source https://stackoverflow.com/questions/67260288

            QUESTION

            Multivariate second order polynomial regression python
            Asked 2021-May-06 at 17:53

            I am dealing with multivariate regression problems. My dataset is something like X = (nsample, nx) and Y = (nsample, ny). nx and ny may vary between datasets and cases to study, so they should be kept general in the code.

            I would like to determine the coefficients for the multivariate polynomial regression minimizing the root mean square error. I thought to split the problem into ny different regressions, so for each of them my dataset is X = (nsample, nx) and Y = (nsample, 1). So, for each dependent variable (Uj) the second-order polynomial has the following form:

            I coded the function in python as:

            ...

            ANSWER

            Answered 2021-Apr-14 at 22:30

            Minimizing error is a huge, complex problem. As such, a lot of very clever people have thought up a lot of cool solutions. Here are a few:

            (out of all of them, I think bayesian optimization with sklearn might be a good choice for your use case, though I've never used it)


            Random approaches:
            • genetic algorithms: formats your problem like chromosomes in a genome and "breeds" an optimal solution (a personal favorite of mine)

            • simulated annealing: formats your problem like hot metal being annealed, which attempts to move to a stable state while losing heat

            • random search: better than it sounds. Randomly tests a variety of input values.

            • Grid Search: simple to implement, but often less effective than methods that employ true randomness (it duplicates exploration along particular axes of interest, which often wastes computational resources)

            A lot of these come up in hyperparameter optimization for ML models.

            More Prescriptive Approaches:
            • Gradient Descent: uses the gradient calculated in a differentiable function to step toward local minima

            • scipy.optimize.minimize: I know you're already using this, but there are 15 different algorithms that can be used by changing the method flag.
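For instance, the same problem definition can be handed to several of minimize's algorithms just by changing the method flag (the Rosenbrock function here is a standard illustrative test case, not taken from the question):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # classic Rosenbrock test function; global minimum at x = [1, 1]
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

x0 = np.array([-1.2, 1.0])
for method in ["Nelder-Mead", "Powell", "BFGS", "L-BFGS-B"]:
    res = minimize(rosen, x0, method=method)
    print(method, np.round(res.x, 4))
```

Comparing a few methods like this is a cheap way to see which family of algorithms suits a given error surface.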
            The rub

            While error minimization is conceptually simple, in practice complex error topologies in high-dimensional spaces can be very difficult to traverse efficiently. This touches on local and global extrema, the explore/exploit problem, and our mathematical understanding of what computational complexity even is. Often, good error reduction is accomplished through a combination of a thorough understanding of the problem and experimentation with multiple algorithms and hyperparameters. In ML, this is often referred to as hyperparameter tuning, and is a sort of "meta" error-reduction step, if you will.

            note: feel free to recommend more optimization methods, I'll add them to the list.

            Source https://stackoverflow.com/questions/67061395

            QUESTION

            scipy curve_fit with constraint and fixing points
            Asked 2021-Apr-23 at 07:59

            I'm trying to fit a function using SciPy's optimize.curve_fit to some scattered data, but I need the area under the fitted curve to be the same as that calculated based on the scattered data, and also that the curve passes through the initial and end points of the data. In order to do that, I am using the area (integral) defined by the scattered data in a penalization formulation as in this answer, while weighing the fit with the parameter sigma as proposed here.

            Unfortunately, I can't get my fit to pass through the initial and end points when including the integral constraint. If I disregard the integral constraint, the fit works fine and passes through the points. Is it not possible to satisfy both the integral and point constraints? I am using Python 3.7.10 on Windows 10.

            ...

            ANSWER

            Answered 2021-Apr-23 at 07:59

            Many thanks to Erwin Kalvelagen for the mind-opening comment on the question. I am posting my solution here:
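The posted solution itself is elided above; a hedged sketch of the general penalty-plus-sigma-weighting idea the question describes (the quadratic model, the data, and the weight values are all illustrative assumptions) could look like:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

x = np.linspace(0, 1, 20)
y = 2 * x**2 + 0.5                       # illustrative "scattered" data
area = trapezoid(y, x)                   # target area under the data

def fitfunc(x, a, b):
    return a * x**2 + b

def penalized(x, a, b):
    # ordinary model values plus one extra pseudo-observation:
    # the area under the fitted curve
    yfit = fitfunc(x, a, b)
    return np.append(yfit, trapezoid(yfit, x))

# very small sigma forces the fit through the first/last data points
# and makes the area pseudo-observation nearly hard
sigma = np.ones(len(x) + 1)
sigma[[0, len(x) - 1, -1]] = 1e-4
popt, _ = curve_fit(penalized, x, np.append(y, area), sigma=sigma)
```

Because sigma enters curve_fit as a per-point weight, shrinking it on the endpoints and on the area pseudo-point is what lets both constraints be satisfied at once.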

            Source https://stackoverflow.com/questions/67162588

            QUESTION

            How to set up GEKKO for parameter estimation from multiple independent sets of data?
            Asked 2021-Jan-18 at 15:50

            I am learning how to use GEKKO for kinetic parameter estimation based on laboratory batch reactor data, which essentially consists of the concentration profiles of three species A, C, and P. For the purposes of my question, I am using a model that I previously featured in a question related to parameter estimation from a single data set.

            My ultimate goal is to be able to use multiple experimental runs for parameter estimation, leveraging data that may be collected at different temperatures, species concentrations, etc. Due to the independent nature of individual batch reactor experiments, each data set features samples collected at different time points. These different time points (and in the future, different temperatures for instance) are difficult for me to implement into a GEKKO model, as I previously used the experimental data collection time points as the m.time parameter for the GEKKO model. (See end of post for code) I have solved problems like this in the past with gPROMS and Athena Visual Studio.

            To illustrate my problem, I generated an artificial data set of 'experimental' data from my original model by introducing noise to the species concentration profiles, and shifting the experimental time points slightly. I then combined all data sets of the same experimental species into new arrays featuring multiple columns. My thought process here was that GEKKO would carry out the parameter estimation by using the experimental data of each corresponding column of the arrays, so that times_comb[:,0] would be related to A_comb[:,0] while times_comb[:,1] would be related to A_comb[:,1].

            When I attempt to run the GEKKO model, the system does obtain a solution for the parameter estimation, but it is unclear to me if the problem solution is reasonable, as I notice that the GEKKO Variables A, B, C, and P are 34 element vectors, which is double the elements in each of the experimental data sets. I presume GEKKO is somehow combining both columns of the time and Parameter vectors during model setup that leads to those 34 element variables? I am also concerned that during this combination of the columns of each input parameter, that the relationship between a certain time point and the collected species information is lost.

            How could I improve the use of multiple data sets that GEKKO can simultaneously use for parameter estimation, with the consideration that the time points of each data set may be different? I looked on the GEKKO documentation examples as well as the APMonitor website, but I could not find examples featuring multiple data sets that I could use for guidance, as I am fairly new to the GEKKO package.

            Thank you for your time reading my question and for any help/ideas you may have.

            Code below:

            ...

            ANSWER

            Answered 2021-Jan-15 at 03:57

            To have multiple data sets with different times and data points, you can join the data sets as a pandas dataframe. Here is a simple example:
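The answer's example itself is elided above; the pandas part of the idea, joining runs sampled at different time points into one frame, can be sketched as follows (the run data here is invented for illustration):

```python
import pandas as pd

# two "experiments" sampled at different time points (illustrative values)
run1 = pd.DataFrame({"time": [0.0, 1.0, 2.5], "A": [1.00, 0.60, 0.25]})
run2 = pd.DataFrame({"time": [0.0, 0.8, 1.7, 3.0], "A": [0.95, 0.70, 0.40, 0.10]})

# concatenate, sort by time, and drop duplicate time points so the
# combined frame can serve as a single time horizon for the model
data = (pd.concat([run1, run2])
          .sort_values("time")
          .drop_duplicates(subset="time")
          .reset_index(drop=True))
print(data)
```

Each measurement then stays attached to its own time point, which addresses the concern about the time/concentration relationship being lost when columns are combined.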

            Source https://stackoverflow.com/questions/65695486

            QUESTION

            Scipy Curve Fit: "Result from function call is not a proper array of floats."
            Asked 2021-Jan-06 at 00:41

            I am trying to fit a 2D Gaussian with an offset to a 2D array. The code is based on this thread here (which was written for Python2 while I am using Python3, therefore some changes were necessary to make it run somewhat):

            ...

            ANSWER

            Answered 2021-Jan-06 at 00:41

            data_array is (2, 2400, 2400) float64 (from added print)

            testmap is (2400, 2400) float64 (again a diagnostic print)

            curve_fit docs talk about M length or (k,M) arrays.

            You are providing (2,N,N) and (N,N) shape arrays.

            Lets try flattening the N,N dimensions:

            In the objective function:

            Source https://stackoverflow.com/questions/65587542

            QUESTION

            Optimisation of a numerical model with several data sets (scipy.minimize / scipy.optimise, pymoo or ??)
            Asked 2020-Oct-29 at 08:07

            So I have a problem and I'm a little bit lost at this point, so any input would be greatly appreciated, as I'm really struggling right now ^^!

            I have a model I want to check / optimise using some experimental data I got.

            Generally speaking, my model takes two inputs (let's say: time and temperature) and has 8 variables (x0-x7). The model generates two outputs (out1 and out2).

            Each set of my experimental data gives me 4 sets of information I can use for my optimisation: 2 inputs (time and temperature) and 2 experimental results (result1 and result2).

            Ultimately I want to minimize the difference between result1 & out1 and result2 & out2. So basically minimizing two residuals with several sets of data which are affected by 8 parameters which they all have in common (x0-x7).

            I have some bounds for the parameters x0-x7 which can help, but besides that no real constraints.

            So far I have tried using scipy.minimize with an iteration through my experimental result datasets like so (very schematic):

            ...

            ANSWER

            Answered 2020-Oct-29 at 08:07

            The basic idea of a shared object function is fine. I don't really go into details of the OP attempts, as this might be misleading. The process would be to define a proper residual function that can be used in a least square fit. There are several possibilities in Python to do that. I'll show scipy.optimize.leastsq and the closely related scipy.optimize.least_squares.
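A hedged sketch of that shared residual-function pattern with scipy.optimize.least_squares (the exponential model and the data sets are illustrative assumptions, not the OP's model):

```python
import numpy as np
from scipy.optimize import least_squares

# two illustrative data sets generated from the same shared parameters
t1 = np.linspace(0, 2, 15); y1 = 1.3 * np.exp(-0.2 * t1)
t2 = np.linspace(0, 3, 20); y2 = 1.3 * np.exp(-0.2 * t2)

def residuals(p, datasets):
    k, b = p
    # stack the residuals of every data set into one flat vector;
    # all sets are fitted simultaneously with the same parameters
    return np.concatenate([y - k * np.exp(-b * t) for t, y in datasets])

fit = least_squares(residuals, x0=[1.0, 0.1], args=([(t1, y1), (t2, y2)],))
```

Because every data set contributes to the same residual vector, the solver finds one parameter set that is optimal across all experiments rather than iterating over them one by one.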

            Source https://stackoverflow.com/questions/64558862

            QUESTION

            Joint-fit using curve_fit with multiple equations and inputs - Is it even possible?
            Asked 2020-Sep-03 at 16:05

            I asked similar questions in January and April that @Miłosz Wieczór and @Joe were kind enough to show interest in. Now, I am facing a similar but different problem because I need to do a joint-fit with multiple equations and inputs in order to get the universal solutions for two parameters fc and alpha. My code (which is based on the answers from the previous questions) is as follows:

            ...

            ANSWER

            Answered 2020-Aug-25 at 14:05

            It might be easier to use least_squares directly. It takes a vector of residuals, where you simply specify the l.h.s. of your two equations

            Source https://stackoverflow.com/questions/63577841

            QUESTION

            How to use scipy.optimize.fmin with a vector instead of a scalar
            Asked 2020-Jun-12 at 11:03

            When using Scipy's fmin function, I keep encountering the error message: ValueError: setting an array element with a sequence. I have seen that this question has been asked several times already, and I have read interesting posts such as:

            ..and have tried implementing the suggested solutions, such as adding '*args' to the cost function, appending the variables in the cost function to a list and vectorizing the variables. But nothing has worked for me so far.

            I am quite new to programming in Python, so it is possible that I have read the solution and not known how to apply it.

            A simplified version of the code, which I used to try to find the problem, is as follows:

            ...

            ANSWER

            Answered 2020-Jun-12 at 11:03

            As noted in the comments, your function must return a single value. Assuming that you want to perform a classic least squares fit, you could modify func to return just that:
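The modified function itself is elided above; a minimal sketch of the pattern (the linear model and data are invented placeholders) could look like:

```python
import numpy as np
from scipy.optimize import fmin

xdata = np.linspace(0, 1, 30)
ydata = 3.0 * xdata + 0.5               # illustrative noise-free data

def func(params):
    a, b = params
    # fmin minimizes a SCALAR, so return the sum of squared residuals
    # instead of the residual vector itself
    return np.sum((ydata - (a * xdata + b))**2)

best = fmin(func, x0=[1.0, 0.0], disp=False)
```

Returning the residual vector directly is what triggers the "setting an array element with a sequence" error; collapsing it to a scalar cost resolves it.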

            Source https://stackoverflow.com/questions/62324174

            QUESTION

            Scipy Optimize CurveFit calculates wrong values
            Asked 2020-Apr-15 at 08:50

            I am interested in the phase shift between two sine-type waves. To find it, I am trying to fit each wave with scipy.curve_fit. I have been following this post. However, I obtain negative amplitudes, and the phase shift sometimes appears shifted forward by pi radians.

            The code that I am using is that one below:

            ...

            ANSWER

            Answered 2020-Apr-15 at 08:50

            There are a few issues here that I do not understand:

            1. There is no need to define the fit function inside the "fit function"
            2. There is no need to define it twice if the only difference is the naming of the dictionary. (While I do not understand why this has to be named differently in the first place)
            3. One could directly fit the frequency instead of omega
            4. When pre-calculating the fitted values, directly use the given fitfunction

            Overall I don't see why the second fit should fail, and using some generic data here, it doesn't. Considering that in physics an amplitude can be complex, I don't have a problem with a negative result. Nevertheless, I understand the point in the OP. Surely, a fit algorithm does not know about physics, and, mathematically, there is no problem with the amplitude being negative; this just gives an additional phase shift of pi. Hence, one can easily force positive amplitudes while taking care of the required phase shift. I introduced this here as a possible keyword argument. Moreover, I reduced this to one fit function with possible "renaming" of the output dictionary keys as a keyword argument.
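The answer's actual code is not reproduced above; the positive-amplitude idea can be sketched as follows (function names, the keyword argument, and the signal values are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, A, f, phi, off):
    return A * np.sin(2 * np.pi * f * t + phi) + off

t = np.linspace(0, 2, 200)
y = sine(t, 1.5, 2.0, 0.7, 0.1)          # illustrative signal

def fit_sine(t, y, force_positive=True):
    popt, _ = curve_fit(sine, t, y, p0=[1, 2, 0, 0])
    A, f, phi, off = popt
    if force_positive and A < 0:
        # a negative amplitude is the same wave shifted by pi
        A, phi = -A, phi + np.pi
    return A, f, phi % (2 * np.pi), off

A, f, phi, off = fit_sine(t, y)
```

Flipping the sign of A and adding pi to the phase leaves the fitted curve unchanged, which is why the amplitude can always be reported as positive.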

            Source https://stackoverflow.com/questions/61168646

            QUESTION

            Bisect with discontinuous monotonous function: Find root using bisection for weakly monotonous function allowing for jumps
            Asked 2020-Mar-22 at 21:02

            I'm looking for a Python algorithm to find the root of a function f(x) using bisection, equivalent to scipy.optimize.bisect, but allowing for discontinuities (jumps) in f. The function f is weakly monotonous.

            It would be nice but not necessary for the algorithm to flag if the crossing (root) is directly 'at' a jump, and in this case to return the exact value x at which the relevant jump occurs (i.e. say the x for which sign(f(x-e)) != sign(f(x+e)) and abs(f(x-e)-f(x+e)>a for infinitesimal e>0 and non-infinitesimal a>0). It is also okay if instead the algorithm, for example, simply returns an x within a certain tolerance in this case.

            As the function is only weakly monotonous, it can have flat areas, and theoretically these can occur 'at' the root, i.e. where f=0: f(x)=0 for an entire range, x in [x_0,x_1]. In this case again, nice but not necessary for the algo to flag this particularity, and to, say, ensure an x from the range [x_0,x_1] is returned.

            ...

            ANSWER

            Answered 2020-Mar-22 at 21:02

            As long as you supply (possibly very small) strictly positive values for xtol and rtol, the function will work with discontinuities:
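A small sketch of scipy.optimize.bisect applied to a weakly monotonous function with a jump (the step function here is an invented example):

```python
from scipy.optimize import bisect

def f(x):
    # weakly monotonous, with a jump at x = 1 where f goes from -0.5 to +0.5;
    # the sign change happens exactly at the discontinuity
    return -0.5 if x < 1.0 else 0.5

# bisect only needs a sign change inside the bracket, so it converges
# to the jump location within the supplied tolerances
root = bisect(f, 0.0, 3.0, xtol=1e-10, rtol=1e-12)
```

Bisection never evaluates a derivative, so a jump is indistinguishable from a steep slope: the bracket simply shrinks around the discontinuity.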

            Source https://stackoverflow.com/questions/60804523

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install py-optim

            You can download it from GitHub.
            You can use py-optim like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/schaul/py-optim.git

          • CLI

            gh repo clone schaul/py-optim

          • sshUrl

            git@github.com:schaul/py-optim.git
