py-optim | Gradient-based optimization algorithms in Python
kandi X-RAY | py-optim Summary
A collection of (stochastic) gradient descent algorithms with a unified interface.
Top functions reviewed by kandi - BETA
- Provide a new sampling
- Return a new Sample
- Provide the estimator
- Get the index of the dataset
- Calculate the current loss function
- Forward-backward pass
- Reset the dataset
- Calculate loss function
- Calculate the current gradient
- Return the gradients
- Provide samples
- Registers oracle
- Calculate the noise level
- Runs a random jump
- Initialize the parameters
- Initialize self acc_grad_var
- Compute the statistics
- Compute statistics
- Return the gradient of the current function
- Calculates the current loss function
py-optim Key Features
py-optim Examples and Code Snippets
Community Discussions
Trending Discussions on py-optim
QUESTION
I'm trying to use scipy curve_fit to capture the value of the a0 parameter. As of now, it is not changing (always comes out as 1):
ANSWER
Answered 2021-Apr-26 at 04:27 (updated)
This should work. Note that the a variable is a vector of length 3 in this example because it is computed by the element-wise multiplication of the first and second elements of X, which is a 2x3 matrix. Therefore a0 can be either a scalar or a vector of length 3, and c can also be a scalar or a vector of length 3.
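The code from the thread is not reproduced above. As an illustration only, a minimal sketch of the setup described, with a hypothetical model in which a is the element-wise product of the two rows of a 2x3 input X, might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: 'a' is the element-wise product of the two rows of X,
# scaled by a0 and shifted by c (both scalars in this sketch).
def model(X, a0, c):
    a = X[0] * X[1]          # element-wise product -> vector of length 3
    return a0 * a + c

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # 2x3 input
y_obs = 2.5 * X[0] * X[1] + 1.0          # synthetic data with a0=2.5, c=1.0

popt, pcov = curve_fit(model, X, y_obs, p0=[1.0, 0.0])
print(popt)  # should recover approximately [2.5, 1.0]
```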
QUESTION
I am dealing with multivariate regression problems. My dataset is something like X = (nsample, nx) and Y = (nsample, ny). nx and ny may vary from one dataset and case study to another, so the code should handle them generically.
I would like to determine the coefficients of the multivariate polynomial regression by minimizing the root mean square error. I thought to split the problem into ny separate regressions, so for each of them my dataset is X = (nsample, nx) and Y = (nsample, 1). Thus, for each dependent variable (Uj) the second-order polynomial has the following form:
I coded the function in Python as:
...ANSWER
Answered 2021-Apr-14 at 22:30
Minimizing error is a huge, complex problem. As such, a lot of very clever people have thought up a lot of cool solutions. Here are a few:
(out of all of them, I think Bayesian optimization with sklearn might be a good choice for your use case, though I've never used it)
Random approaches:
- genetic algorithms: formats your problem like chromosomes in a genome and "breeds" an optimal solution (a personal favorite of mine)
- simulated annealing: formats your problem like hot metal being annealed, which attempts to move to a stable state while losing heat
- random search: better than it sounds; randomly tests a variety of input variables
- grid search: simple to implement, but often less effective than methods that employ true randomness (it duplicates exploration along particular axes of interest, which often wastes computational resources)
A lot of these come up in hyperparameter optimization for ML models.
More prescriptive approaches:
- Gradient descent: uses the gradient calculated in a differentiable function to step toward local minima
- DeepAR: uses Bayesian optimization, combined with random search, to reduce loss in hyperparameter tuning. While I believe this is only available on AWS, it looks like sklearn has an implementation of Bayesian optimization
- scipy.optimize.minimize: I know you're already using this, but there are 15 different algorithms that can be selected by changing the method flag (see the sketch at the end of this answer)
While error minimization is conceptually simple, in practice complex error topologies in high-dimensional spaces can be very difficult to traverse efficiently. This touches on local and global extrema, the explore/exploit problem, and our mathematical understanding of computational complexity itself. Often, good error reduction is accomplished through a combination of a thorough understanding of the problem and experimentation with multiple algorithms and hyperparameters. In ML, this is often referred to as hyperparameter tuning, and is a sort of "meta" error-reduction step, if you will.
note: feel free to recommend more optimization methods, I'll add them to the list.
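As a rough illustration of the scipy.optimize.minimize point above (using a toy quadratic objective, not the OP's regression problem), switching algorithms is just a matter of changing the method flag:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: sum of squared deviations from a target vector.
target = np.array([1.0, -2.0, 0.5])
def loss(x):
    return np.sum((x - target) ** 2)

x0 = np.zeros(3)
# The same problem solved with several of the available algorithms:
for method in ["Nelder-Mead", "BFGS", "L-BFGS-B", "Powell"]:
    res = minimize(loss, x0, method=method)
    print(method, res.x, res.fun)
```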
QUESTION
I'm trying to fit a function using SciPy's optimize.curve_fit to some scattered data, but I need the area under the fitted curve to be the same as that calculated from the scattered data, and I also need the curve to pass through the initial and end points of the data. In order to do that, I am using the area (integral) defined by the scattered data in a penalization formulation as in this answer, while weighting the fit with the parameter sigma as proposed here.
Unfortunately, I can't get my fit to pass through the initial and end points when including the integral constraint. If I disregard the integral constraint, the fit works fine and passes through the points. Is it not possible to satisfy both the integral and the point constraints? I am using Python 3.7.10 on Windows 10.
...ANSWER
Answered 2021-Apr-23 at 07:59
Many thanks to Erwin Kalvelagen for the mind-opening comment on the question. I am posting my solution here:
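The solution code itself is not reproduced here. As an illustration of the penalization-plus-sigma approach described in the question (using a hypothetical quadratic model, since the actual function is not shown), a sketch might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model; the actual function from the thread is not shown.
def f(x, a, b, c):
    return a * x**2 + b * x + c

x = np.linspace(0.0, 1.0, 20)
y = 1.5 * x**2 - 0.3 * x + 0.2 + np.random.normal(0, 0.02, x.size)
area = np.trapz(y, x)  # integral defined by the scattered data

# Penalization: append the integral as an extra "observation" and give it,
# together with the first and last points, a very small sigma (large weight).
def f_pen(x_ext, a, b, c):
    y_fit = f(x_ext[:-1], a, b, c)
    area_fit = np.trapz(y_fit, x_ext[:-1])
    return np.append(y_fit, area_fit)

x_ext = np.append(x, 0.0)          # dummy abscissa for the integral "point"
y_ext = np.append(y, area)
sigma = np.ones(y_ext.size)
sigma[[0, -2, -1]] = 1e-4          # pin the endpoints and the integral

popt, _ = curve_fit(f_pen, x_ext, y_ext, sigma=sigma)
```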
QUESTION
I am learning how to use GEKKO for kinetic parameter estimation based on laboratory batch reactor data, which essentially consists of the concentration profiles of three species A, C, and P. For the purposes of my question, I am using a model that I previously featured in a question related to parameter estimation from a single data set.
My ultimate goal is to be able to use multiple experimental runs for parameter estimation, leveraging data that may be collected at different temperatures, species concentrations, etc. Due to the independent nature of individual batch reactor experiments, each data set features samples collected at different time points. These different time points (and, in the future, different temperatures, for instance) are difficult for me to implement in a GEKKO model, as I previously used the experimental data collection time points as the m.time parameter for the GEKKO model. (See end of post for code.) I have solved problems like this in the past with gPROMS and Athena Visual Studio.
To illustrate my problem, I generated an artificial set of 'experimental' data from my original model by introducing noise to the species concentration profiles and shifting the experimental time points slightly. I then combined all data sets of the same experimental species into new arrays featuring multiple columns. My thought process here was that GEKKO would carry out the parameter estimation by using the experimental data of each corresponding column of the arrays, so that times_comb[:,0] would be related to A_comb[:,0], while times_comb[:,1] would be related to A_comb[:,1].
When I attempt to run the GEKKO model, the system does obtain a solution for the parameter estimation, but it is unclear to me whether the solution is reasonable, as I notice that the GEKKO Variables A, B, C, and P are 34-element vectors, double the number of elements in each of the experimental data sets. I presume GEKKO is somehow combining both columns of the time and Parameter vectors during model setup, which leads to those 34-element variables? I am also concerned that during this combination of the columns of each input parameter, the relationship between a given time point and the collected species information is lost.
How could I improve the use of multiple data sets that GEKKO can simultaneously use for parameter estimation, considering that the time points of each data set may differ? I looked through the GEKKO documentation examples as well as the APMonitor website, but I could not find examples featuring multiple data sets that I could use for guidance, as I am fairly new to the GEKKO package.
Thank you for your time reading my question and for any help/ideas you may have.
Code below:
...ANSWER
Answered 2021-Jan-15 at 03:57
To have multiple data sets with different times and data points, you can join the data sets as a pandas dataframe. Here is a simple example:
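The example itself is not reproduced here. A minimal sketch of just the pandas join step (with hypothetical experiment data, and without the full GEKKO model setup) might look like:

```python
import pandas as pd

# Two hypothetical experiments sampled at different time points.
exp1 = pd.DataFrame({"time": [0.0, 1.0, 2.5], "A": [1.00, 0.62, 0.35]})
exp2 = pd.DataFrame({"time": [0.0, 0.8, 2.0, 3.0], "A": [0.95, 0.70, 0.41, 0.28]})

# Join on the union of time points; missing measurements become NaN,
# which the estimation can then be set up to ignore (e.g. by zeroing
# the corresponding objective weights).
data = pd.concat([exp1.set_index("time").add_suffix("_1"),
                  exp2.set_index("time").add_suffix("_2")],
                 axis=1).sort_index()
print(data)
```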
QUESTION
I am trying to fit a 2D Gaussian with an offset to a 2D array. The code is based on this thread here (which was written for Python 2, while I am using Python 3; some changes were necessary to make it run, more or less):
...ANSWER
Answered 2021-Jan-06 at 00:41
data_array is (2, 2400, 2400) float64 (from an added print).
testmap is (2400, 2400) float64 (again a diagnostic print).
The curve_fit docs talk about M-length or (k, M) arrays, but you are providing (2, N, N) and (N, N) shaped arrays.
Let's try flattening the N, N dimensions:
In the objective function:
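The answer's snippets are not reproduced here. A minimal sketch of the flattening idea (with a hypothetical 2D Gaussian model and a small stand-in array instead of 2400x2400) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 2D Gaussian with offset; xdata carries flattened coordinates.
def gauss2d(xy, amp, x0, y0, sx, sy, off):
    x, y = xy
    return off + amp * np.exp(-((x - x0)**2 / (2*sx**2) + (y - y0)**2 / (2*sy**2)))

ny, nx = 64, 64  # small stand-in for the 2400x2400 map
yy, xx = np.mgrid[0:ny, 0:nx]
z = gauss2d((xx, yy), 3.0, 30, 35, 5.0, 7.0, 1.0) + np.random.normal(0, 0.05, (ny, nx))

# curve_fit expects 1-D ydata: ravel both the coordinates and the map.
xy_flat = np.vstack((xx.ravel(), yy.ravel()))  # shape (2, M)
popt, _ = curve_fit(gauss2d, xy_flat, z.ravel(),
                    p0=[2.0, 32, 32, 4.0, 4.0, 0.0])
print(popt)
```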
QUESTION
So I have a problem and I'm a little bit lost at this point, so any input would be greatly appreciated, as I'm really struggling right now ^^!
I have a model I want to check / optimise using some experimental data I got.
Generally speaking, my model takes two inputs (let's say: time and temperature) and has 8 variables (x0-x7). The model generates two outputs (out1 and out2).
Each set of my experimental data gives me 4 sets of information I can use for my optimisation: 2 inputs (time and temperature) and 2 experimental results (result1 and result2).
Ultimately I want to minimize the difference between result1 & out1 and between result2 & out2. So, basically, I am minimizing two residuals over several sets of data, all of which are affected by the same 8 parameters (x0-x7).
I have some bounds for the parameters x0-x7 which can help, but besides that no real constraints.
So far I have tried using scipy.minimize with an iteration through my experimental result datasets like so (very schematic):
...ANSWER
Answered 2020-Oct-29 at 08:07
The basic idea of a shared objective function is fine. I won't go into the details of the OP's attempts, as this might be misleading. The approach is to define a proper residual function that can be used in a least-squares fit. There are several possibilities in Python to do that; I'll show scipy.optimize.leastsq and the closely related scipy.optimize.least_squares.
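The answer's code is not reproduced here. A minimal sketch of the shared-residual idea with scipy.optimize.least_squares, using a hypothetical linear stand-in for the OP's model and synthetic data, might look like:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for the OP's model: maps (time, temperature) and
# parameters x0..x7 to the two outputs.
def model(t, T, p):
    out1 = p[0] + p[1]*t + p[2]*T + p[3]*t*T
    out2 = p[4] + p[5]*t + p[6]*T + p[7]*t*T
    return out1, out2

def residuals(p, datasets):
    res = []
    for t, T, r1, r2 in datasets:       # one entry per experimental set
        o1, o2 = model(t, T, p)
        res.extend([o1 - r1, o2 - r2])  # stack both residuals
    return np.concatenate(res)

# datasets: list of (time_array, temp_array, result1_array, result2_array)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25); T = np.full_like(t, 300.0)
p_true = np.arange(1.0, 9.0)
r1, r2 = model(t, T, p_true)
datasets = [(t, T, r1 + rng.normal(0, 0.1, t.size), r2 + rng.normal(0, 0.1, t.size))]

fit = least_squares(residuals, x0=np.ones(8), args=(datasets,), bounds=(-10, 10))
print(fit.x)
```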
QUESTION
I asked similar questions in January and April that @Miłosz Wieczór and @Joe were kind enough to show interest in. Now, I am facing a similar but different problem because I need to do a joint fit with multiple equations and inputs in order to get the universal solutions for two parameters fc and alpha. My code (which is based on the answers from the previous questions) is as follows:
ANSWER
Answered 2020-Aug-25 at 14:05
It might be easier to use least_squares directly. It takes a vector of residuals, where you simply specify the l.h.s. of your two equations.
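A minimal sketch of that idea, with two hypothetical equations sharing fc and alpha (the OP's actual equations are not shown here):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data for the two equations; both share fc and alpha.
x1 = np.linspace(1, 10, 30); y1 = 2.0 / (1 + (x1 / 3.0)**1.5)
x2 = np.linspace(1, 10, 30); y2 = (x2 / 3.0)**1.5

def residuals(p):
    fc, alpha = p
    r1 = y1 - 2.0 / (1 + (x1 / fc)**alpha)   # residual of equation 1
    r2 = y2 - (x2 / fc)**alpha               # residual of equation 2
    return np.concatenate([r1, r2])          # joint residual vector

sol = least_squares(residuals, x0=[1.0, 1.0], bounds=(1e-6, np.inf))
print(sol.x)   # should recover approximately fc=3.0, alpha=1.5
```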
QUESTION
When using SciPy's fmin function, I keep encountering the error message ValueError: setting an array element with a sequence. I have seen that this question has been asked several times already, and I have read interesting posts such as:
- ValueError: setting an array element with a sequence
- Scipy optimize fmin ValueError: setting an array element with a sequence
- Scipy minimize fmin - problems with syntax
...and I have tried implementing the suggested solutions, such as adding *args to the cost function, appending the variables in the cost function to a list, and vectorizing the variables. But nothing has worked for me so far.
I am quite new to programming in Python, so it is possible that I have read the solution and not known how to apply it.
A simplified version of the code, which I used to try to find the problem, is as follows:
...ANSWER
Answered 2020-Jun-12 at 11:03
As noted in the comments, your function must return a single value. Assuming that you want to perform a classic least-squares fit, you could modify func to return just that:
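The modified code is not reproduced here. A minimal sketch of the idea, with a hypothetical linear model and synthetic data, might be:

```python
import numpy as np
from scipy.optimize import fmin

x_data = np.linspace(0, 5, 50)
y_data = 3.0 * x_data + 1.0 + np.random.normal(0, 0.2, x_data.size)

# fmin minimizes a SCALAR, so return the sum of squared residuals
# rather than the residual array itself.
def func(params):
    a, b = params
    residuals = y_data - (a * x_data + b)
    return np.sum(residuals**2)     # single value, as fmin requires

best = fmin(func, x0=[1.0, 0.0])
print(best)   # approximately [3.0, 1.0]
```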
QUESTION
I am interested in the phase shift between two sine-type waves. To that end, I am trying to fit each wave with scipy's curve_fit. I have been following this post. However, I obtain negative amplitudes, and the phase shift sometimes appears shifted forward by pi radians.
The code that I am using is that one below:
...ANSWER
Answered 2020-Apr-15 at 08:50
There are a few issues here that I do not understand:
- There is no need to define the fit function inside the "fit function".
- There is no need to define it twice if the only difference is the naming of the output dictionary. (I also do not understand why it has to be named differently in the first place.)
- One could directly fit the frequency instead of omega.
- When pre-calculating the fitted values, directly use the given fit function.
Overall, I don't see why the second fit should fail, and using some generic data here, it doesn't. Considering that in physics an amplitude can be complex, I don't have a problem with a negative result. Nevertheless, I understand the point in the OP: a fit algorithm does not know about physics, and mathematically there is no problem with the amplitude being negative; it just introduces an additional phase shift of pi. Hence, one can easily force positive amplitudes by taking care of the required phase shift, which I introduce here as a possible keyword argument. Moreover, I reduced this to one fit function, with optional "renaming" of the output dictionary keys via a keyword argument.
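The answer's code is not reproduced here. A minimal sketch of that design (one fit function, plus a positive_amp keyword that absorbs a negative amplitude into a pi phase shift; all names are hypothetical) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase, off):
    return amp * np.sin(2 * np.pi * freq * t + phase) + off

def fit_sine(t, y, positive_amp=True, p0=None):
    """Hypothetical wrapper: one fit function, optionally forcing amp >= 0."""
    popt, pcov = curve_fit(sine, t, y, p0=p0)
    amp, freq, phase, off = popt
    if positive_amp and amp < 0:
        amp, phase = -amp, phase + np.pi   # absorb the sign into the phase
    phase = (phase + np.pi) % (2 * np.pi) - np.pi  # wrap into [-pi, pi)
    return {"amp": amp, "freq": freq, "phase": phase, "offset": off}

t = np.linspace(0, 2, 200)
y = 1.3 * np.sin(2 * np.pi * 2.0 * t + 0.4) + 0.1
print(fit_sine(t, y, p0=[1, 2, 0, 0]))
```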
QUESTION
I'm looking for a Python algorithm to find the root of a function f(x) using bisection, equivalent to scipy.optimize.bisect but allowing for discontinuities (jumps) in f. The function f is weakly monotonic.
It would be nice but not necessary for the algorithm to flag when the crossing (root) is directly 'at' a jump, and in this case to return the exact value x at which the relevant jump occurs (i.e. the x for which sign(f(x-e)) != sign(f(x+e)) and abs(f(x-e)-f(x+e)) > a for infinitesimal e>0 and non-infinitesimal a>0). It is also okay if the algorithm instead simply returns an x within a certain tolerance in this case.
As the function is only weakly monotonic, it can have flat areas, and theoretically these can occur 'at' the root, i.e. where f=0: f(x)=0 for an entire range x in [x_0, x_1]. In this case again, it would be nice but not necessary for the algorithm to flag this particularity and, say, ensure that an x from the range [x_0, x_1] is returned.
ANSWER
Answered 2020-Mar-22 at 21:02
As long as you supply (possibly very small) strictly positive values for xtol and rtol, the function will work with discontinuities:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install py-optim
You can use py-optim like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.