svgp | WIP - SVG Parser, DOM API provider and Serializer | Animation library
kandi X-RAY | svgp Summary
SVG Parser, DOM API provider and Serializer.
Community Discussions
Trending Discussions on svgp
QUESTION
I am trying to follow the multiclass classification example in GPflow (using v2.1.3) as described here:
https://gpflow.readthedocs.io/en/master/notebooks/advanced/multiclass_classification.html
The difference from the example is that my X vector is 10-dimensional and the number of classes to predict is 5. There seems to be a dimensionality error when using the inducing variables. I changed the kernel and used dummy data for reproducibility; I am just looking to get this code to run. I have included the dimensions of the variables in case that is the issue. Any calculation of the loss causes an error like:
...ANSWER
Answered 2021-Oct-05 at 09:09
When running your example I get a slightly different bug, but the issue is in how you define lengthscales and variances. You write:
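The snippet the answer refers to is not reproduced above. As a hedged sketch (not the original code): for a 10-dimensional input, per-dimension (ARD) lengthscales need one positive entry per input dimension, while the variance stays a scalar, along these lines:

import numpy as np
import gpflow

input_dim = 10
# One lengthscale per input dimension (ARD) and a scalar signal variance,
# so the kernel shapes are consistent with 10-dimensional inducing inputs.
kernel = gpflow.kernels.SquaredExponential(
    lengthscales=np.ones(input_dim),
    variance=1.0,
)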
QUESTION
I wanted to use priors on hyperparameters, as in https://gpflow.readthedocs.io/en/develop/notebooks/advanced/mcmc.html, but with an SVGP model.
Following the steps of example 1, I got an error when I ran run_chain_fn:
TypeError: maximum_log_likelihood_objective() missing 1 required positional argument: 'data'
Contrary to GPR or SGPMC, the data are not an attribute of the model; they are passed as an external parameter.
To avoid that problem, I slightly modified the SVGP class to include the data as an attribute (I don't care about mini-batching for now).
...ANSWER
Answered 2021-Aug-30 at 18:18
SVGP is a GPflow model for a variational approximation. Using MCMC on the q(u) distribution parameterised by q_mu and q_sqrt doesn't make sense (if you want to do MCMC on q(u) in a sparse approximation, use SGPMC).
You can still put (hyper)priors on the hyperparameters in the SVGP model; gradient-based optimisation will then lead to the maximum a-posteriori (MAP) point estimate (as opposed to pure maximum likelihood).
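For instance, a prior is attached by setting the prior attribute of the corresponding Parameter. A minimal sketch (the Gamma priors here are illustrative choices, not taken from the answer):

import tensorflow_probability as tfp
import gpflow
from gpflow.utilities import to_default_float

kernel = gpflow.kernels.SquaredExponential()
# Illustrative Gamma priors on the kernel hyperparameters (float64 to match GPflow's default dtype).
kernel.lengthscales.prior = tfp.distributions.Gamma(to_default_float(1.0), to_default_float(1.0))
kernel.variance.prior = tfp.distributions.Gamma(to_default_float(1.0), to_default_float(1.0))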
QUESTION
Suppose I have a trained model
...ANSWER
Answered 2021-Apr-20 at 15:51
Yes, the SVGP (as well as VGP) model predictions crucially depend on the q(u) distribution parametrised by model.q_mu and model.q_sqrt. You can transfer all parameters (including those two) using
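The answer's code is not reproduced above; a sketch of the usual way to copy all parameters from one GPflow model into another of the same structure (trained_model and new_model are placeholder names):

from gpflow.utilities import parameter_dict, multiple_assign

params = parameter_dict(trained_model)   # dict of all Parameter values, keyed by path
multiple_assign(new_model, params)       # assigns them (incl. q_mu and q_sqrt) to the new model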
QUESTION
I am supplying different minibatches to optimize a GPflow model (SVGP). If I decorate the optimization_step with tf.function, I get the following error:
NotImplementedError: Cannot convert a symbolic Tensor (concat:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
In order for the optimizer to run I had to remove the tf.function decorator, losing the speed-up advantages. What do I need to change so that I can keep using the tf.function decorator?
The xAndY input shapes and types are all numpy arrays.
ANSWER
Answered 2021-Feb-14 at 18:01
GPflow's gpflow.optimizers.Scipy() is a wrapper around Scipy's minimize(), and as it calls into non-TensorFlow operations, you cannot wrap it in tf.function. Moreover, the optimizers implemented in Scipy's minimize are second-order methods that assume that your gradients are not stochastic, and aren't compatible with minibatching.
If you want to do full-batch optimization with Scipy: the minimize() method of gpflow.optimizers.Scipy(), by default, does wrap the objective and gradient computation inside tf.function (see its compile argument with default True). It also does the full optimization, so you only have to call the minimize() method once (by default it runs until convergence or failure to continue optimization; you can supply a maximum number of iterations using the options=dict(maxiter=1000) argument).
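As a minimal sketch of that full-batch route (model and data = (X, Y) are placeholders for an already-built SVGP model and the full training set; SVGP takes the data as an external argument, hence the closure):

import gpflow

opt = gpflow.optimizers.Scipy()
opt.minimize(
    model.training_loss_closure(data),   # full-batch objective; SVGP needs the data passed in
    model.trainable_variables,
    options=dict(maxiter=1000),          # optional cap on the number of iterations
)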
If you want to use mini-batching: simply use one of the TensorFlow optimizers, such as tf.optimizers.Adam(), and then your code should run fine, including the @tf.function decorator on your optimization_step() function (and in that case you do need to call it in a loop as in your example).
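A self-contained sketch of that mini-batching route (the data, kernel, and batch size are dummies for illustration, since the question's xAndY arrays are not shown):

import numpy as np
import tensorflow as tf
import gpflow

X = np.random.rand(500, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(500, 1)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=np.linspace(0, 1, 20)[:, None],
)

dataset = tf.data.Dataset.from_tensor_slices((X, Y)).repeat().shuffle(500)
train_iter = iter(dataset.batch(100))
optimizer = tf.optimizers.Adam(learning_rate=0.01)

@tf.function
def optimization_step(batch):
    # One gradient step on the minibatch; SVGP's training_loss takes the data as an argument.
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for _ in range(1000):
    optimization_step(next(train_iter))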
QUESTION
I need to position markers (circles) in the center of rooms on an SVG floor plan. Each room could be either a single shape - like a rectangle - or a group of shapes. The circle should cover about 1/4 of the room: radius = (width + height) / 8.
The SVG needs to resize to its div container.
After a lot of trial and error, I have come up with code that seems to work, but it also looks overly complicated, as I use getBBox(), getBoundingClientRect(), and matrixTransform():
- getBBox gives me the correct width and height, but x = y = 0 for group elements
- getBoundingClientRect + matrixTransform gives me the correct cx and cy coordinates, but not the correct room width and height.
Is there a simpler way?
...ANSWER
Answered 2020-Dec-15 at 08:11
Here's some code that's a bit simpler.
QUESTION
In GPflow I have multiple time series; the sampling times are not aligned across the time series, and the time series may have different lengths (longitudinal data). I assume that they are independent realizations from the same GP. What is the right way to handle this with SVGP, and more generally with GPflow? Do I need to use coregionalization? The coregionalization notebook assumes correlated trajectories, while I want a shared mean/kernel but independent realizations.
...ANSWER
Answered 2020-Aug-11 at 17:57
Yes, the Coregion kernel implemented in GPflow is what you can use for your problem.
Let's set up some data from the generative model you describe, with different lengths for the time series:
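The answer's data-generation code is not reproduced above. A hedged sketch of such a setup, stacking the series in the augmented-input format (last column = series index) that GPflow's coregionalization examples use; the lengths and the sine-shaped latent function are dummies:

import numpy as np

rng = np.random.default_rng(0)
lengths = [30, 45, 25]                                        # a different number of samples per series
X_list, Y_list = [], []
for i, n in enumerate(lengths):
    t = np.sort(rng.uniform(0.0, 10.0, n))[:, None]           # unaligned sampling times
    y = np.sin(t) + 0.1 * rng.standard_normal((n, 1))         # dummy draw from a shared function + noise
    X_list.append(np.hstack([t, np.full((n, 1), float(i))]))  # append the series index as a column
    Y_list.append(y)

X = np.vstack(X_list)   # shape (sum(lengths), 2): [time, series_index]
Y = np.vstack(Y_list)   # shape (sum(lengths), 1)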
QUESTION
I have built some Gaussian process models in GPflow and trained them successfully, but I cannot find APIs that help me make inferences straightforwardly in GPflow, such as separating the contributions of different kernels in a GPR model.
I know that I can do it manually, like calculating the covariance matrices, inverting and multiplying, but such work can become quite annoying as the model gets more complex, like a multi-output SVGP model. Any suggestions?
Thanks in advance!
...ANSWER
Answered 2020-May-27 at 15:17
If you want to e.g. decompose an additive kernel, I think the easiest way for vanilla GPR would be to just switch out the kernel for the part you're interested in, while still keeping the learned hyperparameters.
I'm not totally sure about it, but I think it could also work for SVGP, since the approximation itself is just a standard GP using the same kernel but conditioned on the inducing points. However, I'm not sure whether the decomposition of the variational approximation can be assumed to be close to the decomposition of the true posterior.
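A hedged sketch of the kernel-swap idea for vanilla GPR (the data and kernels are dummies; swapping the model's kernel attribute back and forth is one plausible way to do it, not code from the answer):

import numpy as np
import gpflow

X = np.random.rand(50, 1)
Y = np.sin(6 * X) + 0.3 * X + 0.05 * np.random.randn(50, 1)
Xnew = np.linspace(0, 1, 100)[:, None]

k_rbf = gpflow.kernels.SquaredExponential()
k_lin = gpflow.kernels.Linear()
model = gpflow.models.GPR(data=(X, Y), kernel=k_rbf + k_lin)
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

full_kernel = model.kernel      # keep the trained sum kernel
model.kernel = k_rbf            # predict using only the (already trained) RBF component
mean_rbf, var_rbf = model.predict_f(Xnew)
model.kernel = full_kernel      # restore the full additive kernel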
QUESTION
In the GPflow documentation, such as the SVGP and natural gradients notebooks, TensorFlow's Adam optimizer is used to train the model parameters (lengthscales, variance, inducing inputs, etc.) of the GP model with the stochastic variational inference technique, while the natural gradient optimizer is used for the variational parameters. A snippet looks as follows:
...ANSWER
Answered 2020-May-26 at 00:05
You're right, and that's by design. A constrained variable in GPflow is represented by a Parameter. The Parameter wraps the unconstrained_variable. When you access .trainable_variables on your model, this will include the unconstrained_variable of the Parameter, and so when you pass these variables to the optimizer, the optimizer will train those rather than the Parameter itself.
But your model doesn't see the unconstrained value; it sees the Parameter interface, which is a tf.Tensor-like interface related to the wrapped unconstrained_variable via an optional transformation. This transformation maps the unconstrained value to a constrained value. As such, your model will only see the constrained value. It's not a problem that your constrained value must be positive; the transform will map negative values of the unconstrained value to positive values for the constrained value.
You can see the unconstrained and constrained values of the first Parameter for your kernel, as well as the transform that relates them, with
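The inspection code itself is not reproduced above; a sketch of what it might look like (using a SquaredExponential kernel's lengthscales as the example Parameter):

import gpflow

kernel = gpflow.kernels.SquaredExponential(lengthscales=1.0)
p = kernel.lengthscales            # a gpflow.Parameter
print(p.unconstrained_variable)    # the underlying tf.Variable the optimizer actually trains
print(p.numpy())                   # the constrained value the model sees
print(p.transform)                 # the bijector (Softplus by default) relating the two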
QUESTION
As far as the SGPMC paper [1] goes, the pretraining should be pretty much identical to SVGP. However, the implementations (current dev version) differ a bit, and I'm having some problems understanding everything (especially what happens in the conditionals with q_sqrt=None) due to the dispatch programming style.
Do I see it correctly that the difference is that q_mu/q_var are now represented by that self.V normal distribution? And the only other change would be that whitening is on by default because it's required for the sampling?
The odd thing is that stochastic optimization (without any sampling yet) of SGPMC seems to work quite a bit better on my specific data than with the SVGP class, which got me a bit confused, since it should basically be the same.
[1] Hensman, James, et al. "MCMC for variationally sparse Gaussian processes." Advances in Neural Information Processing Systems. 2015.
Edit 2:
In the current dev branch I see that the (negative) training objective consists basically of VariationalExp + self.log_prior_density(), whereas the SVGP ELBO would be VariationalExp - KL(q(u)|p(u)). self.log_prior_density() apparently adds all the prior densities, so the training objective looks like equation (7) of the SGPMC paper (the whitened optimal variational distribution).
So by optimizing this optimal variational approximation to the posterior p(f*, f, u, θ | y), would we be getting the MAP estimate of the inducing points?
ANSWER
Answered 2020-Apr-29 at 16:24
There are several elements to your question; I'll try and address them separately.
SVGP vs SGPMC objective: In SVGP, we parametrize a closed-form posterior distribution q(u) by defining it as a normal (Gaussian) distribution with mean q_mu and covariance q_sqrt @ q_sqrt.T. In SGPMC, the distribution q(u) is implicitly represented by samples - V holds a single sample at a time. In SVGP, the ELBO has a KL term that pulls q_mu and q_sqrt towards q(u) = p(u) = N(0, Kuu) (with whitening, q_mu and q_sqrt parametrize q(v), the KL term drives them towards q(v) = p(v) = N(0, I), and u = chol(Kuu) v). In SGPMC, the same effect comes from the prior on V in the MCMC sampling. This is still reflected when doing MAP optimisation with a stochastic optimizer, but it is different from the KL term. You can set q_sqrt to zero and non-trainable for the SVGP model, but the two models still have slightly different objectives. Stochastic optimization in the SGPMC model might give you a better data fit, but this is not a variational optimization, so you might be overfitting to your training data.
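In symbols, the comparison above amounts to roughly the following (a LaTeX paraphrase, not formulas quoted from the answer; both objectives share the variational-expectation term, and θ denotes the hyperparameters):

\mathcal{L}_{\mathrm{SVGP}}  = \sum_n \mathbb{E}_{q(f_n)}\left[\log p(y_n \mid f_n)\right] - \mathrm{KL}\left[q(u) \,\|\, p(u)\right]
\mathcal{L}_{\mathrm{SGPMC}} = \sum_n \mathbb{E}_{q(f_n)}\left[\log p(y_n \mid f_n)\right] + \log p(v) + \log p(\theta)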
training_loss: For all GPflow models, model.training_loss includes the log_prior_density. (It's just that by default the SVGP model parameters do not have any priors set.) The SGPMC training_loss() corresponds to the negative of eq. (7) in the SGPMC paper [1].
Inducing points: By default the inducing points Z do not have a prior, so it would just be maximum likelihood. Note that [1] suggests keeping Z fixed in the SGPMC model (and basing it on the optimised locations from a previously fitted SVGP model).
What happens in conditional() with q_sqrt=None: conditional() computes the posterior distribution of f(Xnew) given the distribution of u; this handles both the case used in (S)VGP, where we have a variational distribution q(u) = N(q_mu, q_sqrt q_sqrt^T), and the noise-free case where "u is known", which is used in (S)GPMC. q_sqrt=None is equivalent to saying "the variance is zero", like a delta spike on the mean, but saving computation.
QUESTION
In the current notebook tutorials (GPflow 2.0), all @tf.function tags include the option autograph=False, e.g. (https://gpflow.readthedocs.io/en/2.0.0-rc1/notebooks/advanced/gps_for_big_data.html):
...ANSWER
Answered 2020-Feb-18 at 17:37
The reason we set autograph to False in most of the tf.function-wrapped objectives is that GPflow makes use of a multiple-dispatch Dispatcher which internally uses generators. TensorFlow, however, cannot deal with generator objects in autograph mode (see Capabilities and Limitations of AutoGraph), which leads to these warnings:
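(The warning output itself is not reproduced above.) As a minimal illustration of the pattern, not the notebook's exact code, the fix is simply to pass autograph=False when wrapping a GPflow objective:

import tensorflow as tf

@tf.function(autograph=False)
def optimization_step(model, batch, optimizer):
    # Plain graph tracing, without AutoGraph trying to convert GPflow's
    # generator-based dispatch code.
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss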
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported