GPflow | Gaussian processes in TensorFlow | Analytics library
kandi X-RAY | GPflow Summary
GPflow is a package for building Gaussian process models in Python. It implements modern Gaussian process inference for composable kernels and likelihoods. GPflow 2.1 builds on TensorFlow 2.2+ and TensorFlow Probability for running computations, which allows fast execution on GPUs. The online documentation (develop)/(master) contains more details.
GPflow Key Features
GPflow Examples and Code Snippets
.
├── BOCPDMPP_Slides.pdf # Summary of project in slides
├── BOCPDMPP_V2_0.pdf # Full thesis on BOCPDMPP
├── BVAR_NIG.py
├── BVAR_NIG_DPD.py
├── Evaluation_tool.py
├── LGCP_test.py
├── cp_probability_model.py
├── detector.
pip install tensorflow
git clone https://github.com/GPflow/GPflow.git
cd GPflow
pip install .
cd
git clone https://github.com/ManchesterBioinference/GrandPrix
cd GrandPrix
python setup.py install
cd
python 3.x (3.7 tested)
conda install tensorflow-gpu==1.15
pip install keras==2.3.1
pip install gpflow==1.5
pip install gpuinfo
lengthscales = [0.1]*num_classes
variances = [1.0]*num_classes
kernel = gpflow.kernels.Matern32(variance=variances, lengthscales=lengthscales)
lengthscales = [0.1]*num_of_independent_vars
kernel = gpflow.kernels.Ma
k = gpflow.kernels.ChangePoints([k_1, k_2], locations=[0.5], steepness=50.0,
switch_dim=1) # <-- This one!
pip install git+https://github.com/GPflow/GPflow.git@refs/pull/1671/head
params = gpflow.utilities.parameter_dict(model)
gpflow.utilities.multiple_assign(model, params)
import gpflow
import tensorflow as tf
class MyKernel(gpflow.kernels.SquaredExponential): # or whichever kernel you want
@property
def lengthscales(self) -> tf.Tensor:
return tf.stack([self.lengthscale_0, self.lengthsca
for _ in range(iter):
    opt.minimize(model.training_loss, model.trainable_variables)

@tf.function
def optimization_step():
    opt.minimize(model.training_loss, model.trainable_variables)

for _ in range(iter):
    optimization_step()
list_1 = [....]
list_2 = [....]
...
k = [-28, 32, ...]
X1 = np.arange(0, len(list_1))
Y1 = list_1
X2 = np.arange(0, len(list_2))
Y2 = list_2
...
k1 = gpflow.kernels.SquaredExponential(lengthscales=[0.1], active_dims=[0])
k2 = gpflow.kernels.SquaredExponential(lengthscales=[0.1], active_dims=[1])
m = gpflow.models.GPR(data=(X, Y), kernel=k1+k2)
m.likelihood.variance.assign(0
Community Discussions
Trending Discussions on GPflow
QUESTION
I believe there is an error in the next-to-last equation in this GPflow documentation page. I provide details here. Can this be right?
...ANSWER
Answered 2022-Feb-22 at 22:16There is a typo in the third-to-last equation on this GPflow documentation page, as shown in this image, and further explained here. Using this corrected equation, my previous proof of the last equation on this GPflow documentation page simplifies greatly, as shown in this image, and further explained here.
QUESTION
I need to train a GPR model in multiple batches per epoch using a custom loss function. I would like to do this using GPflow, and I would like to compile my training using tf.function to increase efficiency. However, gpflow.GPR must be re-instantiated each time you supply new data, so tf.function will have to re-trace each time. This makes the code slower rather than faster.
This is the initial setup:
...ANSWER
Answered 2021-Oct-27 at 15:11You don't have to re-instantiate GPR each time. You can construct tf.Variable holders with unconstrained shape and then .assign new data to them.
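A minimal sketch of such holders, in pure TensorFlow (per the answer, a gpflow.models.GPR built on these variables then sees newly assigned data without re-instantiation; the specific shapes and values here are illustrative):

```python
import tensorflow as tf

# Holders whose leading dimension is unconstrained (None), so batches of any
# size can be assigned in place without rebuilding the model or retracing
X_var = tf.Variable([[0.0]], shape=tf.TensorShape([None, 1]), trainable=False)
Y_var = tf.Variable([[0.0]], shape=tf.TensorShape([None, 1]), trainable=False)

# Later: swap in a new, larger batch in place
X_var.assign([[1.0], [2.0], [3.0]])
Y_var.assign([[1.0], [4.0], [9.0]])

# e.g. model = gpflow.models.GPR((X_var, Y_var), kernel=...)
```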
QUESTION
I am trying to follow the multiclass classification example in GPflow (using v2.1.3) as described here:
https://gpflow.readthedocs.io/en/master/notebooks/advanced/multiclass_classification.html
The difference with the example is that the X vector is 10-dimensional and the number of classes to predict is 5. But there seems to be an error in dimensionality when using the inducing variables. I changed the kernel and used dummy data for reproducibility, just looking to get this code to run. I included the dimensions of the variables in case that is the issue. Any calculation of the loss raises a dimensionality error.
...ANSWER
Answered 2021-Oct-05 at 09:09When running your example I get a slightly different bug, but the issue is in how you define the lengthscales and variances: you give them one entry per class, whereas the kernel expects one lengthscale per input dimension (and a scalar variance).
QUESTION
I wanted to use priors on hyper-parameters as in (https://gpflow.readthedocs.io/en/develop/notebooks/advanced/mcmc.html) but with an SVGP model.
Following the steps of example 1, I got an error when I run the run_chain_fn:
TypeError: maximum_log_likelihood_objective() missing 1 required positional argument: 'data'
Contrary to GPR or SGPMC, the data are not an attribute of the model; they are passed in as an external parameter. To avoid that problem I slightly modified the SVGP class to include the data as a parameter (I don't care about mini-batching for now).
...ANSWER
Answered 2021-Aug-30 at 18:18SVGP is a GPflow model for a variational approximation. Using MCMC on the q(u) distribution parameterised by q_mu and q_sqrt doesn't make sense (if you want to do MCMC on q(u) in a sparse approximation, use SGPMC).
You can still put (hyper)priors on the hyperparameters in the SVGP model; gradient-based optimisation will then lead to the maximum a-posteriori (MAP) point estimate (as opposed to pure maximum likelihood).
QUESTION
I want to implement a binary classification model using a Gaussian process. Following the official documentation, I wrote the code below.
The X has 2048 features and Y is either 0 or 1. After optimizing the model, I was trying to evaluate the performance.
However, the predict_y method yields a weird result; the expected pred should have a shape like (n_test_samples, 2), which represents the probability of being class 0 and 1. But the result I got instead is (n_test_samples, n_training_samples).
What is going wrong?
...ANSWER
Answered 2021-Jul-20 at 10:02I finally figured it out. The reason is that the Y for the VGP model should have a shape like (n_training_samples, 1) instead of (n_training_samples,).
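The fix is a reshape before constructing the model; a minimal sketch (the labels here are dummy values):

```python
import numpy as np

y = np.array([0, 1, 1, 0])               # labels with shape (n_training_samples,)
Y = y.reshape(-1, 1).astype(np.float64)  # (n_training_samples, 1), as VGP expects

# Then build the model with the column vector, e.g.
# gpflow.models.VGP((X, Y), kernel=..., likelihood=gpflow.likelihoods.Bernoulli())
```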
QUESTION
I would like to construct a multi-output GP, whereby the correlation structure between outputs contains a changepoint. The change should only occur in the correlation structure of the Coregion kernel, whereas the kernels themselves (i.e., lengthscale and family of kernel) should remain the same before and after the change.
Below, I include examples (from the GPflow documentation [1., 2.], and my own [3.]) which:
- have correlation structure between outputs, but no changepoints,
- demonstrate how to construct changepoints in GPflow,
- attempt a correlation structure between outputs which contains a change point.
ANSWER
Answered 2021-Jul-01 at 14:12Unfortunately, there is currently no MultiOutput support for ChangePoint kernels in GPflow. In your case, this essentially means that the ChangePoint kernel has no idea on which dimension of your outputs to act, even though the kernels that constitute it have their active_dims parameters set.
I have a Pull Request in the works to implement this functionality, which you can find here: https://github.com/GPflow/GPflow/pull/1671
The change proposed in that pull request would simply require you to add a switch_dim flag in your call to the ChangePoint kernel.
QUESTION
Suppose I have a trained model
...ANSWER
Answered 2021-Apr-20 at 15:51Yes, the SVGP (as well as VGP) model predictions crucially depend on the q(u) distribution parametrised by model.q_mu and model.q_sqrt. You can transfer all parameters (including those two) using gpflow.utilities.parameter_dict and gpflow.utilities.multiple_assign.
QUESTION
I'm creating some GPflow models in which I need the observations pre and post a threshold x0 to be independent a priori. I could achieve this with just GP models, or with a ChangePoints kernel with infinite steepness, but neither solution works well with my future extensions in mind (MOGP in particular). I figured I could easily construct what I want from scratch, so I made a new Combination kernel object, which uses the appropriate child kernel pre- or post-x0. This works as intended when I evaluate the kernel on a set of input points; the expected correlations between points before and after the threshold are zero, and the rest is determined by the child kernels.
ANSWER
Answered 2021-Mar-31 at 14:11This is not a GPflow issue but a subtlety of TensorFlow's eager vs. graph mode: in eager mode (the default behaviour when you interact with tensors "manually", as when calling the kernel), K_pre.shape works just as expected. In graph mode (which is what happens when you wrap code in tf.function()), this does not always work (e.g. the shape might depend on tf.Variables with None shapes), and you have to use tf.shape(K_pre) instead to obtain the dynamic shape (which depends on the actual values inside the variables). GPflow's Scipy class by default wraps the loss & gradient computation inside tf.function() to speed up optimization. If you explicitly turn this off by passing compile=False to the minimize() call, your code example runs fine. If you replace the .shape attributes with tf.shape() calls to fix it properly, it likewise will run fine.
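The static/dynamic distinction can be seen with a toy tf.function (illustrative only, not from the question's kernel code):

```python
import tensorflow as tf

@tf.function
def dynamic_rows(x):
    # Inside a tf.function, x.shape is the *static* shape and may contain
    # None for an unconstrained dimension; tf.shape(x) is the *dynamic*
    # shape, computed from the actual values at run time
    return tf.shape(x)[0]

# A variable with an unconstrained leading dimension
x = tf.Variable([[1.0], [2.0]], shape=tf.TensorShape([None, 1]))

static_dim = x.shape[0]          # None: unknown at trace time
runtime_dim = int(dynamic_rows(x))  # 2: known from the actual values
```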
QUESTION
I am trying to profile my code in GPflow 2, as I need to know which part of my code consumes the most CPU time. In GPflow 1 there was a gpflowrc file where you could set dump_timeline = True, but this has changed in GPflow 2 (to the gpflow.config module) and I can't find a similar option there.
ANSWER
Answered 2021-Feb-22 at 11:54Working with TensorFlow 2 is much simpler than TensorFlow 1, so with GPflow 2 we're relying a lot more on TensorFlow built-ins instead of adding extra code - GPflow 2 is "just another TensorFlow graph". So you should be able to directly use the TensorFlow profiler: see this blog post for an introduction and the guide in the TensorFlow documentation for more details.
(According to https://github.com/tensorflow/tensorboard/issues/2874, TensorBoard's "Trace Viewer" for the timeline should now work fine in Firefox, but if you encounter any issues on the visualization side it's worth trying out Chrome.)
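A sketch of using the TensorFlow profiler directly (the log directory and the profiled computation here are placeholders for your own GPflow code):

```python
import tensorflow as tf

logdir = "/tmp/tf_profile_demo"  # hypothetical log directory

tf.profiler.experimental.start(logdir)
# ... run the GPflow / TensorFlow computation you want to profile ...
result = tf.reduce_sum(
    tf.linalg.matmul(tf.random.normal([500, 500]), tf.random.normal([500, 500]))
)
tf.profiler.experimental.stop()

# Inspect the trace with: tensorboard --logdir /tmp/tf_profile_demo (Profile tab)
```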
QUESTION
I am supplying different minibatches to optimize a GPflow model (SVGP). If I decorate the optimization_step with tf.function, I get the following error:
NotImplementedError: Cannot convert a symbolic Tensor (concat:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
In order for the optimizer to run I had to remove the tf.function decorator, losing the speed-up advantages. What do I need to change so that I can keep using the tf.function decorator? The xAndY input shapes and types are all numpy arrays.
ANSWER
Answered 2021-Feb-14 at 18:01GPflow's gpflow.optimizers.Scipy() is a wrapper around Scipy's minimize(), and as it calls into non-TensorFlow operations, you cannot wrap it in tf.function. Moreover, the optimizers implemented in Scipy's minimize are second-order methods that assume your gradients are not stochastic, and they are not compatible with minibatching.
If you want to do full-batch optimization with Scipy: the minimize() method of gpflow.optimizers.Scipy(), by default, does wrap the objective and gradient computation inside tf.function (see its compile argument, which defaults to True). It also runs the full optimization, so you only have to call the minimize() method once (by default it runs until convergence or failure to continue; you can supply a maximum number of iterations using the options=dict(maxiter=1000) argument).
If you want to use mini-batching: simply use one of the TensorFlow optimizers, such as tf.optimizers.Adam(), and then your code should run fine with the @tf.function decorator on your optimization_step() function (and in that case you do need to call it in a loop, as in your example).
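A pure-TensorFlow sketch of that pattern (the toy least-squares loss is a stand-in; with an SVGP you would instead compute model.training_loss(batch) inside the tape):

```python
import tensorflow as tf

opt = tf.optimizers.Adam(learning_rate=0.1)
w = tf.Variable(0.0)

@tf.function  # safe to compile: Adam is a pure-TensorFlow optimizer
def optimization_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((w * x - y) ** 2)  # toy least-squares loss
    opt.apply_gradients([(tape.gradient(loss, w), w)])
    return loss

# Call the compiled step in a loop, one minibatch at a time
for _ in range(200):
    optimization_step(tf.constant([1.0, 2.0]), tf.constant([2.0, 4.0]))
```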
Community Discussions, Code Snippets contain sources that include Stack Exchange Network