GPflow | Gaussian processes in TensorFlow | Analytics library

by GPflow · Python · Version: 2.9.1 · License: Apache-2.0

kandi X-RAY | GPflow Summary

GPflow is a Python library typically used in institutional, educational, analytics, deep learning, and TensorFlow applications. GPflow has no reported bugs and no reported vulnerabilities, a build file is available, it carries a permissive license, and it has medium support. You can install it with 'pip install gpflow' or download it from GitHub or PyPI.

GPflow is a package for building Gaussian process models in Python. It implements modern Gaussian process inference for composable kernels and likelihoods. GPflow 2.1 builds on TensorFlow 2.2+ and TensorFlow Probability for running computations, which allows fast execution on GPUs. The online documentation (develop)/(master) contains more details.
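
As a quick illustration of the basic workflow (a minimal sketch with synthetic data, not taken from the kandi page or the GPflow documentation):

import numpy as np
import gpflow

# Toy 1-D regression data; GPflow expects targets of shape (N, 1).
X = np.random.rand(50, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(50, 1)

kernel = gpflow.kernels.SquaredExponential()
model = gpflow.models.GPR(data=(X, Y), kernel=kernel)

# Optimise the hyperparameters with the built-in Scipy wrapper.
opt = gpflow.optimizers.Scipy()
opt.minimize(model.training_loss, model.trainable_variables)

# Predictive mean and variance at new inputs.
Xnew = np.linspace(0, 1, 100).reshape(-1, 1)
mean, var = model.predict_y(Xnew)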

            kandi-support Support

              GPflow has a medium active ecosystem.
              It has 1723 star(s) with 437 fork(s). There are 76 watchers for this library.
There was 1 major release in the last 6 months.
There are 117 open issues and 674 have been closed. On average, issues are closed in 127 days. There are 29 open pull requests and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of GPflow is 2.9.1.

            kandi-Quality Quality

              GPflow has 0 bugs and 0 code smells.

            kandi-Security Security

              GPflow has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              GPflow code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              GPflow is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              GPflow releases are available to install and integrate.
              Deployable package is available in PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              GPflow saves you 8379 person hours of effort in developing the same functionality from scratch.
              It has 23550 lines of code, 1634 functions and 277 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed GPflow and discovered the below as its top functions. This is intended to give you an instant insight into GPflow implemented functionality, and help decide if they suit your requirements.
• Computes the Gauss KL divergence between Gaussian distributions
• Computes a univariate conditional
• Performs a conditional for a given number of samples
• Recomputes a conditional
• Performs natural-gradient steps
• Computes an inter-domain conditional
• Computes the N-diagonal quadrature
• Computes expected eigenvectors
• Performs the CGLB computation
• Returns the upper bound

            GPflow Key Features

            No Key Features are available at this moment for GPflow.

            GPflow Examples and Code Snippets

              .
              ├── BOCPDMPP_Slides.pdf     # Summary of project in slides
              ├── BOCPDMPP_V2_0.pdf       # Full thesis on BOCPDMPP
              ├── BVAR_NIG.py
              ├── BVAR_NIG_DPD.py
              ├── Evaluation_tool.py
              ├── LGCP_test.py
              ├── cp_probability_model.py
              ├── detector.  
GrandPrix, Installation
Jupyter Notebook · Lines of Code: 9 · License: Permissive (Apache-2.0)
            pip install tensorflow
            
            git clone https://github.com/GPflow/GPflow.git
            cd GPflow    
            pip install .
            cd
            
            git clone https://github.com/ManchesterBioinference/GrandPrix
            cd GrandPrix
            python setup.py install
            cd
              
Stochastic Deep Gaussian Processes over Graphs, Prerequisites
Python · Lines of Code: 5 · License: Permissive (MIT)
            python 3.x (3.7 tested)
            conda install tensorflow-gpu==1.15
            pip install keras==2.3.1
            pip install gpflow==1.5
            pip install gpuinfo
              
            GPFlow Multiclass classification with vector inputs causes value error on shape mismatch
Python · Lines of Code: 7 · License: Strong Copyleft (CC BY-SA 4.0)
            lengthscales = [0.1]*num_classes
            variances = [1.0]*num_classes
            kernel = gpflow.kernels.Matern32(variance=variances, lengthscales=lengthscales)
            
            lengthscales = [0.1]*num_of_independent_vars
            kernel = gpflow.kernels.Ma
            GPflow multi-output change-point
Python · Lines of Code: 5 · License: Strong Copyleft (CC BY-SA 4.0)
            k = gpflow.kernels.ChangePoints([k_1, k_2], locations=[0.5], steepness=50.0,
                                            switch_dim=1) # <-- This one!
            
            pip install git+https://github.com/GPflow/GPflow.git@refs/pull/1671/head
How can I transfer parameters from one gpflow model to another to gain similar results?
Python · Lines of Code: 3 · License: Strong Copyleft (CC BY-SA 4.0)
            params = gpflow.utilities.parameter_dict(model)
            gpflow.utilities.multiple_assign(model, params)
            
            How to fix some dimensions of a kernel lengthscale in gpflow?
Python · Lines of Code: 13 · License: Strong Copyleft (CC BY-SA 4.0)
            import gpflow
            import tensorflow as tf
            
            class MyKernel(gpflow.kernels.SquaredExponential):  # or whichever kernel you want
                @property
                def lengthscales(self) -> tf.Tensor:
                    return tf.stack([self.lengthscale_0, self.lengthsca
            for _ in range(iter):
                opt.minimize(model.training_loss, model.trainable_variables)
            
            @tf.function
            def optimization_step():
                opt.minimize(model.training_loss, model.trainable_variables)
            
            for _ in range(iter):
             
Gaussian Process Regression: Mapping an input to a time series
Python · Lines of Code: 13 · License: Strong Copyleft (CC BY-SA 4.0)
            list_1 = [....]
            list_2 = [....]
            ...
            
            k = [-28, 32, ...]
            
            X1 = np.arange(0, len(list_1))
            Y1 = list_1
            
            X2 = np.arange(0, len(list_2))
            Y2 = list_2
            ...
            
Gaussian process regression: wrong prediction
Python · Lines of Code: 10 · License: Strong Copyleft (CC BY-SA 4.0)
            
            k1 = gpflow.kernels.SquaredExponential(lengthscales = [0.1], active_dims=[0])
            k2 = gpflow.kernels.SquaredExponential(lengthscales = [0.1], active_dims=[1])
            
            m = gpflow.models.GPR(data=(X, Y), kernel=k1+k2)
            
            
            m.likelihood.variance.assign(0

            Community Discussions

            QUESTION

            Error in prediction formula for GPflow SGPR?
            Asked 2022-Feb-22 at 22:16

            I believe there is an error in the next-to-last equation in this GPflow documentation page. I provide details here. Can this be right?

            ...

            ANSWER

            Answered 2022-Feb-22 at 22:16

There is a typo in the third-to-last equation on this GPflow documentation page, as shown in this image and further explained here. Using the corrected equation, my previous proof of the last equation on that page simplifies greatly, as shown in this image and further explained here.

            Source https://stackoverflow.com/questions/70753487

            QUESTION

            How can I compile batched training of a gpflow GPR into a tf.function?
            Asked 2021-Oct-27 at 16:18

            I need to train a GPR model in multiple batches per epoch using a custom loss function. I would like to do this using GPflow and I would like to compile my training using tf.function to increase the efficiency. However, gpflow.GPR must be re-instantiated each time you supply new data, so tf.function will have to re-trace each time. This makes the code slower rather than faster.

            This is the initial setup:

            ...

            ANSWER

            Answered 2021-Oct-27 at 15:11

            You don't have to re-instantiate GPR each time. You can construct tf.Variable holders with unconstrained shape and then .assign to them:
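
A rough sketch of that idea (not the original answer's code): it assumes gpflow.models.GPR keeps references to the tf.Variable data holders rather than copying them, and `batches` stands in for your own data iterator.

import numpy as np
import tensorflow as tf
import gpflow

# Data holders whose leading dimension is left unspecified, so a new batch of
# any size can be assigned without triggering a new tf.function trace.
X_var = tf.Variable(np.zeros((0, 1)), shape=(None, 1), dtype=tf.float64)
Y_var = tf.Variable(np.zeros((0, 1)), shape=(None, 1), dtype=tf.float64)

model = gpflow.models.GPR(data=(X_var, Y_var),
                          kernel=gpflow.kernels.SquaredExponential())
opt = tf.optimizers.Adam(0.01)

@tf.function  # traced once; subsequent batches reuse the same graph
def step():
    with tf.GradientTape() as tape:
        loss = model.training_loss()
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for X_batch, Y_batch in batches:  # `batches` is a placeholder for your iterator
    X_var.assign(X_batch)
    Y_var.assign(Y_batch)
    step()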

            Source https://stackoverflow.com/questions/69740026

            QUESTION

            GPFlow Multiclass classification with vector inputs causes value error on shape mismatch
            Asked 2021-Oct-05 at 09:09

            I am trying to follow the Multiclass classification in GPFlow (using v2.1.3) as described here:

            https://gpflow.readthedocs.io/en/master/notebooks/advanced/multiclass_classification.html

The difference from the example is that the X vector is 10-dimensional and the number of classes to predict is 5. But there seems to be an error in dimensionality when using the inducing variables. I changed the kernel and used dummy data for reproducibility, just looking to get this code to run. I included the dimensions of the variables in case that is the issue. Any calculation of the loss causes an error like:

            ...

            ANSWER

            Answered 2021-Oct-05 at 09:09

            When running your example I get a slightly different bug, but the issue is in how you define lengthscales and variances. You write:
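
In line with the snippet shown earlier on this page, the fix is roughly the following (a sketch; `num_input_dims` is a hypothetical name for the dimensionality of X):

import gpflow

num_input_dims = 10  # dimensionality of X, not the number of classes

# ARD kernel: one lengthscale per *input* dimension, and a single scalar variance.
lengthscales = [0.1] * num_input_dims
kernel = gpflow.kernels.Matern32(variance=1.0, lengthscales=lengthscales)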

            Source https://stackoverflow.com/questions/69443821

            QUESTION

            Use of priors on hyper-parameters with SVGP model
            Asked 2021-Aug-30 at 18:18

            I wanted to use priors on hyper-parameters as in (https://gpflow.readthedocs.io/en/develop/notebooks/advanced/mcmc.html) but with an SVGP model.

Following the steps of example 1, I got an error when I ran the run_chain_fn:

TypeError: maximum_log_likelihood_objective() missing 1 required positional argument: 'data'

Unlike GPR or SGPMC, the data are not an attribute of the model; they are passed as an external parameter.

To avoid that problem, I slightly modified the SVGP class to include the data as a parameter (I don't care about mini-batching for now).

            ...

            ANSWER

            Answered 2021-Aug-30 at 18:18

            SVGP is a GPflow model for a variational approximation. Using MCMC on the q(u) distribution parameterised by q_mu and q_sqrt doesn't make sense (if you want to do MCMC on q(u) in a sparse approximation, use SGPMC).

            You can still put (hyper)priors on the hyperparameters in the SVGP model; gradient-based optimisation will then lead to the maximum a-posteriori (MAP) point estimate (as opposed to pure maximum likelihood).
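
A minimal sketch of putting priors on SVGP hyperparameters (toy data and prior choices are assumptions, not from the original answer):

import numpy as np
import gpflow
import tensorflow_probability as tfp

X = np.random.rand(100, 1)
Z = X[::10].copy()  # inducing points

kernel = gpflow.kernels.SquaredExponential()
likelihood = gpflow.likelihoods.Gaussian()
model = gpflow.models.SVGP(kernel, likelihood, inducing_variable=Z)

# Priors go on the hyperparameters, not on q_mu/q_sqrt.
f64 = gpflow.utilities.to_default_float
model.kernel.lengthscales.prior = tfp.distributions.Gamma(f64(1.0), f64(1.0))
model.kernel.variance.prior = tfp.distributions.Gamma(f64(1.0), f64(1.0))

# Gradient-based training then yields a MAP estimate of these hyperparameters.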

            Source https://stackoverflow.com/questions/68984544

            QUESTION

            gpflow classification implementation
            Asked 2021-Jul-20 at 10:02

I want to implement a binary classification model using a Gaussian process. Following the official documentation, I wrote the code below.

The X has 2048 features and Y is either 0 or 1. After optimizing the model, I was trying to evaluate its performance.

However, the predict_y method yields a weird result; the expected pred should have a shape like (n_test_samples, 2), representing the probability of being class 0 or 1. But the shape I got instead is (n_test_samples, n_training_samples).

            What is going wrong?

            ...

            ANSWER

            Answered 2021-Jul-20 at 10:02

I finally figured it out. The reason is that the Y for the VGP model should have shape (n_training_samples, 1) instead of (n_training_samples,).
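
A minimal sketch of that fix (the label array here is a stand-in for the question's Y):

import numpy as np

Y = np.array([0, 1, 1, 0])          # stand-in label vector, shape (4,)
Y = Y.reshape(-1, 1).astype(float)  # GPflow expects column targets, shape (4, 1)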

            Source https://stackoverflow.com/questions/68419128

            QUESTION

            GPflow multi-output change-point
            Asked 2021-Jul-01 at 14:12

            I would like to construct a multi-output GP, whereby the correlation structure between outputs contains a changepoint. The change should only occur in the correlation structure of the Coregion kernel, whereas the kernels themselves (i.e., lengthscale and family of kernel) should remain the same before and after the change.

            Below, I include examples (from the GPflow documentation [1., 2.], and my own [3.]) which:

1. has a correlation structure between outputs, but no changepoints,
2. demonstrates how to construct changepoints in GPflow,
3. is my attempt at a correlation structure between outputs that contains a change point.
            ...

            ANSWER

            Answered 2021-Jul-01 at 14:12

Unfortunately, there is currently no MultiOutput support for ChangePoint kernels in GPflow. In your case, this essentially means that the ChangePoint kernel has no idea on which dimension of your outputs to act, even though the kernels that constitute it have their active_dims parameters set.

I have a Pull Request in the works to implement this functionality that you can find here: https://github.com/GPflow/GPflow/pull/1671

The change proposed in that pull request would simply require you to add a switch_dim flag in your call to the ChangePoint kernel, like so:
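
A sketch in line with the snippet shown earlier on this page; it only runs against the pull-request branch above (switch_dim is not in a released GPflow at the time of the answer), and k_1/k_2 stand in for your per-regime kernels:

import gpflow

k_1 = gpflow.kernels.SquaredExponential()
k_2 = gpflow.kernels.SquaredExponential()

# switch_dim (from the pull request) tells ChangePoints which input dimension
# the change point acts on.
k = gpflow.kernels.ChangePoints([k_1, k_2], locations=[0.5], steepness=50.0,
                                switch_dim=1)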

            Source https://stackoverflow.com/questions/68209521

            QUESTION

How can I transfer parameters from one gpflow model to another to gain similar results?
            Asked 2021-Apr-20 at 15:51

            Suppose I have a trained model

            ...

            ANSWER

            Answered 2021-Apr-20 at 15:51

Yes, the SVGP (as well as VGP) model predictions crucially depend on the q(u) distribution parametrised by model.q_mu and model.q_sqrt. You can transfer all parameters (including those two) using gpflow.utilities.parameter_dict and gpflow.utilities.multiple_assign, as in the snippet shown earlier on this page:
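
A minimal sketch of the transfer (the toy GPR models here are stand-ins for two models with identical structure):

import numpy as np
import gpflow

X = np.random.rand(20, 1)
Y = np.sin(X)

def make_model():
    return gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())

trained_model = make_model()  # imagine this one has already been optimised
new_model = make_model()      # fresh model with the same structure

# Copy every parameter value (for SVGP/VGP this includes q_mu and q_sqrt).
params = gpflow.utilities.parameter_dict(trained_model)
gpflow.utilities.multiple_assign(new_model, params)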

            Source https://stackoverflow.com/questions/67182015

            QUESTION

            GPflow 2 custom kernel construction: fine upon construction, but kernel of size None in optimization
            Asked 2021-Mar-31 at 14:11

I'm creating some GPflow models in which I need the observations pre and post of a threshold x0 to be independent a priori. I could achieve this with plain GP models, or with a ChangePoints kernel with infinite steepness, but neither solution works well with my future extensions in mind (MOGP in particular).

I figured I could easily construct what I want from scratch, so I made a new Combination kernel object, which uses the appropriate child kernel pre- or post-x0. This works as intended when I evaluate the kernel on a set of input points; the expected correlations between points before and after the threshold are zero, and the rest is determined by the child kernels:

            ...

            ANSWER

            Answered 2021-Mar-31 at 14:11

This is not a GPflow issue but a subtlety of TensorFlow's eager vs. graph mode. In eager mode (the default behaviour when you interact with tensors "manually", as in calling the kernel), K_pre.shape works as expected. In graph mode (which is what happens when you wrap code in tf.function()), this does not always work (e.g. the shape might depend on tf.Variables with None shapes), and you have to use tf.shape(K_pre) instead to obtain the dynamic shape (which depends on the actual values inside the variables). GPflow's Scipy class by default wraps the loss-and-gradient computation inside tf.function() to speed up optimization. If you explicitly turn this off by passing compile=False to the minimize() call, your code example runs fine. If you instead replace the .shape attributes with tf.shape() calls to fix it properly, it likewise runs fine.
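
A minimal sketch of the static vs. dynamic shape distinction (not the question's kernel code):

import tensorflow as tf

def count_rows(K: tf.Tensor) -> tf.Tensor:
    # K.shape[0] may be None under graph mode (unknown static shape);
    # tf.shape(K)[0] is a tensor evaluated at run time, so it always works.
    return tf.shape(K)[0]

# Works both eagerly and when compiled into a graph:
count_rows_compiled = tf.function(count_rows)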

            Source https://stackoverflow.com/questions/66889296

            QUESTION

            How do I profile my code in GPflow2? What happened to dump_timeline and gpflowrc?
            Asked 2021-Feb-27 at 15:31

I am trying to profile my code in GPflow 2, as I need to know which part of my code consumes the most CPU time. In GPflow 1 there was a gpflowrc file where you could set dump_timeline = True, but this has changed in GPflow 2 (to the gpflow.config module) and I can't find a similar option there.

            ...

            ANSWER

            Answered 2021-Feb-22 at 11:54

            Working with TensorFlow 2 is much simpler than TensorFlow 1, so with GPflow 2 we're relying a lot more on TensorFlow built-ins instead of adding extra code - GPflow 2 is "just another TensorFlow graph". So you should be able to directly use the TensorFlow profiler: see this blog post for an introduction and the guide in the TensorFlow documentation for more details.

            (According to https://github.com/tensorflow/tensorboard/issues/2874, TensorBoard's "Trace Viewer" for the timeline should now work fine in Firefox, but if you encounter any issues on the visualization side it's worth trying out Chrome.)
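
A minimal sketch of using the TensorFlow profiler around GPflow code (the log directory name is an arbitrary choice, not from the original answer):

import tensorflow as tf

# Profile a block of GPflow/TensorFlow work; the trace can then be viewed in
# TensorBoard's Profile tab.
tf.profiler.experimental.start("logdir")
# ... run the training or prediction code you want to profile here ...
tf.profiler.experimental.stop()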

            Source https://stackoverflow.com/questions/66314977

            QUESTION

            Why is GPflow's Scipy optimizer incompatible with decorating the optimization step with tf.function?
            Asked 2021-Feb-15 at 07:40

            I am supplying different minibatches to optimize a GPflow model (SVGP). If I decorate the optimization_step with tf.function I get the following error:

            NotImplementedError: Cannot convert a symbolic Tensor (concat:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

            In order for the optimizer to run I had to remove the tf.function decorator, losing the speed-up advantages. What do I need to change so that I can keep using the tf.function decorator?

The xAndY inputs are plain numpy arrays.

            ...

            ANSWER

            Answered 2021-Feb-14 at 18:01

            GPflow's gpflow.optimizers.Scipy() is a wrapper around Scipy's minimize(), and as it calls into non-TensorFlow operations, you cannot wrap it in tf.function. Moreover, the optimizers implemented in Scipy's minimize are second-order methods that assume that your gradients are not stochastic, and aren't compatible with minibatching.

            If you want to do full-batch optimization with Scipy: The minimize() method of gpflow.optimizers.Scipy(), by default, does wrap the objective and gradient computation inside tf.function (see its compile argument with default True). It also does the full optimization, so you only have to call the minimize() method once (by default it runs until convergence or failure to continue optimization; you can supply a maximum number of iterations using the options=dict(maxiter=1000) argument).

            If you want to use mini-batching: simply use one of the TensorFlow optimizers, such as tf.optimizers.Adam(), and then your code should run fine including the @tf.function decorator on your optimization_step() function (and in that case you do need to call it in a loop as in your example).
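
A sketch of that mini-batching route, assuming `model` is the SVGP instance from the question and `dataset` is a tf.data.Dataset yielding (X_batch, Y_batch) minibatches:

import tensorflow as tf

opt = tf.optimizers.Adam(learning_rate=0.01)

@tf.function
def optimization_step(X_batch, Y_batch):
    # SVGP takes the data as an argument to training_loss, so each minibatch
    # can be passed in without rebuilding the model.
    with tf.GradientTape() as tape:
        loss = model.training_loss((X_batch, Y_batch))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for epoch in range(100):
    for X_batch, Y_batch in dataset:
        optimization_step(X_batch, Y_batch)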

            Source https://stackoverflow.com/questions/66191633

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install GPflow

There is an "Intro to GPflow 2.0" Jupyter notebook; check it out for details. To convert your code from GPflow 1, see the GPflow 2 upgrade guide.

            Support

Bugs, feature requests, pain points, annoying design quirks, etc.: Please use GitHub issues to flag up bugs/issues/pain points, suggest new features, and discuss anything else related to the use of GPflow that in some sense involves changing the GPflow code itself. You can make use of labels such as bug, discussion, feature, and feedback. We positively welcome comments or concerns about usability, and suggestions for changes at any level of design. We aim to respond to issues promptly, but if you believe we may have forgotten about an issue, please feel free to add another comment to remind us.

"How-to-use" questions: Please use Stack Overflow (gpflow tag) to ask questions that relate to "how to use GPflow", i.e. questions of understanding rather than issues that require changing GPflow code. If you are unsure where to ask, you are always welcome to open a GitHub issue; we may then ask you to move your question to Stack Overflow.
            Install
          • PyPI

            pip install gpflow

          • CLONE
          • HTTPS

            https://github.com/GPflow/GPflow.git

          • CLI

            gh repo clone GPflow/GPflow

• SSH

            git@github.com:GPflow/GPflow.git
