API Library


Module

Model Types

AugmentedGaussianProcesses.GP — Type
GP(args...; kwargs...)

Gaussian Process

Arguments

  • X : input features, should be a matrix N×D where N is the number of observations and D the number of dimensions
  • y : input labels, can be either a vector of labels for single-output and multiclass models, or a matrix for multi-output models (note that only one likelihood can be applied)
  • kernel : covariance function, can be either a single kernel or a collection of kernels for multiclass and multi-output models

Keyword arguments

  • noise : Variance of the Gaussian likelihood (observation noise)
  • opt_noise : Flag for optimizing the noise variance with the closed-form update σ² = Σᵢ(yᵢ - fᵢ)²/N
  • verbose : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • IndependentPriors : Flag for using independent or shared hyperparameters among the latent GPs
  • atfrequency : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean : PriorMean object used as the prior mean, see the Prior Means section
source
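A minimal usage sketch based on the signature above; the data are synthetic and the kernel (SqExponentialKernel from KernelFunctions.jl, re-exported by the package) and hyperparameters are arbitrary choices:

using AugmentedGaussianProcesses

# Synthetic regression data: 100 observations with 2 features (N×D layout)
X = rand(100, 2)
y = sin.(2π .* X[:, 1]) .+ 0.1 .* randn(100)

# Exact GP regression; the noise variance is optimized alongside the kernel parameters
model = GP(X, y, SqExponentialKernel(); noise=0.01, opt_noise=true)
train!(model; iterations=50)
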
AugmentedGaussianProcesses.VGP — Type
VGP(args...; kwargs...)

Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features; if X is a matrix, the row-wise/column-wise layout is controlled by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table

Keyword arguments

  • verbose : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source
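A sketch of how the pieces fit together for binary classification, combining VGP with the LogisticLikelihood and AnalyticVI documented below; the data and settings are placeholders:

using AugmentedGaussianProcesses

# Toy binary classification data with labels in {-1, 1}
X = rand(200, 2)
y = sign.(X[:, 1] .- 0.5)

# Augmented variational inference keeps all updates in closed form
model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), AnalyticVI())
train!(model; iterations=100)

predict_y(model, X)   # most likely class
proba_y(model, X)     # probability of the positive class
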
AugmentedGaussianProcesses.MCGP — Type
MCGP(args...; kwargs...)

Monte-Carlo Gaussian Process

Arguments

  • X::AbstractArray : Input features; if X is a matrix, the row-wise/column-wise layout is controlled by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model; at the moment only GibbsSampling is available (see the Compatibility Table)

Keyword arguments

  • verbose::Int : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source
AugmentedGaussianProcesses.SVGP — Type
SVGP(args...; kwargs...)

Sparse Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features; if X is a matrix, the row-wise/column-wise layout is controlled by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • nInducingPoints/Z : Number of inducing points, or an AbstractVector of inducing-point locations

Keyword arguments

  • verbose : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • Zoptimiser : Optimiser for the inducing-point locations
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source
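A sparse-model sketch on a larger synthetic dataset, using 20 inducing points and mini-batches of 100 points through AnalyticSVI (see Inference Types); sizes and hyperparameters are arbitrary:

using AugmentedGaussianProcesses

# Larger synthetic regression problem
X = rand(2000, 3)
y = X * [1.0, -2.0, 0.5] .+ 0.1 .* randn(2000)

# 20 inducing points, stochastic updates on mini-batches of 100 samples
model = SVGP(X, y, SqExponentialKernel(), GaussianLikelihood(), AnalyticSVI(100), 20)
train!(model; iterations=500)
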
AugmentedGaussianProcesses.OnlineSVGP — Type
OnlineSVGP(args...; kwargs...)

Online Sparse Variational Gaussian Process

Arguments

  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • Zalg : Algorithm used to select the inducing points

Keyword arguments

  • verbose : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • Zoptimiser : Optimiser for the inducing-point locations
  • T::DataType=Float64 : Hint for the numeric type of the data
source
AugmentedGaussianProcesses.MOVGP — Type
MOVGP(args...; kwargs...)

Multi-Output Variational Gaussian Process

Arguments

  • X::AbstractVector : Input features; if X is a matrix, the row-wise/column-wise layout is controlled by the obsdim keyword
  • y::AbstractVector{<:AbstractVector} : Output labels, where each vector corresponds to one output dimension
  • kernel::Union{Kernel,AbstractVector{<:Kernel}} : Covariance function or vector of covariance functions, either a single kernel or a collection of kernels for multiclass and multi-output models
  • likelihood::Union{AbstractLikelihood,Vector{<:Likelihood}} : Likelihood or vector of likelihoods of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, for compatibilities see the Compatibility Table
  • num_latent::Int : Number of latent GPs

Keyword arguments

  • verbose::Int : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Optimisers.jl library. Default is ADAM(0.001)
  • Aoptimiser : Optimiser used for the mixing parameters
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source
AugmentedGaussianProcesses.MOSVGP — Type
MOSVGP(args...; kwargs...)

Multi-Output Sparse Variational Gaussian Process

Arguments

  • kernel::Union{Kernel,AbstractVector{<:Kernel}} : Covariance function or vector of covariance functions, either a single kernel or a collection of kernels for multiclass and multi-output models
  • likelihoods::Union{AbstractLikelihood,Vector{<:Likelihood}} : Likelihood or vector of likelihoods of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, for compatibilities see the Compatibility Table
  • nLatent::Int : Number of latent GPs
  • nInducingPoints : Number of inducing points, or a collection of inducing-point locations

Keyword arguments

  • verbose::Int : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Optimisers.jl library. Default is ADAM(0.001)
  • Zoptimiser : Optimiser used for the inducing-point locations
  • Aoptimiser : Optimiser used for the mixing parameters
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source
AugmentedGaussianProcesses.VStP — Type
VStP(args...; kwargs...)

Variational Student-T Process

Arguments

  • X::AbstractArray : Input features; if X is a matrix, the row-wise/column-wise layout is controlled by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatible likelihoods, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • ν::Real : Number of degrees of freedom

Keyword arguments

  • verbose : Verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between two hyperparameter optimization steps
  • mean=ZeroMean() : PriorMean object used as the prior mean, see the Prior Means section
  • obsdim::Int=1 : Layout of the data. 1: X ∈ N×D (one observation per row), 2: X ∈ D×N (one observation per column)
source

Likelihood Types

AugmentedGaussianProcesses.GaussianLikelihood — Type
GaussianLikelihood(σ²::T=1e-3) # σ² is the variance of the noise

Gaussian noise:

\[ p(y|f) = N(y|f,\sigma^2)\]

There is no augmentation needed for this likelihood which is already conjugate to a Gaussian prior.

source
AugmentedGaussianProcesses.StudentTLikelihood — Type
StudentTLikelihood(ν::T, σ::Real=one(T))

Arguments

  • ν::Real : degrees of freedom of the student-T
  • σ::Real : standard deviation of the local scale

Student-t likelihood for regression:

\[ p(y|f,\nu,\sigma) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\sigma\,\Gamma\left(\frac{\nu}{2}\right)} \left(1+\frac{(y-f)^2}{\sigma^2\nu}\right)^{-\frac{\nu+1}{2}},\]

where ν is the number of degrees of freedom and σ is the standard deviation of the local scale of the data.


For the analytical solution, the likelihood is augmented via:

\[ p(y|f,\omega) = N(y|f,\sigma^2 \omega),\]

where $\omega \sim \mathcal{IG}(\frac{\nu}{2},\frac{\nu}{2})$ and $\mathcal{IG}$ is the inverse-gamma distribution. See the paper Robust Gaussian Process Regression with a Student-t Likelihood.

source
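For example, a regression with a few gross outliers can be handled by pairing this likelihood with a VGP; a sketch with synthetic data and arbitrary hyperparameters:

using AugmentedGaussianProcesses

X = rand(100, 1)
y = sin.(2π .* X[:, 1]) .+ 0.1 .* randn(100)
y[1:5] .+= 5.0    # inject a few gross outliers

# Heavy-tailed Student-t noise (ν = 3) fitted with the augmented analytic updates
model = VGP(X, y, SqExponentialKernel(), StudentTLikelihood(3.0), AnalyticVI())
train!(model; iterations=100)
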
AugmentedGaussianProcesses.LaplaceLikelihood — Type
LaplaceLikelihood(β::T=1.0)  #  Laplace likelihood with scale β

Laplace likelihood for regression:

\[ p(y|f) = \frac{1}{2\beta} \exp\left(-\frac{|y-f|}{\beta}\right)\]

(see the Wikipedia page on the Laplace distribution).

For the analytical solution, it is augmented via:

\[p(y|f,ω) = N(y|f,ω⁻¹)\]

where $\omega \sim \text{Exp}(\omega \mid 1/(2\beta^2))$ and $\text{Exp}$ is the exponential distribution. We use the variational distribution $q(\omega) = \mathcal{GIG}(\omega \mid a,b,p)$, where $\mathcal{GIG}$ is the generalized inverse Gaussian distribution.

source
AugmentedGaussianProcesses.LogisticLikelihood — Function
LogisticLikelihood() -> BernoulliLikelihood

Bernoulli likelihood with a logistic link:

\[ p(y|f) = \sigma(yf) = \frac{1}{1 + \exp(-yf)},\]

(for more information, see the Wikipedia page on the logistic function)


For the analytic version of the likelihood, it is augmented via:

\[ p(y|f,ω) = \exp\left(\frac{1}{2}(yf - (yf)^2 \omega)\right)\]

where $ω \sim \mathcal{PG}(\omega | 1, 0)$, and $\mathcal{PG}$ is the Polya-Gamma distribution. See paper : Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation.

source
AugmentedGaussianProcesses.HeteroscedasticLikelihood — Function
HeteroscedasticLikelihood(λ::T=1.0) -> HeteroscedasticGaussianLikelihood

Arguments

  • λ::Real : The maximum precision possible (this is optimized during training)

Gaussian with heteroscedastic noise given by another gp:

\[ p(y|f,g) = \mathcal{N}(y|f,(\lambda \sigma(g))^{-1})\]

where $\sigma$ is the logistic function.

The augmentation is not trivial and will be described in a future paper.

source
AugmentedGaussianProcesses.BayesianSVM — Function
BayesianSVM() -> BernoulliLikelihood

The Bayesian SVM is a Bayesian interpretation of the classical SVM.

\[p(y|f) \propto \exp(-2 \max(1-yf, 0))\]


For the analytic version of the likelihood, it is augmented via:

\[p(y|f, ω) = \frac{1}{\sqrt{2\pi\omega}} \exp\left(-\frac{(1+\omega-yf)^2}{2\omega}\right)\]

where $\omega$ has an improper flat prior on $[0,\infty)$ (its posterior is nonetheless a proper distribution, a Generalized Inverse Gaussian). For reference, see this paper.

source
AugmentedGaussianProcesses.SoftMaxLikelihood — Function
SoftMaxLikelihood(num_class::Int) -> MultiClassLikelihood

Arguments

  • num_class::Int : Total number of classes

    SoftMaxLikelihood(labels::AbstractVector) -> MultiClassLikelihood

Arguments

  • labels::AbstractVector : List of classes labels

Multiclass likelihood with Softmax transformation:

\[p(y=i|\{f_k\}_{k=1}^K) = \frac{\exp(f_i)}{\sum_{k=1}^K\exp(f_k)}\]

There is no possible augmentation for this likelihood.

source
AugmentedGaussianProcesses.LogisticSoftMaxLikelihood — Function
LogisticSoftMaxLikelihood(num_class::Int) -> MultiClassLikelihood

Arguments

  • num_class::Int : Total number of classes

    LogisticSoftMaxLikelihood(labels::AbstractVector) -> MultiClassLikelihood

Arguments

  • labels::AbstractVector : List of classes labels

The multiclass likelihood with a logistic-softmax mapping:

\[p(y=i|\{f_k\}_{k=1}^{K}) = \frac{\sigma(f_i)}{\sum_{k=1}^K \sigma(f_k)},\]

where $\sigma$ is the logistic function. This likelihood has properties similar to the softmax.

For the analytical version, the likelihood is augmented multiple times. More details can be found in the paper Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation.

source
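A sketch of a three-class problem with this likelihood and the conjugate AnalyticVI solver; the random data and integer labels are placeholders:

using AugmentedGaussianProcesses

X = rand(300, 2)
y = rand(1:3, 300)    # class labels in {1, 2, 3}

model = VGP(X, y, SqExponentialKernel(), LogisticSoftMaxLikelihood(3), AnalyticVI())
train!(model; iterations=100)

predict_y(model, X)   # most likely class for each input
proba_y(model, X)     # per-class probabilities
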
GPLikelihoods.PoissonLikelihood — Type
PoissonLikelihood(λ::Real) -> PoissonLikelihood

Arguments

  • λ::Real : Maximal Poisson rate

Poisson likelihood where a Poisson distribution is defined at every point in space (note that this is different from a continuous Poisson process).

\[ p(y|f) = \text{Poisson}(y|\lambda \sigma(f))\]

where $\sigma$ is the logistic function. Augmentation details will be released at some point (open an issue if you want to see them).

source
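A count-regression sketch with this likelihood; the counts below are random placeholders and the maximal rate is an arbitrary choice:

using AugmentedGaussianProcesses

X = rand(200, 1)
y = rand(0:10, 200)   # placeholder integer counts

# Poisson rate bounded by λ = 10 and modulated by σ(f)
model = VGP(X, y, SqExponentialKernel(), PoissonLikelihood(10.0), AnalyticVI())
train!(model; iterations=100)

predict_y(model, X)   # expected number of events
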
AugmentedGaussianProcesses.NegBinomialLikelihood — Type
NegBinomialLikelihood(r::Real)

Arguments

  • r::Real : Number of failures until the experiment is stopped

Negative Binomial likelihood with number of failures r:

\[ p(y|r, f) = {y + r - 1 \choose y} (1 - \sigma(f))^r \sigma(f)^y,\]

if $r\in \mathbb{N}$ or

\[ p(y|r, f) = \frac{\Gamma(y + r)}{\Gamma(y + 1)\Gamma(r)} (1 - \sigma(f))^r \sigma(f)^y,\]

if $r\in\mathbb{R}$, where $\sigma$ is the logistic function.

Note that this likelihood follows the Wikipedia definition and not the Distributions.jl one.

source

Inference Types

AugmentedGaussianProcesses.AnalyticVI — Type
AnalyticVI(;ϵ::T=1e-5)

Variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). All data is used at each iteration (use AnalyticSVI for mini-batch updates).

Keyword arguments

  • ϵ::Real : Convergence criterion
source
AugmentedGaussianProcesses.AnalyticSVI — Function
AnalyticSVI(nMinibatch::Int; ϵ::T=1e-5, optimiser=RobbinsMonro())

Stochastic variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). See AnalyticVI for reference.

Arguments

  • nMinibatch::Integer : Number of samples per mini-batch

Keyword arguments

  • ϵ::T : Convergence criterion
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is RobbinsMonro() (learning rate ρ = (τ + iter)^(-κ))
source
AugmentedGaussianProcesses.GibbsSampling — Type
GibbsSampling(;ϵ::T=1e-5, nBurnin::Int=100, thinning::Int=1)

Draw samples from the true posterior via Gibbs Sampling.

Keyword arguments

  • ϵ::T : Convergence criterion
  • nBurnin::Int : Number of samples discarded before samples start being saved
  • thinning::Int : Keep only one sample every thinning iterations
source
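A sampling sketch pairing an MCGP classification model with this sampler; the final sample call is an assumed signature, so check the package documentation for the exact form:

using AugmentedGaussianProcesses

X = rand(200, 2)
y = sign.(X[:, 1] .- 0.5)    # binary labels in {-1, 1}

model = MCGP(X, y, SqExponentialKernel(), LogisticLikelihood(),
             GibbsSampling(nBurnin=100, thinning=2))

# draw posterior samples of the latent function (assumed signature)
samples = sample(model, 1000)
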
AugmentedGaussianProcesses.QuadratureVI — Type
QuadratureVI(;ϵ::T=1e-5, nGaussHermite::Integer=20, clipping=Inf, natural::Bool=true, optimiser=Momentum(0.0001))

Variational inference solver that approximates the expected log-likelihood and its gradients by Gauss-Hermite quadrature.

Keyword arguments

  • ϵ::T : Convergence criterion
  • nGaussHermite::Int : Number of points used for the quadrature estimate of the integral
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is Momentum(0.0001)
source
AugmentedGaussianProcesses.QuadratureSVI — Function
QuadratureSVI(nMinibatch::Int; ϵ::T=1e-5, nGaussHermite::Int=20, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic variational inference solver approximating gradients via Gauss-Hermite quadrature on mini-batches. See QuadratureVI for a more detailed reference.

Arguments

  • nMinibatch::Integer : Number of samples per mini-batch

Keyword arguments

  • ϵ::T : Convergence criterion, which can be user defined
  • nGaussHermite::Int : Number of points used for the quadrature estimate of the integral (as in QuadratureVI)
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is Momentum(0.0001)
source
AugmentedGaussianProcesses.MCIntegrationVI — Type
MCIntegrationVI(;ϵ::T=1e-5, nMC::Integer=1000, clipping::Real=Inf, natural::Bool=true, optimiser=Momentum(0.001))

Variational inference solver approximating gradients via Monte Carlo integration: the expectation E[log p(y|f)] and its gradients are estimated by sampling from q(f).

Keyword arguments

  • ϵ::Real : Convergence criterion, which can be user defined
  • nMC::Int : Number of samples per data point for the integral estimate
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is Momentum(0.001)
source
AugmentedGaussianProcesses.MCIntegrationSVI — Function
MCIntegrationSVI(batchsize::Int; ϵ::Real=1e-5, nMC::Integer=1000, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic variational inference solver approximating gradients via Monte Carlo integration on mini-batches. See MCIntegrationVI for more explanations.

Arguments

  • batchsize::Integer : Number of samples per mini-batch

Keyword arguments

  • ϵ::T : Convergence criterion, which can be user defined
  • nMC::Int : Number of samples per data point for the integral estimate
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library (see the Flux.jl optimisers documentation). Default is Momentum(0.0001)
source

Functions and methods

AugmentedGaussianProcesses.train! — Function
train!(model::AbstractGPModel; iterations::Integer=100, callback, convergence)

Function to train the given GP model.

Arguments

  • model : AbstractGPModel model with either an Analytic, AnalyticVI or NumericalVI type of inference

Keyword Arguments

  • iterations::Int : Number of iterations (not necessarily epochs!) for training
  • callback::Function=nothing : Callback function called at every iteration. Should have the form function(model, iter) ... end
  • convergence::Function=nothing : Convergence function called at every iteration; should return a scalar and take the same arguments as callback
source
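A sketch of the callback hook on a model already constructed with its data (see Model Types); here the callback only logs progress every 10 iterations:

# called as callback(model, iter) at every iteration
progress(model, iter) = iter % 10 == 0 && println("iteration $iter")

train!(model; iterations=200, callback=progress)
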
train!(model::AbstractGPModel, X::AbstractMatrix, y::AbstractArray; obsdim = 1, iterations::Int=10,callback=nothing,conv=0)
train!(model::AbstractGPModel, X::AbstractVector, y::AbstractArray; iterations::Int=20,callback=nothing,conv=0)

Function to train the given GP model.

Keyword Arguments

  • iterations::Int : Number of iterations (not necessarily epochs!) for training
  • callback::Function : Callback function called at every iteration. Should have the form function(model, iter) ... end
  • conv::Function : Convergence function called at every iteration; should return a scalar and take the same arguments as callback
source
AugmentedGaussianProcesses.predict_f — Function
predict_f(m::AbstractGPModel, X_test, cov::Bool=true, diag::Bool=true)

Compute the mean of the predicted latent distribution of f on X_test for the variational GP model.

Also return the diagonal variance if cov=true, and the full covariance if diag=false.

source
AugmentedGaussianProcesses.predict_y — Function
predict_y(model::AbstractGPModel, X_test::AbstractVector)
predict_y(model::AbstractGPModel, X_test::AbstractMatrix; obsdim = 1)

Return:

  • the predictive mean of X_test for regression
  • 0 or 1 of X_test for classification
  • the most likely class for multi-class classification
  • the expected number of events for an event likelihood

source
AugmentedGaussianProcesses.proba_y — Function
proba_y(model::AbstractGPModel, X_test::AbstractVector)
proba_y(model::AbstractGPModel, X_test::AbstractMatrix; obsdim = 1)

Return the probability distribution p(y_test | model, X_test):

  • `Tuple{Vector,Vector}` of mean and variance for regression
  • `Vector{<:Real}` of probabilities of y_test = 1 for binary classification
  • `NTuple{K,<:AbstractVector}`, with each element being a vector of probabilities for one class, for multi-class classification
source
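A sketch combining the three prediction functions on a trained model; the test inputs are placeholders in the default row-wise layout, and the returned shapes depend on the likelihood as described above:

X_test = rand(50, 2)

μ, σ² = predict_f(model, X_test)   # latent mean and diagonal variance (defaults cov=true, diag=true)
ŷ = predict_y(model, X_test)       # point prediction: mean, class or expected counts depending on the likelihood
p = proba_y(model, X_test)         # predictive distribution, see the return types above
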

Prior Means

AugmentedGaussianProcesses.EmpiricalMean — Type
EmpiricalMean(c::AbstractVector{<:Real}=1.0; opt=ADAM(0.01))

Arguments

  • c::AbstractVector : Empirical mean vector

Construct an empirical mean with values c. Optionally give an optimiser opt (ADAM(0.01) by default).

source
AugmentedGaussianProcesses.AffineMean — Type
AffineMean(w::Vector, b::Real; opt = ADAM(0.01))
AffineMean(dims::Int; opt=ADAM(0.01))

Arguments

  • w::Vector : Weight vector
  • b::Real : Bias
  • dims::Int : Number of features per vector

Construct an affine operation on X: μ₀(X) = X * w + b, where w is a vector and b a scalar. Optionally give an optimiser opt (ADAM(0.01) by default).

source
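A sketch of passing one of these prior means to a model through the mean keyword documented in the Model Types section; the data and dimensions are arbitrary:

using AugmentedGaussianProcesses

X = rand(100, 2)
y = X * [2.0, -1.0] .+ 0.3 .+ 0.05 .* randn(100)

# learn an affine prior mean μ₀(X) = X * w + b alongside the GP
model = GP(X, y, SqExponentialKernel(); mean=AffineMean(2))
train!(model; iterations=50)
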

Index