API Library


Module

Model Types

AugmentedGaussianProcesses.GPType
GP(args...; kwargs...)

Gaussian Process

Arguments

  • X : input features, should be a matrix N×D where N is the number of observations and D the number of dimensions
  • y : input labels, can be either a vector of labels for multiclass and single-output models, or a matrix for multi-output models (note that only one likelihood can be applied)
  • kernel : covariance function, can be either a single kernel or a collection of kernels for multiclass and multi-output models

Keyword arguments

  • noise : Variance of the likelihood
  • opt_noise : Flag for optimizing the variance using the formula σ² = ∑(y - f)²/N
  • mean : PriorMean object for the prior mean, see the documentation on Prior Means
  • verbose : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • IndependentPriors : Flag for setting independent or shared parameters among latent GPs
  • atfrequency : Number of variational-parameter updates between each hyperparameter optimization
source
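
A minimal usage sketch for regression, assuming SqExponentialKernel from KernelFunctions.jl and following the train! signature documented further below:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 3)                          # 100 observations, 3 features
    y = sin.(X[:, 1]) .+ 0.1 .* randn(100)    # noisy scalar targets
    model = GP(X, y, SqExponentialKernel(); noise=0.01, opt_noise=true)
    train!(model; iterations=50)
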
AugmentedGaussianProcesses.VGPType
VGP(args...; kwargs...)

Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table

Keyword arguments

  • verbose : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
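
A sketch for binary classification, assuming LogisticLikelihood (exported by the package) and AnalyticVI from the Inference Types below:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(200, 2)            # 200 observations, 2 features (obsdim=1)
    y = rand([-1, 1], 200)      # binary labels
    model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), AnalyticVI())
    train!(model; iterations=100)
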
AugmentedGaussianProcesses.MCGPType
MCGP(args...; kwargs...)

Monte-Carlo Gaussian Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, at the moment only GibbsSampling is available (see the Compatibility Table)

Keyword arguments

  • verbose::Int : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
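
A sampling sketch. The sample call is an assumption based on the package's exported API (its docstring is not reproduced on this page):

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 2)
    y = rand([-1, 1], 100)
    model = MCGP(X, y, SqExponentialKernel(), LogisticLikelihood(), GibbsSampling(nBurnin=100))
    samples = sample(model, 1000)    # draw 1000 samples from the posterior
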
AugmentedGaussianProcesses.SVGPType
SVGP(args...; kwargs...)

Sparse Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • nInducingPoints/Z : Number of inducing points, or an AbstractVector of inducing point locations

Keyword arguments

  • verbose : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • Zoptimiser : Optimiser for the inducing point locations
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
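
A sketch combining sparse inducing points with stochastic inference (AnalyticSVI is documented under Inference Types; the inducing points are given as a count here):

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(5_000, 2)
    y = rand([-1, 1], 5_000)
    # 50 inducing points, minibatches of 100 samples
    model = SVGP(X, y, SqExponentialKernel(), LogisticLikelihood(), AnalyticSVI(100), 50)
    train!(model; iterations=1_000)
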
AugmentedGaussianProcesses.OnlineSVGPType
OnlineSVGP(args...; kwargs...)

Online Sparse Variational Gaussian Process

Arguments

  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • Zalg : Algorithm used to select the inducing points

Keyword arguments

  • verbose : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • Zoptimiser : Optimiser for the inducing point locations
  • T::DataType=Float64 : Hint for the type of the incoming data
source
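
A hedged streaming sketch. OIPS is assumed to be an inducing-point selection algorithm from InducingPoints.jl, and data_stream stands for any user-supplied iterator of minibatches:

    using AugmentedGaussianProcesses, KernelFunctions, InducingPoints

    model = OnlineSVGP(SqExponentialKernel(), LogisticLikelihood(), AnalyticVI(), OIPS(0.8))
    for (X_batch, y_batch) in data_stream    # hypothetical minibatch iterator
        train!(model, X_batch, y_batch; iterations=5)
    end
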
AugmentedGaussianProcesses.MOVGPType
MOVGP(args...; kwargs...)

Multi-Output Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector{<:AbstractVector} : Output labels, each vector corresponds to one output dimension
  • kernel::Union{Kernel,AbstractVector{<:Kernel}} : Covariance function or vector of covariance functions; either a single kernel shared across outputs or a collection of kernels for multiclass and multi-output models
  • likelihood::Union{AbstractLikelihood,Vector{<:Likelihood}} : Likelihood or vector of likelihoods of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, for compatibilities see the Compatibility Table
  • nLatent::Int : Number of latent GPs

Keyword arguments

  • verbose::Int : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • Aoptimiser : Optimiser used for the mixing parameters
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
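
A minimal multi-output sketch, assuming a shared kernel and likelihood across two outputs and three latent GPs:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(150, 2)
    ys = [randn(150), randn(150)]    # two output dimensions
    model = MOVGP(X, ys, SqExponentialKernel(), GaussianLikelihood(), AnalyticVI(), 3)
    train!(model; iterations=100)
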
AugmentedGaussianProcesses.MOSVGPType
MOSVGP(args...; kwargs...)

Multi-Output Sparse Variational Gaussian Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector{<:AbstractVector} : Output labels, each vector corresponds to one output dimension
  • kernel::Union{Kernel,AbstractVector{<:Kernel}} : Covariance function or vector of covariance functions; either a single kernel shared across outputs or a collection of kernels for multiclass and multi-output models
  • likelihood::Union{AbstractLikelihood,Vector{<:Likelihood}} : Likelihood or vector of likelihoods of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, for compatibilities see the Compatibility Table
  • nLatent::Int : Number of latent GPs
  • nInducingPoints : Number of inducing points, or collection of inducing point locations

Keyword arguments

  • verbose::Int : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • Zoptimiser : Optimiser used for the inducing point locations
  • Aoptimiser : Optimiser used for the mixing parameters
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
AugmentedGaussianProcesses.VStPType
VStP(args...; kwargs...)

Variational Student-T Process

Arguments

  • X::AbstractArray : Input features, if X is a matrix the choice of colwise/rowwise is given by the obsdim keyword
  • y::AbstractVector : Output labels
  • kernel::Kernel : Covariance function, can be any kernel from KernelFunctions.jl
  • likelihood : Likelihood of the model. For compatibilities, see Likelihood Types
  • inference : Inference for the model, see the Compatibility Table
  • ν::Real : Number of degrees of freedom

Keyword arguments

  • verbose : How much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
  • optimiser : Optimiser used for the kernel parameters. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is ADAM(0.001)
  • atfrequency::Int=1 : Number of variational-parameter updates between each hyperparameter optimization
  • mean=ZeroMean() : PriorMean object, see the documentation on Prior Means
  • obsdim::Int=1 : Dimension of the data. 1: X ∈ D×N, 2: X ∈ N×D
source
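
A sketch of Student-T process regression with ν = 3 degrees of freedom, assuming the same Gaussian observation model as in the GP example above:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 2)
    y = randn(100)
    model = VStP(X, y, SqExponentialKernel(), GaussianLikelihood(), AnalyticVI(), 3.0)
    train!(model; iterations=100)
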

Likelihood Types

AugmentedGaussianProcesses.GaussianLikelihoodType
GaussianLikelihood(σ²::T=1e-3) # σ² is the variance

Gaussian noise :

\[ p(y|f) = N(y|f,σ²)\]

No augmentation is needed for this likelihood, as it is already conjugate to a Gaussian prior.

source
AugmentedGaussianProcesses.StudentTLikelihoodType
StudentTLikelihood(ν::T, σ::Real=one(T))

Arguments

  • ν::Real : degrees of freedom of the student-T
  • σ::Real : standard deviation of the local scale

Student-t likelihood for regression:

\[ p(y|f,ν,σ) = \frac{Γ\left(\frac{ν+1}{2}\right)}{\sqrt{νπ}\,σ\,Γ\left(\frac{ν}{2}\right)} \left(1 + \frac{(y-f)^2}{σ^2 ν}\right)^{-\frac{ν+1}{2}}\]

ν is the number of degrees of freedom and σ is the standard deviation of the local scale of the data.


For the analytical solution, it is augmented via:

\[ p(y|f,ω) = N(y|f,σ^2 ω)\]

where ω ~ IG(0.5ν, 0.5ν), with IG the Inverse-Gamma distribution. See the paper Robust Gaussian Process Regression with a Student-t Likelihood.

source
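
For robust regression, the augmented likelihood pairs with AnalyticVI (see the Inference Types below); a minimal sketch:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 1)
    y = randn(100)                   # targets possibly containing outliers
    model = VGP(X, y, SqExponentialKernel(), StudentTLikelihood(3.0), AnalyticVI())
    train!(model; iterations=100)
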
AugmentedGaussianProcesses.HeteroscedasticLikelihoodType
HeteroscedasticLikelihood(λ::T=1.0)

Arguments

  • λ::Real : The maximum precision possible (this is optimized during training)

Gaussian with heteroscedastic noise given by another gp:

\[ p(y|f,g) = N(y|f,(λ σ(g))⁻¹)\]

Where σ is the logistic function

The augmentation is not trivial and will be described in a future paper

source
AugmentedGaussianProcesses.BayesianSVMType
BayesianSVM()

The Bayesian SVM is a Bayesian interpretation of the classical SVM.

\[ p(y|f) ∝ \exp(-2\max(1-yf, 0))\]

For the analytic version, the likelihood is augmented via:

\[ p(y|f,ω) = \frac{1}{\sqrt{2πω}} \exp\left(-\frac{(1+ω-yf)^2}{2ω}\right)\]

where ω has an improper prior 𝟙[0,∞) (its posterior is however a valid distribution, a Generalized Inverse Gaussian). For reference see this paper

source
AugmentedGaussianProcesses.LogisticSoftMaxLikelihoodType
LogisticSoftMaxLikelihood(num_class::Int)

Arguments

  • num_class::Int : Total number of classes

The multiclass likelihood with a logistic-softmax mapping:

\[ p(y=i|\{f_k\}_{k=1}^K) = \frac{σ(f_i)}{\sum_{k=1}^K σ(f_k)}\]

where σ is the logistic function. This likelihood has the same properties as softmax.

For the analytical version, the likelihood is augmented multiple times. More details can be found in the paper Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation

source
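
A hedged multiclass sketch with three classes:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(300, 2)
    y = rand(1:3, 300)               # labels from 3 classes
    model = VGP(X, y, SqExponentialKernel(), LogisticSoftMaxLikelihood(3), AnalyticVI())
    train!(model; iterations=100)
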
AugmentedGaussianProcesses.PoissonLikelihoodType
PoissonLikelihood(λ::Real=1.0)

Arguments

  • λ::Real : Poisson rate

Poisson likelihood, where a Poisson distribution is defined at every point in space (note that this differs from a continuous Poisson process)

\[ p(y|f) = Poisson(y|\lambda \sigma(f))\]

Where σ is the logistic function. Augmentation details will be released at some point (open an issue if you want to see them)

source

Inference Types

AugmentedGaussianProcesses.AnalyticVIType
AnalyticVI(;ϵ::T=1e-5)

Variational Inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). All data is used at each iteration (use AnalyticSVI for minibatch updates).

Keyword arguments

  • ϵ::Real : convergence criterion
source
AugmentedGaussianProcesses.AnalyticSVIFunction
AnalyticSVI(nMinibatch::Int; ϵ::T=1e-5, optimiser=RobbinsMonro())

Stochastic Variational Inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). See AnalyticVI for reference.

Arguments

  • nMinibatch::Integer : Number of samples per minibatch

Keyword arguments

  • ϵ::T : convergence criterion
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is RobbinsMonro() (learning rate ρ = (τ + iter)^(-κ))
source
AugmentedGaussianProcesses.GibbsSamplingType
GibbsSampling(;ϵ::T=1e-5, nBurnin::Int=100, thinning::Int=1)

Draw samples from the true posterior via Gibbs Sampling.

Keyword arguments

  • ϵ::T : convergence criterion
  • nBurnin::Int : Number of samples discarded before saving begins
  • thinning::Int : Frequency at which samples are saved (1 keeps every sample)
source
AugmentedGaussianProcesses.QuadratureVIType
QuadratureVI(;ϵ::T=1e-5, nGaussHermite::Integer=20, clipping=Inf, natural::Bool=true, optimiser=Momentum(0.0001))

Variational Inference solver approximating the gradients via numerical integration (Gauss-Hermite quadrature)

Keyword arguments

  • ϵ::T : convergence criterion
  • nGaussHermite::Int : Number of points for the integral estimation
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is Momentum(0.0001)
source
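
As a sketch of this non-augmented path, the Student-t regression from the Likelihood Types section can be run through quadrature instead of the analytic updates (assuming both are compatible per the Compatibility Table):

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 1)
    y = randn(100)
    model = VGP(X, y, SqExponentialKernel(), StudentTLikelihood(3.0), QuadratureVI())
    train!(model; iterations=100)
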
AugmentedGaussianProcesses.QuadratureSVIFunction
QuadratureSVI(nMinibatch::Int; ϵ::T=1e-5, nGaussHermite::Int=20, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic Variational Inference solver approximating the gradients via numerical integration (Gauss-Hermite quadrature). See QuadratureVI for a more detailed reference.

Arguments

  • nMinibatch::Integer : Number of samples per minibatch

Keyword arguments

  • ϵ::T : convergence criterion, which can be user defined
  • nGaussHermite::Int : Number of points for the integral estimation (see QuadratureVI)
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is Momentum(0.0001)
source
AugmentedGaussianProcesses.MCIntegrationVIType
MCIntegrationVI(;ϵ::T=1e-5, nMC::Integer=1000, clipping::Real=Inf, natural::Bool=true, optimiser=Momentum(0.001))

Variational Inference solver approximating the gradients via MC Integration: the expectation E[log p(y|f)] and its gradients are computed by sampling from q(f).

Keyword arguments

  • ϵ::Real : convergence criterion, which can be user defined
  • nMC::Int : Number of samples per data point for the integral evaluation
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is Momentum(0.001)
source
AugmentedGaussianProcesses.MCIntegrationSVIFunction
MCIntegrationSVI(nMinibatch::Int; ϵ::Real=1e-5, nMC::Integer=1000, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic Variational Inference solver approximating the gradients via Monte Carlo integration when using minibatches. See MCIntegrationVI for more details.

Arguments

  • nMinibatch::Integer : Number of samples per minibatch

Keyword arguments

  • ϵ::T : convergence criterion, which can be user defined
  • nMC::Int : Number of samples per data point for the integral evaluation
  • clipping::Real : Limit on the gradient values to avoid overshooting
  • natural::Bool : Use natural gradients
  • optimiser : Optimiser used for the variational updates. Should be an Optimiser object from the Flux.jl library, see the list at Optimisers. Default is Momentum(0.0001)
source

Functions and methods

AugmentedGaussianProcesses.train!Function
train!(model::AbstractGP; iterations::Integer=100, callback, convergence)

Function to train the given GP model.

Arguments

  • model : AbstractGP model with either an Analytic, AnalyticVI or NumericalVI type of inference

Keyword Arguments

  • iterations::Int : Number of iterations (not necessarily epochs!) for training
  • callback::Function=nothing : Callback function called at every iteration. Should be of type function(model,iter) ... end
  • convergence::Function=nothing : Convergence function to be called every iteration, should return a scalar and take the same arguments as callback
source
train!(model::AbstractGP, X::AbstractMatrix, y::AbstractVector; obsdim=1, iterations::Int=10, callback=nothing, conv=0)
train!(model::AbstractGP, X::AbstractVector, y::AbstractVector; iterations::Int=20, callback=nothing, conv=0)

Function to train the given GP model.

Keyword Arguments

  • iterations::Int : Number of iterations (not necessarily epochs!) for training
  • callback::Function : Callback function called at every iteration. Should be of type function(model,iter) ... end
  • conv::Function : Convergence function to be called every iteration, should return a scalar and take the same arguments as callback
source
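
A sketch of the callback mechanism, where logcb is a hypothetical user-defined function following the function(model, iter) ... end shape documented above:

    # print the current iteration every 10 steps
    logcb(model, iter) = iter % 10 == 0 && @info "iteration $iter"
    train!(model; iterations=200, callback=logcb)
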

AugmentedGaussianProcesses.predict_fFunction
predict_f(m::AbstractGP, X_test, cov::Bool=true, diag::Bool=true)

Compute the mean of the predicted latent distribution of f on X_test for the variational GP model.

Also return the diagonal variance if cov=true, and the full covariance if diag=false.

source
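
With the defaults shown in the signature above, the call returns the latent mean together with the diagonal variance; a minimal sketch:

    X_test = rand(20, 2)
    μ, σ² = predict_f(model, X_test)    # mean and diagonal variance of the latent f
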
AugmentedGaussianProcesses.predict_yFunction
predict_y(model::AbstractGP, X_test::AbstractVector)
predict_y(model::AbstractGP, X_test::AbstractMatrix; obsdim = 1)

Return:

  • the predictive mean of X_test for regression
  • 0 or 1 for classification
  • the most likely class for multi-class classification
  • the expected number of events for an event likelihood

source
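
A usage sketch (for the multiclass model above, this returns the most likely class per test point):

    X_test = rand(20, 2)
    ŷ = predict_y(model, X_test)
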

Prior Means

AugmentedGaussianProcesses.EmpiricalMeanType
EmpiricalMean(c::AbstractVector{<:Real}=1.0; opt=ADAM(0.01))

Arguments

  • c::AbstractVector : Empirical mean vector

Construct an empirical mean with values c. Optionally give an optimiser opt (ADAM(0.01) by default).

source
AugmentedGaussianProcesses.AffineMeanType
AffineMean(w::Vector, b::Real; opt = ADAM(0.01))
AffineMean(dims::Int; opt=ADAM(0.01))

Arguments

  • w::Vector : Weight vector
  • b::Real : Bias
  • dims::Int : Number of features per vector

Construct an affine operation on X : μ₀(X) = X * w + b, where w is a vector and b a scalar. Optionally give an optimiser opt (ADAM(0.01) by default).

source
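
A sketch of plugging a prior mean into a model via the mean keyword documented in the Model Types above:

    using AugmentedGaussianProcesses, KernelFunctions

    X = rand(100, 2)
    y = randn(100)
    μ₀ = AffineMean(2)               # learnable w ∈ ℝ² and bias b for 2 features
    model = VGP(X, y, SqExponentialKernel(), GaussianLikelihood(), AnalyticVI(); mean=μ₀)
    train!(model; iterations=100)
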
