
Gaussian Processes for Regression

Contact: Stefan Kramer

Categories: Prediction

Exposed methods:

Gaussian Processes for Regression
Input: Instances, feature vectors, real-numbered target values
Output: Regression model
Input format: Dependent on implementation, e.g., Weka's ARFF format
Output format: Dependent on implementation, e.g., Weka's ARFF format
User-specified parameters: Kernel (covariance function), e.g., radial basis function (“squared exponential”)
Reporting information: Performance measures (Correlation coefficient, mean absolute error, root mean squared error, relative absolute error, root relative squared error)

Description:

GPR (Gaussian Processes for Regression) is a supervised learning method. A Gaussian process is a generalization
of the Gaussian probability distribution: whereas a probability distribution describes random variables that
are scalars or vectors (for multivariate distributions), a stochastic process governs the properties of functions.
Just as a Gaussian distribution is fully specified by its mean and covariance matrix, a Gaussian process
is specified by a mean and a covariance function. Here, the mean is a function of x (which is often taken to
be the zero function), and the covariance is a function C(x, x′) that expresses the expected covariance between
the values of the function y at the points x and x′. The function y(x) in any one data modeling problem is
assumed to be a single sample from this Gaussian process.
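For concreteness, a brief sketch of the standard GPR equations (following [RAS05]; the symbol names below are
illustrative, not tied to any particular implementation). With a zero mean function and a squared-exponential
covariance, prediction at a new point reduces to linear algebra on the kernel matrix:

    % Squared-exponential ("RBF") covariance function; the signal variance
    % \sigma_f^2 and length-scale \ell are hyperparameters (assumed names):
    C(x, x') = \sigma_f^2 \exp\left( -\frac{\|x - x'\|^2}{2\ell^2} \right)

    % Predictive distribution at a test point x_*, given training inputs
    % x_1, \dots, x_n, targets \mathbf{y}, noise variance \sigma_n^2,
    % kernel matrix K_{ij} = C(x_i, x_j), and (k_*)_i = C(x_i, x_*):
    \bar{y}_* = k_*^\top (K + \sigma_n^2 I)^{-1} \mathbf{y}, \qquad
    \operatorname{Var}[y_*] = C(x_*, x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*

The predictive distribution is itself Gaussian, so GPR yields an uncertainty estimate alongside each prediction.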
Gaussian processes are well-established models for various spatial and temporal problems; for
example, Brownian motion, Langevin processes, and Wiener processes are all Gaussian processes.
Implementations of Gaussian processes are available via various software packages and in most programming
languages, e.g., Weka (Java), R, MATLAB, Python, C, and C++.
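As a usage illustration, a minimal sketch against the Weka implementation (weka.classifiers.functions.GaussianProcesses).
The file name data.arff is a hypothetical placeholder; the class attribute is assumed to be the last one. The
cross-validation summary prints the performance measures listed above (correlation coefficient, MAE, RMSE, RAE, RRSE).

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.functions.GaussianProcesses;
    import weka.classifiers.functions.supportVector.RBFKernel;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class GprExample {
        public static void main(String[] args) throws Exception {
            // Load instances from an ARFF file (hypothetical file name).
            Instances data = DataSource.read("data.arff");
            data.setClassIndex(data.numAttributes() - 1); // real-valued target

            // GPR with an RBF ("squared exponential") covariance function.
            GaussianProcesses gp = new GaussianProcesses();
            gp.setKernel(new RBFKernel());

            // 10-fold cross-validation; the summary reports the
            // performance measures listed above.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(gp, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }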

Bias (instance-selection bias, feature-selection bias, combined instance-selection/feature-selection bias, independence assumptions?, ...)
The chosen covariance function, which encodes the assumptions about the function to be learned, constitutes the model's bias.

Lazy learning/eager learning
Eager learning

Interpretability of models (black box model?, ...)
Depends on the covariance function (kernel).

Type of Descriptor:

Interfaces:

Priority: Low

Development status:

Homepage:

Dependencies:
External components: WEKA


Technical details

Data: No

Software: Yes

Programming language(s): Java

Operating system(s): Linux, Windows, Mac OS

Input format: Dependent on implementation, e.g., Weka's ARFF format

Output format: Dependent on implementation, e.g., Weka's ARFF format

License: GPL


References

[RAS05] Rasmussen, C. E.; Williams, C. K. I. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning); The MIT Press: 2005.
