A Neural Architecture for Learning Uncertainty
This text presents a novel architecture for binary classification.
The core idea is to combine the Gauss-Helmert model with Fisher's
linear discriminant / least-squares techniques.  The result is a
linear classifier that can exploit prior knowledge about the
uncertainty of the input data points when learning the decision
boundary.  This uncertainty information is typically given as a
covariance matrix, and a different covariance matrix may be used for
each point.
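The paper's estimator is based on the Gauss-Helmert model; as an illustrative stand-in, the following sketch fits a linear boundary by iteratively reweighted least squares, down-weighting points whose covariance projects to a large variance along the current boundary normal.  All names and the weighting scheme are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def fit_uncertain_lsq(X, y, covs, n_iter=10):
    """Fit a linear decision boundary y ~ [X, 1] @ w by iteratively
    reweighted least squares.  Each point i is weighted by the inverse
    of its projected variance w^T Sigma_i w, so uncertain points pull
    less on the boundary.  (Illustrative stand-in for the paper's
    Gauss-Helmert estimator, not the actual algorithm.)"""
    n, d = X.shape
    X_h = np.hstack([X, np.ones((n, 1))])        # homogeneous coordinates
    w = np.linalg.lstsq(X_h, y, rcond=None)[0]   # plain least-squares start
    for _ in range(n_iter):
        # variance of each point's projection onto the current normal
        var = np.array([w[:d] @ S @ w[:d] for S in covs])
        weights = 1.0 / (var + 1e-9)
        W = np.sqrt(weights)[:, None]
        w = np.linalg.lstsq(W * X_h, W[:, 0] * y, rcond=None)[0]
    return w
```

With labels coded as ±1, a new point x is classified by the sign of w applied to its homogeneous coordinates.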
Through error propagation, the system not only uses the uncertainty
information during learning, it also yields uncertainty information
for the learned decision boundary.  This boundary uncertainty can be
used to compute a measure of confidence for the classification of a
new point.
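A minimal sketch of how such a confidence measure and reliability test could look: the signed distance of a point to the boundary is propagated to first order through the covariances of both the boundary parameters and the point, giving a z-like statistic.  The function names, the first-order propagation formula, and the critical value are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def classification_confidence(x, cov_x, w, cov_w):
    """Signed distance d = w^T [x; 1] with first-order error
    propagation through the point covariance cov_x and the boundary
    parameter covariance cov_w.  Returns (d, sigma, z)."""
    x_h = np.append(x, 1.0)
    d = w @ x_h
    # var(d) ~ x_h^T cov_w x_h  +  w_spatial^T cov_x w_spatial
    var = x_h @ cov_w @ x_h + w[:-1] @ cov_x @ w[:-1]
    sigma = float(np.sqrt(var))
    return d, sigma, d / sigma

def is_reliable(z, z_crit=1.96):
    """Two-sided test at roughly the 5% level: classify only if the
    distance is significantly different from zero."""
    return abs(z) > z_crit
```

A point close to the boundary, or one with a large covariance, yields a small |z| and is flagged as not reliably classifiable.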
Alternatively, it can feed a statistical test that decides whether a
new point can be classified with sufficient reliability.  Besides the
linear classifier, an extension to non-linear classification is also
presented, based on the ideas of radial basis function (RBF) networks.
Initial proof-of-concept experiments are reported as well.
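One way the RBF-based extension can be pictured: inputs are mapped through Gaussian RBF features, the linear machinery is applied in feature space, and the input covariance is propagated through the feature map via its Jacobian.  This is an illustrative sketch under those assumptions; the paper's actual non-linear extension may differ.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF feature map phi_j(x) = exp(-gamma * ||x - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def propagate_cov(x, cov_x, centers, gamma):
    """First-order propagation of the input covariance through the RBF
    map: cov_phi = J cov_x J^T, where J is the Jacobian of the feature
    map, d phi_j / d x = -2 * gamma * phi_j * (x - c_j)."""
    diff = x[None, :] - centers                  # shape (k, d)
    phi = np.exp(-gamma * (diff ** 2).sum(-1))   # shape (k,)
    J = -2.0 * gamma * phi[:, None] * diff       # shape (k, d)
    return J @ cov_x @ J.T
```

The propagated feature-space covariance can then play the same role per point as the input covariance does in the linear case.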