In statistical learning theory, a representer theorem is any of several related results stating that a minimizer of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel functions evaluated at the input points of the training set.
Formal statement
The following representer theorem and its proof are due to Schölkopf, Herbrich, and Smola:[1]
Theorem: Consider a positive-definite real-valued kernel $k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ on a non-empty set $\mathcal{X}$ with a corresponding reproducing kernel Hilbert space $H_k$. Let there be given

- a training sample $(x_1, y_1), \ldots, (x_n, y_n) \in \mathcal{X} \times \mathbb{R}$,
- a strictly increasing real-valued function $g \colon [0, \infty) \to \mathbb{R}$, and
- an arbitrary error function $E \colon (\mathcal{X} \times \mathbb{R}^2)^n \to \mathbb{R} \cup \lbrace \infty \rbrace$,

which together define the following regularized empirical risk functional on $H_k$:

$$f \mapsto E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + g\left(\lVert f \rVert\right).$$
Then, any minimizer of the empirical risk

$$f^{*} = \underset{f \in H_k}{\operatorname{argmin}} \left\lbrace E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + g\left(\lVert f \rVert\right) \right\rbrace, \quad (*)$$

admits a representation of the form

$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i k(\cdot, x_i),$$

where $\alpha_i \in \mathbb{R}$ for all $1 \leq i \leq n$.
Proof: Define a mapping

$$\begin{aligned} \varphi \colon \mathcal{X} &\to H_k \\ \varphi(x) &= k(\cdot, x) \end{aligned}$$

(so that $\varphi(x) = k(\cdot, x)$ is itself a map $\mathcal{X} \to \mathbb{R}$). Since $k$ is a reproducing kernel,

$$\varphi(x)(x') = k(x', x) = \langle \varphi(x'), \varphi(x) \rangle,$$

where $\langle \cdot, \cdot \rangle$ is the inner product on $H_k$.
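For instance (this particular kernel is only an illustration, not required by the theorem), for the Gaussian kernel $k(x, x') = \exp\left(-\lVert x - x' \rVert^2 / 2\sigma^2\right)$ on $\mathcal{X} = \mathbb{R}^d$, the map $\varphi$ sends each point $x$ to the bump function centered on it,

$$\varphi(x) = k(\cdot, x) \colon \; x' \mapsto \exp\left(-\lVert x' - x \rVert^2 / 2\sigma^2\right),$$

and the reproducing property reads $f(x) = \langle f, \varphi(x) \rangle$ for every $f \in H_k$.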
Given any $x_1, \ldots, x_n$, one can use orthogonal projection to decompose any $f \in H_k$ into a sum of two functions, one lying in $\operatorname{span} \left\lbrace \varphi(x_1), \ldots, \varphi(x_n) \right\rbrace$, and the other lying in the orthogonal complement:

$$f = \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v,$$

where $\langle v, \varphi(x_i) \rangle = 0$ for all $i$.
The above orthogonal decomposition and the reproducing property together show that applying $f$ to any training point $x_j$ produces

$$f(x_j) = \left\langle \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v, \varphi(x_j) \right\rangle = \sum_{i=1}^{n} \alpha_i \langle \varphi(x_i), \varphi(x_j) \rangle,$$

which we observe is independent of $v$. Consequently, the value of the error function $E$ in (*) is likewise independent of $v$. For the second term (the regularization term), since $v$ is orthogonal to $\sum_{i=1}^{n} \alpha_i \varphi(x_i)$ and $g$ is strictly monotonic, we have

$$\begin{aligned} g\left(\lVert f \rVert\right) &= g\left(\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) + v \Big\rVert\right) \\ &= g\left(\sqrt{\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) \Big\rVert^2 + \lVert v \rVert^2}\right) \\ &\geq g\left(\Big\lVert \sum_{i=1}^{n} \alpha_i \varphi(x_i) \Big\rVert\right). \end{aligned}$$
Therefore setting $v = 0$ does not affect the first term of (*), while it strictly decreases the second term. Consequently, any minimizer $f^{*}$ in (*) must have $v = 0$, i.e., it must be of the form

$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \varphi(x_i) = \sum_{i=1}^{n} \alpha_i k(\cdot, x_i),$$

which is the desired result.
Generalizations
The theorem stated above is a particular example of a family of results that are collectively referred to as "representer theorems"; here we describe several such results.
The first statement of a representer theorem was due to Kimeldorf and Wahba for the special case in which

$$\begin{aligned} E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) &= \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2, \\ g\left(\lVert f \rVert\right) &= \lambda \lVert f \rVert^2 \end{aligned}$$

for $\lambda > 0$. Schölkopf, Herbrich, and Smola generalized this result by relaxing the assumption of the squared-loss cost and allowing the regularizer to be any strictly monotonically increasing function $g(\cdot)$ of the Hilbert space norm.
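In this special case the minimizing coefficients admit a closed form. As a short check (assuming, for illustration, that the Gram matrix $K = (k(x_i, x_j))_{ij}$ is invertible, an assumption not required by the theorem itself), substituting $f = \sum_{j} \alpha_j k(\cdot, x_j)$ turns the objective into $\frac{1}{n} \lVert y - K\alpha \rVert^2 + \lambda \alpha^{\intercal} K \alpha$, whose gradient vanishes at

$$\alpha = (K + n\lambda I)^{-1} y,$$

the familiar kernel ridge regression estimator.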
It is possible to generalize further by augmenting the regularized empirical risk functional through the addition of unpenalized offset terms. For example, Schölkopf, Herbrich, and Smola also consider the minimization
$$\tilde{f}^{*} = \operatorname{argmin} \left\lbrace E\left((x_1, y_1, \tilde{f}(x_1)), \ldots, (x_n, y_n, \tilde{f}(x_n))\right) + g\left(\lVert f \rVert\right) \,\middle|\, \tilde{f} = f + h \in H_k \oplus \operatorname{span} \lbrace \psi_p \mid 1 \leq p \leq M \rbrace \right\rbrace, \quad (\dagger)$$

i.e., we consider functions of the form $\tilde{f} = f + h$, where $f \in H_k$ and $h$ is an unpenalized function lying in the span of a finite set of real-valued functions $\lbrace \psi_p \colon \mathcal{X} \to \mathbb{R} \mid 1 \leq p \leq M \rbrace$. Under the assumption that the $n \times M$ matrix $\left(\psi_p(x_i)\right)_{ip}$ has rank $M$, they show that the minimizer $\tilde{f}^{*}$ in $(\dagger)$ admits a representation of the form

$$\tilde{f}^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i k(\cdot, x_i) + \sum_{p=1}^{M} \beta_p \psi_p(\cdot),$$

where $\alpha_i, \beta_p \in \mathbb{R}$ and the $\beta_p$ are all uniquely determined.
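For example, taking $M = 1$ and $\psi_1 \equiv 1$ (the constant function, for which the rank condition holds trivially) shows that every minimizer takes the form

$$\tilde{f}^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i k(\cdot, x_i) + b, \qquad b \in \mathbb{R},$$

i.e., a kernel expansion with an unpenalized bias term, the form familiar from support vector machines.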
The conditions under which a representer theorem exists were investigated by Argyriou, Micchelli, and Pontil, who proved the following:
Theorem: Let $\mathcal{X}$ be a nonempty set, $k$ a positive-definite real-valued kernel on $\mathcal{X} \times \mathcal{X}$ with corresponding reproducing kernel Hilbert space $H_k$, and let $R \colon H_k \to \mathbb{R}$ be a differentiable regularization function. Then given a training sample $(x_1, y_1), \ldots, (x_n, y_n) \in \mathcal{X} \times \mathbb{R}$ and an arbitrary error function $E \colon (\mathcal{X} \times \mathbb{R}^2)^n \to \mathbb{R} \cup \lbrace \infty \rbrace$, a minimizer

$$f^{*} = \underset{f \in H_k}{\operatorname{argmin}} \left\lbrace E\left((x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right) + R(f) \right\rbrace \quad (\ddagger)$$

of the regularized empirical risk admits a representation of the form

$$f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i k(\cdot, x_i),$$

where $\alpha_i \in \mathbb{R}$ for all $1 \leq i \leq n$, if and only if there exists a nondecreasing function $h \colon [0, \infty) \to \mathbb{R}$ for which

$$R(f) = h(\lVert f \rVert).$$
Effectively, this result provides a necessary and sufficient condition on a differentiable regularizer $R(\cdot)$ under which the corresponding regularized empirical risk minimization $(\ddagger)$ will have a representer theorem. In particular, this shows that a broad class of regularized risk minimizations (much broader than those originally considered by Kimeldorf and Wahba) have representer theorems.
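As a concrete check of this condition, the squared-norm regularizer used in kernel ridge regression qualifies:

$$R(f) = \lambda \lVert f \rVert^2 = h(\lVert f \rVert) \quad \text{with} \quad h(t) = \lambda t^2,$$

which is nondecreasing on $[0, \infty)$ for any $\lambda > 0$.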
Applications
Representer theorems are useful from a practical standpoint because they dramatically simplify the regularized empirical risk minimization problem $(\ddagger)$. In most interesting applications, the search domain $H_k$ for the minimization will be an infinite-dimensional subspace of $L^2(\mathcal{X})$, and therefore the search (as written) does not admit implementation on finite-memory and finite-precision computers. In contrast, the representation of $f^{*}(\cdot)$ afforded by a representer theorem reduces the original (infinite-dimensional) minimization problem to a search for the optimal $n$-dimensional vector of coefficients $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{R}^n$; $\alpha$ can then be obtained by applying any standard function minimization algorithm. Consequently, representer theorems provide the theoretical basis for the reduction of the general machine learning problem to algorithms that can actually be implemented on computers in practice.
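To make this reduction concrete, here is a minimal sketch that minimizes the regularized empirical risk directly over the $n$ coefficients with a general-purpose optimizer. The Gaussian kernel, the synthetic one-dimensional data, and the smooth pseudo-Huber error below are illustrative assumptions, not prescribed by the theory:

```python
# Sketch: the representer theorem turns minimization over an
# infinite-dimensional RKHS into a search over n real coefficients.
import numpy as np
from scipy.optimize import minimize

def kern(a, b, sigma=0.5):
    # Gaussian kernel; any positive-definite kernel could be used instead.
    return np.exp(-(a - b) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=20)                  # training inputs x_1..x_n
y = np.sin(3 * x) + 0.1 * rng.normal(size=20)    # noisy targets y_1..y_n
lam = 0.1                                        # regularization strength
K = kern(x[:, None], x[None, :])                 # Gram matrix K_ij = k(x_i, x_j)

def risk(alpha):
    f_vals = K @ alpha                           # f(x_i) for f = sum_j alpha_j k(., x_j)
    error = np.sum(np.sqrt(1.0 + (y - f_vals) ** 2) - 1.0)  # smooth pseudo-Huber error E
    return error + lam * (alpha @ K @ alpha)     # ||f||^2 = alpha^T K alpha

alpha_star = minimize(risk, np.zeros(len(x))).x  # the n-dimensional search
```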
The following provides an example of how to solve for the minimizer whose existence is guaranteed by the representer theorem. This method works for any positive-definite kernel $k$, and allows us to transform a complicated (possibly infinite-dimensional) optimization problem into a simple linear system that can be solved numerically.
Assume that we are using a least squares error function

$$E\left[(x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right] = \sum_{i=1}^{n} (y_i - f(x_i))^2$$

and a regularization function $g\left(\lVert f \rVert\right) = \lambda \lVert f \rVert^2$ for some $\lambda > 0$. By the representer theorem, the minimizer

$$f^{*} = \underset{f \in H_k}{\operatorname{argmin}} \left\lbrace E\left[(x_1, y_1, f(x_1)), \ldots, (x_n, y_n, f(x_n))\right] + g\left(\lVert f \rVert\right) \right\rbrace = \underset{f \in H_k}{\operatorname{argmin}} \left\lbrace \sum_{i=1}^{n} (y_i - f(x_i))^2 + \lambda \lVert f \rVert^2 \right\rbrace$$

has the form

$$f^{*}(x) = \sum_{i=1}^{n} \alpha_i^{*} k(x, x_i)$$

for some $\alpha^{*} = (\alpha_1^{*}, \ldots, \alpha_n^{*})^{\intercal} \in \mathbb{R}^n$. Noting that

$$\lVert f \rVert^2 = \Big\langle \sum_{i=1}^{n} \alpha_i^{*} k(\cdot, x_i), \sum_{j=1}^{n} \alpha_j^{*} k(\cdot, x_j) \Big\rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i^{*} \alpha_j^{*} \big\langle k(\cdot, x_i), k(\cdot, x_j) \big\rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i^{*} \alpha_j^{*} k(x_i, x_j),$$

we see that $\alpha^{*}$ satisfies

$$\alpha^{*} = \underset{\alpha \in \mathbb{R}^n}{\operatorname{argmin}} \left\lbrace \sum_{i=1}^{n} \Big(y_i - \sum_{j=1}^{n} \alpha_j k(x_i, x_j)\Big)^2 + \lambda \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j k(x_i, x_j) \right\rbrace = \underset{\alpha \in \mathbb{R}^n}{\operatorname{argmin}} \left\lbrace \lVert y - A\alpha \rVert^2 + \lambda \alpha^{\intercal} A \alpha \right\rbrace,$$

where $A_{ij} = k(x_i, x_j)$ and $y = (y_1, \ldots, y_n)^{\intercal}$. Expanding the square and dropping the constant term $y^{\intercal} y$, this simplifies to
$$\alpha^{*} = \underset{\alpha \in \mathbb{R}^n}{\operatorname{argmin}} \left\lbrace \alpha^{\intercal} (A^{\intercal} A + \lambda A) \alpha - 2 \alpha^{\intercal} A y \right\rbrace.$$

Since $A^{\intercal} A + \lambda A$ is positive definite, this expression has a single global minimum. Let $F(\alpha) = \alpha^{\intercal} (A^{\intercal} A + \lambda A) \alpha - 2 \alpha^{\intercal} A y$ and note that $F$ is convex. Then $\alpha^{*}$, the global minimum, can be found by setting $\nabla_\alpha F = 0$. Recalling that all positive definite matrices are invertible, we see that

$$\nabla_\alpha F = 2 (A^{\intercal} A + \lambda A) \alpha^{*} - 2 A y = 0 \quad \Longrightarrow \quad \alpha^{*} = (A^{\intercal} A + \lambda A)^{-1} A y,$$

so the minimizer may be found via a linear solve.
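A minimal numerical sketch of this linear solve follows; the Gaussian kernel and the synthetic data are illustrative assumptions, and only the solve itself comes from the derivation above:

```python
# Sketch: kernel least squares via the closed form
# alpha* = (A^T A + lam*A)^{-1} A y derived above.
import numpy as np

def kern(a, b, sigma=0.5):
    # Gaussian kernel, one positive-definite choice among many.
    return np.exp(-(a - b) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=30)                 # training inputs x_1..x_n
y = np.sin(3 * x) + 0.1 * rng.normal(size=30)   # noisy targets y_1..y_n
lam = 0.1                                       # regularization strength

A = kern(x[:, None], x[None, :])                # Gram matrix A_ij = k(x_i, x_j)
alpha = np.linalg.solve(A.T @ A + lam * A, A @ y)   # the linear solve from above

def f_star(t):
    # f*(t) = sum_i alpha_i k(t, x_i), the representer-theorem form.
    return kern(np.asarray(t)[:, None], x[None, :]) @ alpha

print(f_star(np.linspace(-1, 1, 5)))            # predictions at five test points
```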
See also
- Mercer's theorem
- Kernel methods
References
- [1] Schölkopf, Bernhard; Herbrich, Ralf; Smola, Alex J. (2001). "A Generalized Representer Theorem". In Helmbold, David; Williamson, Bob (eds.). Computational Learning Theory. Lecture Notes in Computer Science. Vol. 2111. Berlin, Heidelberg: Springer. pp. 416–426. doi:10.1007/3-540-44581-1_27. ISBN 978-3-540-44581-4.
- Argyriou, Andreas; Micchelli, Charles A.; Pontil, Massimiliano (2009). "When Is There a Representer Theorem? Vector Versus Matrix Regularizers". Journal of Machine Learning Research. 10 (Dec): 2507–2529.
- Cucker, Felipe; Smale, Steve (2002). "On the Mathematical Foundations of Learning". Bulletin of the American Mathematical Society. 39 (1): 1–49. doi:10.1090/S0273-0979-01-00923-5. MR 1864085.
- Kimeldorf, George S.; Wahba, Grace (1970). "A correspondence between Bayesian estimation on stochastic processes and smoothing by splines". The Annals of Mathematical Statistics. 41 (2): 495–502. doi:10.1214/aoms/1177697089.