**Robust Density Comparison Using Eigenvalue Decomposition**

Omar Arif<sup>1</sup> and Patricio A. Vela<sup>2</sup>

<sup>1</sup>*National University of Sciences and Technology, Pakistan*
<sup>2</sup>*Georgia Institute of Technology, USA*

**1. Introduction**

Many problems in various fields require measuring the similarity between two distributions. Often, the distributions are represented through samples: either no closed form exists for the distribution, or it is unknown what the best parametrization for the distribution is. The traditional approach of first estimating the probability distributions from the samples and then computing the distance between the two estimated distributions is therefore not feasible. In this chapter, a method to compute the similarity between two distributions, which is robust to noise and outliers, is presented. The method works directly on the samples, without requiring the intermediate step of density estimation, although the approach is closely related to density estimation. The method is based on mapping the distributions into a reproducing kernel Hilbert space, where an eigenvalue decomposition is performed. Retention of only the top *M* eigenvectors minimizes the effect of noise on density comparison.

The chapter is organized in two parts. First, we explain the procedure to obtain the robust density comparison method. The relation between the method and kernel principal component analysis (KPCA) is also explained, and the method is validated on synthetic examples. In the second part, we apply the method to the problem of visual tracking. In visual tracking, an initial target and its appearance are given, and the target must be found within future images. The target information is assumed to be characterized by a probability distribution. Tracking, in this scenario, is thus defined as the problem of finding the distribution within each image of a sequence that best fits the given target distribution. Here, the object is tracked by minimizing the similarity measure between the model distribution and the candidate distribution, where the target position is the optimization variable.

**2. Mercer kernels**

Let {*u<sub>i</sub>*}<sup>*n*</sup><sub>*i*=1</sub>, *u<sub>i</sub>* ∈ **R**<sup>*d*</sup>, be a set of *n* observations. A Mercer kernel is a function k : **R**<sup>*d*</sup> × **R**<sup>*d*</sup> → **R** which satisfies:

1. k is continuous.
2. k(*u<sub>i</sub>*, *u<sub>j</sub>*) = k(*u<sub>j</sub>*, *u<sub>i</sub>*) (symmetry).
3. The matrix K, with entries *K<sub>ij</sub>* = k(*u<sub>i</sub>*, *u<sub>j</sub>*), is positive definite.
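As a minimal illustration (not the chapter's implementation), the sketch below builds the Gram matrix K for the widely used Gaussian kernel, numerically checks the symmetry and positive-definiteness properties listed above, and retains only the top *M* eigenvectors as in the noise-robust comparison described in the introduction. The bandwidth `sigma`, the sample set, and the choice `M = 5` are arbitrary values for the example.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    # Gaussian (RBF) kernel: a standard example of a Mercer kernel.
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def gram_matrix(samples, sigma=1.0):
    # K_ij = k(u_i, u_j) over all pairs of observations.
    n = len(samples)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = gaussian_kernel(samples[i], samples[j], sigma)
    return K

rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 2))      # n = 50 observations in R^2
K = gram_matrix(samples)

# Property 2: symmetry.
assert np.allclose(K, K.T)

# Property 3: positive (semi-)definiteness -- no negative eigenvalues
# beyond numerical round-off.
eigvals, eigvecs = np.linalg.eigh(K)
assert eigvals.min() > -1e-8

# Retain only the top M eigenvectors (largest eigenvalues); the
# discarded directions carry most of the noise.
M = 5
top = np.argsort(eigvals)[::-1][:M]
eigvals_M, eigvecs_M = eigvals[top], eigvecs[:, top]
```

For large *n*, the double loop would in practice be replaced by a vectorized pairwise-distance computation, but the loop form mirrors the entry-wise definition *K<sub>ij</sub>* = k(*u<sub>i</sub>*, *u<sub>j</sub>*) directly.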

