to fix the minimization by using the *L*<sup>1</sup> norm, which provides an approximate convex problem:

$$\underset{L,S}{\text{argmin}} \ ||L||\_{\ast} + \lambda ||S||\_1 \quad \text{subj} \quad L + S = A \tag{9}$$

where ||.||<sub>∗</sub> is the nuclear norm (which is the *L*<sup>1</sup> norm of the singular values). Under these minimal assumptions, the PCP solution perfectly recovers the low-rank and the sparse matrices, provided that the rank of the low-rank matrix and the sparsity of the sparse matrix are bounded by the following inequalities:

$$\text{rank}(L) \leq \frac{\rho\_r \max(n,m)}{\mu \left(\log \min(n,m)\right)^2}\,, \qquad ||S||\_0 \leq \rho\_s\, mn \tag{10}$$

where *ρ<sub>r</sub>* and *ρ<sub>s</sub>* are positive numerical constants, and *m* and *n* are the dimensions of the matrix *A*. For further consideration, *λ* is chosen as follows:

$$\lambda = \frac{1}{\sqrt{\max(m,n)}} \tag{11}$$

The results presented show that PCP outperforms RSL in the case of varying illumination and bootstrapping issues.

**2.4 RPCA via templates for first-order conic solvers**

Becker et al. (2011) used the same idea as Candes et al. (2009), namely that a matrix *A* can be broken into two components *A* = *L* + *S*, where *L* is low-rank and *S* is sparse. The inequality-constrained version of RPCA uses the same objective function, but instead of the constraint *L* + *S* = *A*, the constraints are:

$$\underset{L,S}{\text{argmin}} \ ||L||\_{\ast} + \lambda ||S||\_1 \quad \text{subj} \quad ||A - L - S||\_{\infty} \leq \frac{1}{2} \tag{12}$$

In practice, the matrix *A* is composed of data generated by a camera; consequently, the values are quantized (rounded) on 8 bits and bounded between 0 and 255. Suppose *A*<sub>0</sub> ∈ R<sup>*m*×*n*</sup> is the ideal data composed of real values; it is more exact to perform the exact decomposition onto *A*<sub>0</sub>. Thus, we can assert ||*A*<sub>0</sub> − *A*||<sub>∞</sub> < 1/2 with *A*<sub>0</sub> = *L* + *S*. The results show improvements for dynamic backgrounds<sup>3</sup>.

<sup>3</sup> http://www.salleurl.edu/~ftorre/papers/rpca/rpca.zip

**2.5 RPCA via inexact augmented Lagrange multiplier**

Lin et al. (2009) proposed to substitute the equality-constraint term by a penalty function subject to a minimization under the *L*<sup>2</sup> norm:

$$\underset{L,S}{\text{argmin}} \ \text{rank}(L) + \lambda ||S||\_0 + \frac{\mu}{2} ||A - L - S||^2\_F \tag{13}$$

This algorithm solves a slightly relaxed version of the original equation. The constant *μ* balances between exact and inexact recovery. Lin et al. (2009) did not present results on background subtraction.

**2.6 RPCA via Bayesian framework**

Ding et al. (2011) proposed a hierarchical Bayesian framework for decomposing a matrix (*A*) into low-rank (*L*), sparse (*S*) and noise (*E*) matrices. In addition, the Bayesian framework allows the exploitation of additional structure in the matrix. A Markov dependency is introduced between consecutive rows of the matrix, implementing an appropriate temporal dependency, because moving objects are strongly correlated across consecutive frames. A spatial dependency assumption is also added, introducing the same Markov constraint spatially by means of the local neighborhood. Indeed, this forces the sparse outlier component to be spatially and temporally connected. Thus, the decomposition is made as follows:

$$A = L + S + E = \mathcal{U}(SB\_L)V^\prime + X \diamond B\_S + E \tag{14}$$

where *L* is the low-rank matrix, *S* is the sparse matrix and *E* is the noise matrix. Assumptions are then made about the distribution of each component.


Note that the *L*<sup>1</sup> minimization is replaced by an *l*<sup>0</sup> minimization (the number of non-zero values is fixed by the sparseness mask); afterwards, an *l*<sup>2</sup> minimization is performed on the non-zero values.
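As a toy illustration of this mask-based scheme, the following numpy sketch (hypothetical helper, not the authors' code) fixes the number of non-zero entries to select the support, which is the *l*<sup>0</sup> part; given that fixed support, the *l*<sup>2</sup>-optimal sparse values simply copy the corresponding residual entries:

```python
import numpy as np

def sparse_step(R, k):
    """Keep the k largest-magnitude entries of the residual R.

    Fixing the number of non-zeros is the l0 part; given that support,
    the l2-optimal values of S (minimizing ||R - S||_F^2 over S supported
    on the mask) are simply the residual values on the support.
    """
    S = np.zeros_like(R)
    if k > 0:
        # indices of the k largest |R| entries, in the 2-D grid
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-k:], R.shape)
        S[idx] = R[idx]
    return S

# residual between the observation and the current low-rank estimate
R = np.array([[0.1, -5.0, 0.2],
              [3.0,  0.0, -0.1]])
S = sparse_step(R, 2)  # support = the two largest-magnitude entries
```

Only the support selection is combinatorial; once the mask is fixed, the value update is a trivial projection.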

The matrix *A* is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation of the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions: the model is robust to a broad range of noise levels without changes to the model hyperparameter settings. The properties of the Markov process are likewise inferred from the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. Ding et al. (2011) applied this framework to background modelling, and the results obtained show more robustness to noisy backgrounds, slowly changing foregrounds and bootstrapping issues than RPCA via convex optimization (Wright et al. (2009)).
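As a concrete reference point, the convex-optimization baseline compared against here, solved in the inexact augmented Lagrange multiplier style of Section 2.5, can be sketched as below. This is a minimal illustrative implementation, alternating singular-value thresholding for *L* with entrywise soft-thresholding for *S*; the *μ* initialization and growth factor are common heuristics assumed here, not taken from this chapter.

```python
import numpy as np

def rpca_ialm(A, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Inexact-ALM-style RPCA sketch: split A into L (low-rank) + S (sparse).

    Minimizes ||L||_* + lam * ||S||_1 subject to L + S = A by alternating
    proximal steps on the augmented Lagrangian.
    """
    m, n = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))  # Eq. (11)
    mu = mu if mu is not None else 1.25 / np.linalg.norm(A, 2)  # heuristic start
    Y = np.zeros_like(A)  # Lagrange multipliers
    S = np.zeros_like(A)
    for _ in range(max_iter):
        # L-step: singular-value thresholding of (A - S + Y/mu) at level 1/mu
        U, sig, Vt = np.linalg.svd(A - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # S-step: entrywise soft-thresholding of (A - L + Y/mu) at level lam/mu
        T = A - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual update and stopping test on the equality residual
        R = A - L - S
        Y += mu * R
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(A, 'fro'):
            break
        mu *= 1.5  # grow the penalty, as in inexact ALM schemes
    return L, S
```

Each iteration costs one SVD; the shrinking threshold 1/*μ* drives *L* toward the nuclear-norm solution while λ/*μ* sparsifies *S*, so the background/foreground split emerges in the first few dozen iterations.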
