
Efficient structured \(\ell 1\) tracker based on Laplacian error distribution

  • Original Article
  • Published:
International Journal of Machine Learning and Cybernetics

Abstract

Recently, sparse representation has been applied to visual tracking by casting the tracking problem as a linear regression problem with a sparsity constraint on the coefficients. Under the Gaussian error distribution assumption, the reconstruction loss function consists of a sum-of-squares error term and an \(\ell 1\) regularizer, which is sensitive to the outliers caused by occlusion and local deformation. In this paper, we propose a robust and efficient \(\ell 1\) tracker based on a Laplacian error distribution and structured similarity regularization in a particle filter framework. Specifically, we model the error term with a Laplacian distribution, which is more robust to outliers than the Gaussian distribution. Meanwhile, in contrast to most existing \(\ell 1\) trackers, which handle particles independently, we exploit the dependence between particles and impose a structured similarity regularization on the set of sparse coefficients. A customized Inexact Augmented Lagrange Method (IALM) is derived to solve the optimization problem efficiently in batch mode. In addition, we show that the proposed method is related to robust regression with a self-adaptive Huber loss function. Both computational efficiency and tracking accuracy are enhanced by this novel cost function and optimization strategy. Qualitative and quantitative evaluations on the largest open benchmark of video sequences show that our approach outperforms most state-of-the-art trackers.
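As a minimal illustration of the robustness claim above (our toy example, not the paper's tracker): under a Gaussian error model the best constant fit to a set of observations is their mean, while under a Laplacian error model it is their median, which a single occlusion-like outlier barely moves.

```python
import numpy as np

# Toy illustration: the Gaussian error model minimizes a sum-of-squares loss,
# whose optimal constant fit is the mean; the Laplacian error model minimizes
# an absolute-deviation loss, whose optimal constant fit is the median.
# A single occlusion-like outlier drags the mean far more than the median.
obs = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # last entry is an outlier

gaussian_fit = obs.mean()       # argmin_c sum (obs - c)^2
laplacian_fit = np.median(obs)  # argmin_c sum |obs - c|

print(gaussian_fit, laplacian_fit)  # mean pulled toward 10, median stays near 1
```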




Notes

  1. The indicator function replaces the positivity constraint on the representation coefficients, converting the constrained optimization problem into an equivalent unconstrained minimization problem.
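A small sketch of this reformulation (our illustration; `phi` and `prox_phi` are hypothetical names, not from the paper): the indicator function is zero on the nonnegative orthant and infinite outside it, and its proximal step reduces to elementwise projection.

```python
import numpy as np

# The nonnegativity constraint C >= 0 is moved into the objective as an
# indicator function phi(C): 0 on the feasible set, +inf outside it.
def phi(C):
    return 0.0 if np.all(C >= 0) else np.inf

# Minimizing phi(C) + (mu/2)*||C - V||_F^2 (the proximal step of phi) is
# simply projection onto the nonnegative orthant.
def prox_phi(V):
    return np.maximum(V, 0.0)

V = np.array([[0.5, -0.2], [-1.0, 0.3]])
P = prox_phi(V)
print(phi(P))  # 0.0: the projected point is feasible
```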

References

  1. Adam A, Rivlin E, Shimshoni I (2006) Robust fragments-based tracking using the integral histogram. In: Computer Vision and Pattern Recognition, IEEE Computer Society Conference, vol. 1, pp 798–805

  2. Avidan S (2004) Support vector tracking. IEEE Trans Pattern Anal Mach Intell 26(8):1064–1072


  3. Babenko B, Yang MH, Belongie S (2011) Robust object tracking with online multiple instance learning. IEEE Trans Pattern Anal Mach Intell 33(8):1619–1632


  4. Bao C, Wu Y, Ling H, Ji H (2012) Real-time robust \(l1\) tracker using accelerated proximal gradient approach. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1830–1837

  5. Bhuyan M, Ramaraju VV, Iwahori Y (2013) Hand gesture recognition and animation for local hand motions. Int J Mach Learn Cybern 1–17

  6. Dinh TB, Vo N, Medioni G (2011) Context tracker: exploring supporters and distracters in unconstrained environments. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1177–1184

  7. Doucet A, De Freitas N, Gordon N (2001) Sequential Monte Carlo methods in practice. Springer, New York

  8. Grabner H, Grabner M, Bischof H (2006) Real-time tracking via on-line boosting. In: British Machine Vision Conference, vol. 1, pp 47–56

  9. Hare S, Saffari A, Torr PH (2011) Struck: structured output tracking with kernels. In: Computer Vision (ICCV), IEEE International Conference, pp 263–270

  10. Henriques JF, Caseiro R, Martins P, Batista J (2012) Exploiting the circulant structure of tracking-by-detection with kernels. In: Computer Vision-ECCV, Springer, pp 702–715

  11. Henriques JF, Caseiro R, Martins P, Batista J (2014) High-speed tracking with kernelized correlation filters. arXiv preprint arXiv:1404.7584

  12. Jia X, Lu H, Yang MH (2012) Visual tracking via adaptive structural local sparse appearance model. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1822–1829

  13. Kalal Z, Matas J, Mikolajczyk K (2010) P-N learning: bootstrapping binary classifiers by structural constraints. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 49–56

  14. Kwon J, Lee KM (2010) Visual tracking decomposition. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1269–1276

  15. Kwon J, Lee KM (2011) Tracking by sampling trackers. In: 2011 International Conference on Computer Vision, pp 1195–1202

  16. Lin Z, Liu R, Su Z (2011) Linearized alternating direction method with adaptive penalty for low-rank representation. In: Advances in Neural Information Processing Systems

  17. Liu B, Huang J, Yang L, Kulikowsk C (2011) Robust tracking using local sparse appearance model and k-selection. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1313–1320

  18. Mei X, Ling H (2009) Robust visual tracking using \(l1\) minimization. In: Computer Vision, IEEE 12th International Conference, pp 1436–1443

  19. Mei X, Ling H, Wu Y, Blasch E, Bai L (2011) Minimum error bounded efficient \(l1\) tracker with occlusion detection. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1257–1264

  20. Ross DA, Lim J, Lin RS, Yang MH (2008) Incremental learning for robust visual tracking. Int J Comput Vis 77(1–3):125–141


  21. Wang N, Wang J, Yeung DY (2013) Online robust non-negative dictionary learning for visual tracking. In: Computer Vision (ICCV), IEEE International Conference, pp 657–664

  22. Wang N, Yeung DY (2013) Learning a deep compact image representation for visual tracking. In: Advances in Neural Information Processing Systems, pp 809–817

  23. Wright J, Ma Y (2010) Dense error correction via \(l1\)-minimization. IEEE Trans Inf Theory 56(7):3540–3560


  24. Wu Y, Lim J, Yang MH (2013) Online object tracking: a benchmark. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 2411–2418

  25. Yang MH, Zhang L (2014) Fast compressive tracking. IEEE Trans Pattern Anal Mach Intell

  26. Zhang T, Ghanem B, Liu S, Ahuja N (2012) Low-rank sparse learning for robust visual tracking. In: Computer Vision-ECCV, Springer, pp 470–484

  27. Zhang T, Ghanem B, Liu S, Ahuja N (2012) Robust visual tracking via multi-task sparse learning. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 2042–2049

  28. Zhang T, Ghanem B, Liu S, Ahuja N (2013) Robust visual tracking via structured multi-task sparse learning. Int J Comput Vis 101(2):367–383


  29. Zhang T, Liu S, Ahuja N, Yang MH, Ghanem B (2014) Robust visual tracking via consistent low-rank sparse learning. Int J Comput Vis 1–20

  30. Zhong W, Lu H, Yang MH (2012) Robust object tracking via sparsity-based collaborative model. In: Computer Vision and Pattern Recognition (CVPR), IEEE Conference, pp 1838–1845


Acknowledgments

Project supported by the National Natural Science Foundation of China (Grant Nos. 61272220 and 61401212), the Jiangsu Postdoctoral Research Grant Program (No. 1301026C) and the Natural Science Foundation of Jiangsu Province (Grant No. BK2012399).

Author information

Corresponding author

Correspondence to Guoxing Wu.

Appendices

Appendix 1: Inference details for structure similarity regularization

For notational simplicity, we omit the symbol \(a\) on the representation coefficients. Under the condition that \(g(c_i,c_j)=\Vert c_i-c_j\Vert _2^2\), Eq. 4 can be reformulated as follows:

$$\begin{aligned}&\sum _{i=1}^N\sum _{j=1}^Nw_{ij}\Vert c_i-c_j\Vert _2^2\nonumber \\&\quad =\sum _{i=1}^N\sum _{j=1}^Nw_{ij}\left( \Vert c_i\Vert _2^2+\Vert c_j\Vert _2^2-2c_i^\mathrm {T}c_j\right) \nonumber \\&\quad =\sum _{i=1}^N\sum _{j=1}^Nw_{ij}\Vert c_i\Vert _2^2+\sum _{i=1}^N\sum _{j=1}^Nw_{ij}\Vert c_j\Vert _2^2-2\sum _{i=1}^N\sum _{j=1}^Nw_{ij}c_i^\mathrm {T}c_j\nonumber \\&\quad =\sum _{i=1}^NW_{(i,:)}\Vert c_i\Vert _2^2+\sum _{j=1}^NW_{(:,j)}\Vert c_j\Vert _2^2-2\sum _{i=1}^N\sum _{j=1}^Nw_{ij}c_i^\mathrm {T}c_j\nonumber \\&\quad =2\left( \sum _{i=1}^NW_{(i,:)}\Vert c_i\Vert _2^2-\sum _{i=1}^N\sum _{j=1}^Nw_{ij}c_i^\mathrm {T}c_j\right) \nonumber \\&\quad =2\,\mathrm {tr}\left( C\left[ \begin{array}{cccc} W_{(1,:)} &{} &{} &{} \\ &{} W_{(2,:)} &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} W_{(N,:)} \\ \end{array} \right] C^\mathrm {T}-CWC^\mathrm {T}\right) \nonumber \\&\text {i.e.}\quad \sum _{i=1}^N\sum _{j=1}^Nw_{ij}\Vert c_i-c_j\Vert _2^2=2\,\mathrm {tr}\left( CLC^\mathrm {T}\right) , \end{aligned}$$
(16)

where the fourth equality of Eq. 16 holds due to the symmetry of the matrix \(W\), i.e. \(W_{(i,:)}=W_{(:,i)}\).
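The identity in Eq. 16 can be checked numerically (a sketch with arbitrary sizes, assuming the graph Laplacian \(L=D-W\), where \(D\) is the diagonal matrix of row sums \(W_{(i,:)}\)):

```python
import numpy as np

# Numerical check of Eq. 16: sum_ij w_ij*||c_i - c_j||^2 = 2*tr(C L C^T),
# with L = D - W and D the diagonal matrix of row sums of the symmetric W.
rng = np.random.default_rng(0)
N, d = 6, 4
W = rng.random((N, N))
W = (W + W.T) / 2          # the derivation relies on symmetry of W
np.fill_diagonal(W, 0.0)
C = rng.random((d, N))     # column c_i: coefficient vector of particle i

lhs = sum(W[i, j] * np.sum((C[:, i] - C[:, j]) ** 2)
          for i in range(N) for j in range(N))
L = np.diag(W.sum(axis=1)) - W
rhs = 2.0 * np.trace(C @ L @ C.T)
print(abs(lhs - rhs))  # ~0 up to floating-point error
```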

Appendix 2: Convergence properties of the customized IALM algorithm for Eq. 7

To prove Theorem 1, we first give the following two lemmas:

Lemma 1

Given a norm \(\Vert \cdot \Vert\) in a Hilbert space and a subgradient \(y\in \partial \Vert x\Vert\), the corresponding dual norm satisfies \(\Vert y\Vert ^*\le 1\) (see [16] for a proof).

Lemma 2

The sequences \(\{Y_{(\cdot ,k)}\}\), \(\{Y_{(\cdot ,k)}^*\}\) and \(\{\tilde{Y}_{(\cdot ,k)}\}\) are all bounded, where \(\tilde{Y}_{(G,k)}=Y_{(G,k-1)}+\mu _{k-1}(C_{k-1}-G_{k-1})\), \(\tilde{Y}_{(H,k)}=Y_{(H,k-1)}+\mu _{k-1}(C_{k-1}-H_{k-1})\) and \(\tilde{Y}_{(E,k)}=Y_{(E,k-1)}+\mu _{k-1}(X-DC_k-E_{k-1})\).

Proof

Setting the derivative of Eq. 8 with respect to \(G\) equal to zero, we obtain the following condition: \(0\in \partial \Vert G_{k+1}^*\Vert _1-Y_{(G,k)}^*-\mu _k(C_{k+1}^*-G_{k+1}^*)\), i.e. \(Y_{(G,k+1)}^*=Y_{(G,k)}^*+\mu _k(C_{k+1}^*-G_{k+1}^*)\in \alpha \partial \Vert G_{k+1}^*\Vert _1\). Combining Lemma 1 with the fact that the dual norm of \(\Vert \cdot \Vert _1\) is \(\Vert \cdot \Vert _\infty\), we have \(\Vert Y_{(G,k+1)}^*\Vert _\infty \le \alpha\). Hence, the sequence \(\{Y_{(G,k+1)}^*\}\) is bounded. The other sequences \(\{Y_{(H,k)}^*\}\), \(\{Y_{(E,k)}^*\}\), \(\{Y_{(\cdot ,k)}\}\) and \(\{\tilde{Y}_{(\cdot ,k)}\}\) can be proved bounded in the same way. \(\square\)
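The mechanism behind Lemma 2 can be illustrated numerically (our sketch with arbitrary test values; `shrink` is the standard soft-thresholding operator, which we assume solves the \(G\)-subproblem):

```python
import numpy as np

# After the soft-thresholding update G_{k+1} = shrink(C + Y_k/mu, alpha/mu),
# the multiplier update Y_{k+1} = Y_k + mu*(C - G_{k+1}) lies in
# alpha * subgradient of ||G_{k+1}||_1, so its entries never exceed alpha
# in magnitude, no matter how large mu grows.
def shrink(V, tau):
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

rng = np.random.default_rng(1)
alpha, mu = 0.7, 50.0
C = rng.normal(size=(5, 8))
Y = rng.normal(size=(5, 8))

G = shrink(C + Y / mu, alpha / mu)
Y_next = Y + mu * (C - G)
print(np.max(np.abs(Y_next)))  # <= alpha = 0.7
```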

Proof of Theorem 1

First, we prove that the variable sequences \(\{E_k,C_k,G_k,H_k\}\) have limits \(\{\hat{E},\hat{C},\hat{G},\hat{H}\}\). Noting that \(\tilde{Y}_{(E,k)}=Y_{(E,k-1)}+\mu _{k-1}(X-DC_k-E_{k-1})\) and \(Y_{(E,k)}=Y_{(E,k-1)}+\mu _{k-1}(X-DC_k-E_k)\) (defined in Alg. 1), we have \(\Vert E_k-E_{k-1}\Vert =\mu _{k-1}^{-1}\Vert \tilde{Y}_{(E,k)}-Y_{(E,k)}\Vert\). By the boundedness of \(\{\tilde{Y}_{(E,k)}\}\) and \(\{Y_{(E,k)}\}\), we get \(\Vert E_k-E_{k-1}\Vert =O(\mu _{k-1}^{-1})\). Under the assumption \(\sum _{k=1}^{+\infty }\mu _k^{-2}\mu _{k+1}<+\infty\), \(\mu _{k+1}>\mu _k\) (which implies \(\sum _{k=1}^{+\infty }\mu _k^{-1}<+\infty\)), the sequence \(\{E_k\}\) is a Cauchy sequence, and hence has a limit \(\hat{E}\). By \(Y_{(E,k+1)}=Y_{(E,k)}+\mu _k(X-DC_{k+1}-E_{k+1})\) and the boundedness of \(\{Y_{(E,k)}\}\), we have \(\lim \limits _{k\rightarrow \infty }(X-DC_k-E_k)=\lim \limits _{k\rightarrow \infty }\mu _k^{-1}(Y_{(E,k+1)}-Y_{(E,k)})=0\). We conclude that \(\{C_k\}\) also has a limit \(\hat{C}\). The same holds for \(\hat{G}\) and \(\hat{H}\), since \(\lim \limits _{k\rightarrow \infty }(C_k-G_k)=0\) and \(\lim \limits _{k\rightarrow \infty }(C_k-H_k)=0\). \(\square\)

Now, we prove that \(\{\hat{E},\hat{C},\hat{G},\hat{H}\}\) converges to the optimal solution \(\{G^*,H^*,C^*,E^*\}\). We denote the optimal value of Eq. 7 by \(P^*\). According to the equivalence constraints, we have \(C^*=G^*=H^*\), \(X=DC^*+E^*\), \(C^*\ge 0\), \(\varphi (C^*)=0\), and \(\varphi (C_{k+1})=0\). We observe that \(\tilde{Y}_{(G,k+1)}\in \alpha \partial \Vert G_{k+1}\Vert _1\), \(\tilde{Y}_{(H,k+1)}\in \beta \partial \,\mathrm {tr}(H_{k+1}LH_{k+1}^\mathrm {T})\) and \(\tilde{Y}_{(E,k+1)}\in \partial \Vert E_{k+1}\Vert _1\). By the convexity of norms, we have

$$\begin{aligned}&\Vert E_{k+1}\Vert _1+\alpha \Vert G_{k+1}\Vert _1+\beta \mathrm {tr}(H_{k+1}LH_{k+1}^\mathrm {T})+\varphi (C_{k+1})\nonumber \\&\le \Vert E^*\Vert _1+\alpha \Vert G^*\Vert _1+\beta \mathrm {tr}(H^*LH^{*\mathrm {T}})+\varphi (C^*) -\langle \tilde{Y}_{(G,k+1)},G^*-G_{k+1}\rangle \nonumber \\&\quad - \langle \tilde{Y}_{(H,k+1)},H^*-H_{k+1}\rangle -\langle \tilde{Y}_{(E,k+1)},E^*-E_{k+1}\rangle \nonumber \\&= P^*-\langle \mu _k(C_k-G_k),G^*-G_{k+1}\rangle - \langle \mu _k(C_{k+1}-G_{k+1}), G^*-G_{k+1}\rangle \nonumber \\&-\langle \mu _k(C_k-H_k),H^*-H_{k+1}\rangle \nonumber \\&\quad + \langle \mu _k(C_{k+1}-H_{k+1}),H^*-H_{k+1}\rangle - \langle \mu _k(E_{k+1}-E_k), E^*-E_{k+1}\rangle \nonumber \\&- (\langle Y_{(G,k+1)}, G^*-G_{k+1}\rangle \nonumber \\&\quad + \langle Y_{(H,k+1)}, H^*-H_{k+1}\rangle + \langle Y_{(E,k+1)}, E^*-E_{k+1}\rangle ). \end{aligned}$$
(17)

The last three terms of Eq. 17 can be reformulated as follows:

$$\begin{aligned}&\langle Y_{(G,k+1)}, G^*-G_{k+1}\rangle + \langle Y_{(H,k+1)}, H^*-H_{k+1}\rangle + \langle Y_{(E,k+1)}, E^*-E_{k+1}\rangle \nonumber \\&=\langle Y_{(G,k+1)}, C^*-G_{k+1}\rangle + \langle Y_{(H,k+1)}, C^*-H_{k+1}\rangle + \langle Y_{(E,k+1)}, X-DC^*-E_{k+1}\rangle \nonumber \\&=\langle Y_{(G,k+1)}, \mu _k^{-1}(Y_{(G,k+1)}-Y_{(G,k)})\rangle + \langle Y_{(H,k+1)},\mu _k^{-1}(Y_{(H,k+1)}-Y_{(H,k)})\rangle \nonumber \\&\quad + \langle Y_{(E,k+1)}, \mu _k^{-1}(Y_{(E,k+1)}-Y_{(E,k)})\rangle + \langle C_{k+1}-C^*,D^\mathrm {T}Y_{(E,k+1)}-Y_{(G,k+1)}-Y_{(H,k+1)}\rangle \nonumber \\&=\langle Y_{(G,k+1)}, \mu _k^{-1}(Y_{(G,k+1)}-Y_{(G,k)})\rangle + \langle Y_{(H,k+1)}, \mu _k^{-1}(Y_{(H,k+1)}-Y_{(H,k)})\rangle \nonumber \\&\quad + \langle Y_{(E,k+1)}, \mu _k^{-1}(Y_{(E,k+1)}-Y_{(E,k)})\rangle + \langle C_{k+1}-C^*,\mu _{k+1}[\mu _{k+1}^{-1}D^\mathrm {T}(Y_{(E,k+1)}-Y_{(E,k+2)})\nonumber \\&\quad + D^\mathrm {T}(E_{k+1}-E_{k+2})+C_{k+2}-G_{k+2}+C_{k+2}-H_{k+2}] \rangle \end{aligned}$$
(18)

Combining Eqs. 17 and 18, we get

$$\begin{aligned}&\Vert E_{k+1}\Vert _1+\alpha \Vert G_{k+1}\Vert _1+\beta \mathrm {tr}(H_{k+1}LH_{k+1}^\mathrm {T})+\varphi (C_{k+1})\nonumber \\&\le P^* -\langle \mu _k(C_k-G_k),G^*-G_{k+1}\rangle - \langle \mu _k(C_{k+1}-G_{k+1}), G^*-G_{k+1}\rangle \nonumber \\&\quad -\langle \mu _k(C_k-H_k),H^*-H_{k+1}\rangle + \langle \mu _k(C_{k+1}-H_{k+1}),H^*-H_{k+1}\rangle \nonumber \\&\quad - \langle \mu _k(E_{k+1}-E_k), E^*-E_{k+1}\rangle - (\langle Y_{(G,k+1)}, G^*-G_{k+1}\rangle \nonumber \\&\quad + \langle Y_{(G,k+1)}, \mu _k^{-1}(Y_{(G,k+1)}-Y_{(G,k)})\rangle + \langle Y_{(H,k+1)}, \mu _k^{-1}(Y_{(H,k+1)}-Y_{(H,k)})\rangle \nonumber \\&\quad + \langle Y_{(E,k+1)}, \mu _k^{-1}(Y_{(E,k+1)}-Y_{(E,k)})\rangle \nonumber \\&\quad + \langle C_{k+1}-C^*,\mu _{k+1}[\mu _{k+1}^{-1}D^\mathrm {T}(Y_{(E,k+1)}-Y_{(E,k+2)})\nonumber \\&\quad + D^\mathrm {T}(E_{k+1}-E_{k+2})+C_{k+2}-G_{k+2}+C_{k+2}-H_{k+2}] \rangle \end{aligned}$$
(19)

By the assumptions \(\lim _{k\rightarrow +\infty }\mu _k(C_{k+1}-H_{k+1})=0\), \(\lim _{k\rightarrow +\infty }\mu _k(C_{k+1}-G_{k+1})=0\) and \(\lim _{k\rightarrow +\infty }\mu _k(E_{k+1}-E_k)=0\), we get \(\lim _{k\rightarrow +\infty }\mu _k(C_k-G_k)=0\). When \(k\rightarrow +\infty\), all terms except the first one approach zero, due to the boundedness of \(\{Y_{(\cdot ,k)}\}\) and \(\{E^*,C^*,G^*,H^*,\hat{E},\hat{C},\hat{G},\hat{H}\}\). Consequently, \(\lim _{k\rightarrow +\infty }\Vert E_{k+1}\Vert _1+\alpha \Vert G_{k+1}\Vert _1+\beta \mathrm {tr}(H_{k+1}LH_{k+1}^\mathrm {T})+\varphi (C_{k+1})\le P^*\). Hence, \(\{\hat{E},\hat{C},\hat{G},\hat{H}\}\) is the optimal solution of Eq. 7.

Appendix 3: Inference details for Eqs. 13 and 15

We rewrite \(\mathcal {L}(E_{k+1})\) element-wise:

$$\begin{aligned} \mathcal {L}\left( e_{ij}^{k+1}\right)&=\left| e_{ij}^{k+1}\right| +y_{ij}^k\left( r_{ij}^{k+1}-e_{ij}^{k+1}\right) +\frac{\mu _k}{2}\left( r_{ij}^{k+1}-e_{ij}^{k+1}\right) ^2\\&=\left| e_{ij}^{k+1}\right| +\frac{\mu _k}{2}\left( r_{ij}^{k+1}-e_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right) ^2-\frac{\left( y_{ij}^k\right) ^2}{2\mu _k}.\\ \end{aligned}$$
  1. If \(\left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| <\lambda\), then \(e_{ij}^{k+1}=0\), and we have

    $$\begin{aligned} \mathcal {L}(e_{ij}^{k+1})=\frac{\mu _k}{2}\left( r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right) ^2-\frac{(y_{ij}^k)^2}{2\mu _k}. \end{aligned}$$

  2. If \(\left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| \ge \lambda\), then \(e_{ij}^{k+1}=\mathrm {sgn}\left( r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right) \left( \left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| -\frac{1}{\mu _k}\right)\), and we have

    $$\begin{aligned} \mathcal {L}\left( e_{ij}^{k+1}\right)&=\left| \left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| -\frac{1}{\mu _k}\right| +\frac{\mu _k}{2}\left( \frac{1}{\mu _k}\mathrm {sgn}\left( r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right) \right) ^2-\frac{(y_{ij}^k)^2}{2\mu _k}\\&=\left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| -\frac{1}{\mu _k}+\frac{1}{2\mu _k}-\frac{(y_{ij}^k)^2}{2\mu _k}\\&=\left| r_{ij}^{k+1}+\frac{y_{ij}^k}{\mu _k}\right| -\frac{1}{2\mu _k}-\frac{\left( y_{ij}^k\right) ^2}{2\mu _k}.\\ \end{aligned}$$
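The case analysis above is exactly a soft-thresholding step; a brute-force check (our sketch, assuming \(\lambda =1/\mu _k\) and using arbitrary test values) confirms that the closed form minimizes the scalar objective:

```python
import numpy as np

# Brute-force check: the scalar objective
#   L(e) = |e| + y*(r - e) + (mu/2)*(r - e)**2
# is minimized by the soft-thresholding formula with threshold 1/mu
# (the lambda of the case split). r, y, mu values are arbitrary.
def loss(e, r, y, mu):
    return np.abs(e) + y * (r - e) + 0.5 * mu * (r - e) ** 2

def closed_form(r, y, mu):
    v = r + y / mu
    if abs(v) < 1.0 / mu:                        # case 1: e = 0
        return 0.0
    return np.sign(v) * (abs(v) - 1.0 / mu)      # case 2: shrink toward zero

grid = np.linspace(-5.0, 5.0, 200_001)
for r, y, mu in [(1.3, -0.4, 2.0), (0.1, 0.05, 4.0), (-2.0, 0.3, 1.5)]:
    e_star = closed_form(r, y, mu)
    # the true minimizer can never do worse than any grid point
    assert loss(e_star, r, y, mu) <= loss(grid, r, y, mu).min() + 1e-9
print("closed form matches grid search")
```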

As for Eq. 15, combining \(y_{ij}^k=y_{ij}^{k-1}+\mu _{k-1}\left( r_{ij}^k-e_{ij}^k\right)\) with the geometric penalty schedule \(\mu _l=\rho ^l\mu _0\), we obtain

$$\begin{aligned} \mu _k^{-1}y_{ij}^k&=\mu _k^{-1}\left( y_{ij}^{k-1}+\mu _{k-1}\left( r_{ij}^k-e_{ij}^k\right) \right) \\&=\mu _k^{-1}\left( y_{ij}^0+\mu _0\left( r_{ij}^1-e_{ij}^1\right) +\cdots +\mu _{k-2}\left( r_{ij}^{k-1}-e_{ij}^{k-1}\right) +\mu _{k-1}\left( r_{ij}^k-e_{ij}^k\right) \right) \\&=\rho ^{-k}\left( r_{ij}^1-e_{ij}^1\right) +\cdots +\rho ^{-2}\left( r_{ij}^{k-1}-e_{ij}^{k-1}\right) +\rho ^{-1}\left( r_{ij}^k-e_{ij}^k\right) \\&=\sum _{l=0}^{k-1}\rho ^{l-k}\left( r_{ij}^{l+1}-e_{ij}^{l+1}\right) ,\\ \end{aligned}$$

where we use the initialization \(y_{ij}^0=0\) and \(\mu _k^{-1}\mu _l=\rho ^{l-k}\).
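A quick numerical check of this telescoped form (our sketch; the geometric schedule \(\mu _l=\rho ^l\mu _0\), the initialization \(y^0=0\), and the residual values are illustrative assumptions):

```python
# Verify: running the update y_{l+1} = y_l + mu_l*(r^{l+1} - e^{l+1}) from
# y_0 = 0 with mu_l = rho**l * mu_0 gives
#   y_k / mu_k = sum_{l=0}^{k-1} rho**(l-k) * (r^{l+1} - e^{l+1}).
rho, mu0, k = 1.5, 0.8, 6
mu = [mu0 * rho**l for l in range(k + 1)]
res = [0.3 * (-0.7)**l for l in range(1, k + 1)]  # stand-ins for r^{l+1}-e^{l+1}

y = 0.0                      # y_0 = 0
for l in range(k):           # multiplier recursion
    y = y + mu[l] * res[l]

closed = sum(rho**(l - k) * res[l] for l in range(k))
print(abs(y / mu[k] - closed))  # ~0
```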


Cite this article

Wu, G., Zhao, C., Lu, W. et al. Efficient structured \(\ell 1\) tracker based on laplacian error distribution. Int. J. Mach. Learn. & Cyber. 6, 581–595 (2015). https://doi.org/10.1007/s13042-015-0334-9
