Rescaled Coordinate Descent Methods for Linear Programming

  • Conference paper

Integer Programming and Combinatorial Optimization (IPCO 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9682)

Abstract

We propose two simple polynomial-time algorithms to find a positive solution to \(Ax=0\). Both algorithms alternate between coordinate descent steps, similar to von Neumann's algorithm, and rescaling steps. In both cases, either the updating step leads to a substantial decrease in the norm, or we can infer that the condition measure is small and rescale in order to improve the geometry. We also show how the algorithms can be extended to find a solution of maximum support for the system \(Ax=0\), \(x\ge 0\). This is an extended abstract; the missing proofs will be provided in the full version.
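The alternation the abstract describes can be sketched in code. The following is a minimal illustrative implementation, not the paper's algorithm: the progress threshold (0.5) and the \((I + uu^T)\) rescaling rule are Dunagan–Vempala-style placeholders, since the paper's exact update and rescaling rules are deferred to the full version.

```python
import numpy as np

def rescaled_von_neumann(A, max_iters=10_000, eps=1e-6):
    """Sketch of the two alternating phases from the abstract:
    von Neumann-style coordinate descent on the simplex, plus a
    rescaling step when no coordinate makes enough progress.
    Threshold and rescaling rule are illustrative assumptions."""
    m, n = A.shape
    A = A / np.linalg.norm(A, axis=0)      # normalise columns to unit length
    x = np.full(n, 1.0 / n)                # start at the simplex barycentre
    y = A @ x                              # residual we drive to zero
    for _ in range(max_iters):
        norm_y = np.linalg.norm(y)
        if norm_y < eps:
            return x                       # near-solution of Ax = 0, x >= 0
        k = int(np.argmin(A.T @ y))        # column most opposed to the residual
        if A[:, k] @ y >= -0.5 * norm_y:
            # No column decreases the norm substantially: infer that the
            # condition measure is small and rescale to improve the geometry.
            u = y / norm_y
            A = A + np.outer(u, u @ A)     # apply (I + u u^T) to every column
            A = A / np.linalg.norm(A, axis=0)
            y = A @ x
            continue
        # von Neumann step: move x toward vertex e_k, minimising |y| on the segment
        d = A[:, k] - y
        lam = min(1.0, -(y @ d) / (d @ d))
        x *= (1.0 - lam)
        x[k] += lam
        y = (1.0 - lam) * y + lam * A[:, k]
    return x
```

Note that each rescaling modifies \(A\), so a point returned after rescalings certifies feasibility for the rescaled system; the actual algorithms track the rescalings so that a solution of the original system can be recovered.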


Notes

  1. It had been suggested by Prof. C. Roos that Chubanov’s algorithm could be further improved to \(O(n^{3.5}L)\), but the paper was subsequently withdrawn due to a gap in the argument.

References

  1. Agmon, S.: The relaxation method for linear inequalities. Can. J. Math. 6, 382–392 (1954)

  2. Basu, A., De Loera, J., Junod, M.: On Chubanov’s method for linear programming. INFORMS J. Comput. 26(2), 336–350 (2014)

  3. Betke, U.: Relaxation, new combinatorial and polynomial algorithms for the linear feasibility problem. Discrete Comput. Geom. 32, 317–338 (2004)

  4. Chubanov, S.: A strongly polynomial algorithm for linear systems having a binary solution. Math. Prog. 134, 533–570 (2012)

  5. Chubanov, S.: A polynomial algorithm for linear optimization which is strongly polynomial under certain conditions on optimal solutions (2015). http://www.optimization-online.org/DB_FILE/2014/12/4710.pdf

  6. Chubanov, S.: A polynomial projection algorithm for linear programming. Math. Prog. 153, 687–713 (2015)

  7. Dantzig, G.B.: An \(\varepsilon \)-precise feasible solution to a linear program with a convexity constraint in \(1/{\varepsilon ^2}\) iterations independent of problem size, Report SOL 92–5, Stanford University (1992)

  8. Dunagan, J., Vempala, S.: A simple polynomial-time rescaling algorithm for solving linear programs. Math. Prog. 114, 101–114 (2006)

  9. Epelman, M., Freund, R.M.: Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system. Math. Prog. 88(3), 451–485 (2000)

  10. Goffin, J.: The relaxation method for solving systems of linear inequalities. Math. Oper. Res. 5, 388–414 (1980)

  11. Motzkin, T., Schoenberg, I.J.: The relaxation method for linear inequalities. Can. J. Math. 6, 393–404 (1954)

  12. Nemirovski, A.: Prox-method with rate of convergence \(O(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004)

  13. Novikoff, A.B.J.: On convergence proofs for perceptrons. In: Proceedings of the Symposium on the Mathematical Theory of Automata XII, pp. 615–622 (1962)

  14. Soheili, N., Peña, J.: A smooth perceptron algorithm. SIAM J. Optim. 22, 728–737 (2012)

  15. Peña, J., Soheili, N.: A deterministic rescaled perceptron algorithm. Math. Prog. 155(1), 497–510 (2016)

  16. Roos, K.: On Chubanov’s method for solving a homogeneous inequality system. Numer. Anal. Optim. 134, 319–338 (2015)

  17. Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958). Cornell Aeronautical Laboratory

  18. Schrijver, A.: Theory of Linear and Integer Programming. Wiley, New York (1986)

  19. Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Stat. 21, 124–127 (1950)

  20. Végh, L.A., Zambelli, G.: A polynomial projection-type algorithm for linear programming. Oper. Res. Lett. 42, 91–96 (2014)

  21. Yu, A.W., Kılınç-Karzan, F., Carbonell, J.: Saddle points and accelerated perceptron algorithms. In: Proceedings of the 31st International Conference on Machine Learning. Journal of Machine Learning Research 32, 1827–1835 (2014)

  22. Wolfe, P.: Finding the nearest point in a polytope. Math. Prog. 11(1), 128–149 (1976)

Author information

Corresponding author: László A. Végh.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Dadush, D., Végh, L.A., Zambelli, G. (2016). Rescaled Coordinate Descent Methods for Linear Programming. In: Louveaux, Q., Skutella, M. (eds) Integer Programming and Combinatorial Optimization. IPCO 2016. Lecture Notes in Computer Science, vol 9682. Springer, Cham. https://doi.org/10.1007/978-3-319-33461-5_3

  • DOI: https://doi.org/10.1007/978-3-319-33461-5_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33460-8

  • Online ISBN: 978-3-319-33461-5

  • eBook Packages: Computer Science, Computer Science (R0)
