Abstract
We propose two simple polynomial-time algorithms for finding a positive solution to \(Ax=0\). Both algorithms alternate between coordinate descent steps similar to von Neumann’s algorithm and rescaling steps. In both cases, either the update step leads to a substantial decrease in the norm, or we can infer that the condition measure is small and rescale to improve the geometry. We also show how the algorithms can be extended to find a maximum-support solution of the system \(Ax=0\), \(x\ge 0\). This is an extended abstract; the missing proofs will be provided in the full version.
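The general scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the rescaling trigger (threshold `-0.2`), the rescaling matrix \(Q = I + yy^T/\|y\|^2\), and all parameter choices below are simplified assumptions for illustration. The sketch maintains \(x\) in the simplex and \(y = Ax\) over column-normalized \(A\), takes von Neumann-style coordinate descent steps with exact line search, and rescales when no column makes a sharply negative inner product with \(y\).

```python
import numpy as np

def find_positive_solution(A, eps=1e-6, max_iters=20000):
    """Illustrative sketch: von Neumann-style coordinate descent with
    occasional rescaling. Returns x > 0 in the simplex with ||Ax|| small
    (up to eps, in rescaled coordinates), or None on failure."""
    m, n = A.shape
    A = A.astype(float).copy()
    scale = np.ones(n)                      # cumulative column scalings
    norms = np.linalg.norm(A, axis=0)
    A /= norms                              # normalize columns to unit length
    scale *= norms
    x = np.full(n, 1.0 / n)                 # start at the simplex barycenter
    y = A @ x
    for _ in range(max_iters):
        ny = np.linalg.norm(y)
        if ny <= eps:                       # y = Ax is nearly zero: done
            x_orig = x / scale              # undo the column normalizations
            return x_orig / x_orig.sum()
        scores = A.T @ y
        j = int(np.argmin(scores))
        if scores[j] > -0.2 * ny:
            # No column has a sharply negative inner product with y: the
            # condition measure is small, so stretch space along y
            # (a simplified trigger and update, not the paper's exact rule;
            # Q is invertible, so positive solutions are preserved).
            Q = np.eye(m) + np.outer(y, y) / (ny * ny)
            A = Q @ A
            norms = np.linalg.norm(A, axis=0)
            A /= norms
            scale *= norms
            y = A @ x
            continue
        # Coordinate descent step toward column a_j with exact line search:
        # lam = argmin_{0 <= lam <= 1} ||lam*y + (1-lam)*a_j||.
        a = A[:, j]
        d = y - a
        lam = float(np.clip((a @ (a - y)) / max(d @ d, 1e-18), 0.0, 1.0))
        x *= lam
        x[j] += 1.0 - lam                   # x stays in the simplex
        y = lam * y + (1.0 - lam) * a
    return None
```

For example, with \(A = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}\), the all-ones vector solves \(Ax=0\), and the sketch returns a strictly positive \(x\) on the simplex with small residual.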
Notes
1. It had been suggested by Prof. C. Roos that Chubanov’s algorithm could be further improved to \(O(n^{3.5}L)\), but the paper making this claim was subsequently withdrawn due to a gap in the argument.
References
Agmon, S.: The relaxation method for linear inequalities. Can. J. Math. 6, 382–392 (1954)
Basu, A., De Loera, J., Junod, M.: On Chubanov’s method for linear programming. INFORMS J. Comput. 26(2), 336–350 (2014)
Betke, U.: Relaxation, new combinatorial and polynomial algorithms for the linear feasibility problem. Discrete Comput. Geom. 32, 317–338 (2004)
Chubanov, S.: A strongly polynomial algorithm for linear systems having a binary solution. Math. Prog. 134, 533–570 (2012)
Chubanov, S.: A polynomial algorithm for linear optimization which is strongly polynomial under certain conditions on optimal solutions (2015). http://www.optimization-online.org/DB_FILE/2014/12/4710.pdf
Chubanov, S.: A polynomial projection algorithm for linear programming. Math. Prog. 153, 687–713 (2015)
Dantzig, G.B.: An \(\varepsilon \)-precise feasible solution to a linear program with a convexity constraint in \(1/{\varepsilon ^2}\) iterations independent of problem size, Report SOL 92–5, Stanford University (1992)
Dunagan, J., Vempala, S.: A simple polynomial-time rescaling algorithm for solving linear programs. Math. Prog. 114, 101–114 (2006)
Epelman, M., Freund, R.M.: Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system. Math. Prog. 88(3), 451–485 (2000)
Goffin, J.: The relaxation method for solving systems of linear inequalities. Math. Oper. Res. 5, 388–414 (1980)
Motzkin, T., Schoenberg, I.J.: The relaxation method for linear inequalities. Can. J. Math. 6, 393–404 (1954)
Nemirovski, A.: Prox-method with rate of convergence \(O(1/t)\) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004)
Novikoff, A.B.J.: On convergence proofs for perceptrons. In: Proceedings of the Symposium on the Mathematical Theory of Automata XII, pp. 615–622 (1962)
Soheili, N., Peña, J.: A smooth perceptron algorithm. SIAM J. Optim. 22, 728–737 (2012)
Peña, J., Soheili, N.: A deterministic rescaled perceptron algorithm. Math. Prog. 155(1), 497–510 (2016)
Roos, K.: On Chubanov’s method for solving a homogeneous inequality system. Numer. Anal. Optim. 134, 319–338 (2015)
Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958). Cornell Aeronautical Laboratory
Schrijver, A.: Theory of Linear and Integer Programming. Wiley, New York (1986)
Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Stat. 21, 124–127 (1950)
Végh, L.A., Zambelli, G.: A polynomial projection-type algorithm for linear programming. Oper. Res. Lett. 42, 91–96 (2014)
Yu, A.W., Kılınç-Karzan, F., Carbonell, J.: Saddle points and accelerated perceptron algorithms. In: Proceedings of the 31st International Conference on Machine Learning. Journal of Machine Learning Research 32, 1827–1835 (2014)
Wolfe, P.: Finding the nearest point in a polytope. Math. Prog. 11(1), 128–149 (1976)
© 2016 Springer International Publishing Switzerland
Cite this paper
Dadush, D., Végh, L.A., Zambelli, G. (2016). Rescaled Coordinate Descent Methods for Linear Programming. In: Louveaux, Q., Skutella, M. (eds) Integer Programming and Combinatorial Optimization. IPCO 2016. Lecture Notes in Computer Science(), vol 9682. Springer, Cham. https://doi.org/10.1007/978-3-319-33461-5_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-33460-8
Online ISBN: 978-3-319-33461-5