
Hardness of Approximation for Knapsack Problems

Published in Theory of Computing Systems.

Abstract

We show various hardness results for knapsack and related problems; in particular, we show that unless the Exponential-Time Hypothesis is false, subset-sum cannot be approximated any better than with an FPTAS. We also provide new unconditional lower bounds for approximating knapsack in Ketan Mulmuley’s parallel PRAM model. Furthermore, we give a simple new algorithm for approximating knapsack and subset-sum that can be adapted to work in small space, or in small parallel time.
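For context, the standard list-trimming FPTAS for subset-sum (a textbook algorithm, not necessarily the new algorithm referred to in the abstract) can be sketched as follows; the function name and parameters are ours:

```python
def approx_subset_sum(weights, target, eps):
    """Return an achievable subset sum <= target within a (1 + eps) factor of the optimum."""
    sums = [0]
    delta = eps / (2 * len(weights))  # trimming parameter from the textbook analysis
    for w in weights:
        # Extend every current sum by w, discarding sums that overshoot the target.
        merged = sorted(set(sums + [s + w for s in sums if s + w <= target]))
        # Trim: keep a value only if it exceeds the last kept value
        # by more than a (1 + delta) factor.
        trimmed = [merged[0]]
        for s in merged[1:]:
            if s > trimmed[-1] * (1 + delta):
                trimmed.append(s)
        sums = trimmed
    return max(sums)
```

Trimming keeps the list length polynomial in n and 1/eps, which is what makes the scheme an FPTAS rather than the exponential exact enumeration.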

Notes

  1. We may assume that such formulas have logarithmic depth [4, 5].

  2. More precisely, the content of a register at any given time-step is a polynomial in these parameters, but precisely which polynomial it is depends on the outcomes of previous conditionals (greater-than-zero instructions).

  3. While the max-flow problem, like any problem in P, reduces to the knapsack problem, it does not seem to be possible to have such a reduction in a way that preserves the bit-length of the numerical parameters (the flow capacities). So the lower bound for max-flow does not imply a lower bound for the knapsack problem.

    Intuitively, there is good reason to believe that such a reduction does not exist. The subset-sum problem is a “purely numerical” problem, in the sense that it has no combinatorial structure whatsoever; hence the combinatorial structure of a max-flow problem (the graph itself) will need to be somehow encoded in the numbers of the subset-sum problem we would wish to reduce to; but this graph can be much larger than the bit-lengths of the edge weights.

  4. And undefined if I′ and I″ are both undefined.

  5. To see this, notice that the maximum of W-many a-bit numbers can be computed by AC circuits of size \(O(W^{2}a^{2})\), and the basic case can be computed by a circuit of size O(b).

  6. This is the problem of scheduling tasks to processors of different speeds, while attempting to minimize execution time. Hence each task t in a set of tasks T has a length l(t), and each processor p in a set of processors P has a speed factor s(p). We wish to find an assignment of the tasks f : T → P minimizing the completion time: \(\max _{p \in P} \sum _{t : f(t) = p} l(t) s(p)\).

  7. The encoding of the circuit could be changed to make this lookup easier.

  8. Here it is worth pointing out: this is the reason we need to “pad” each block of the weights with additional bits. This bears some resemblance to the superincreasing sequences used in the Merkle-Hellman knapsack cryptosystem.

  9. Here we extend the knapsack problem to the reals. This can easily be done because knapsack is a homogeneous optimization problem, i.e. \(S(\alpha \bar w, \alpha W) = \alpha S(\bar w, W)\) for any real α. We could avoid this extension, but it makes the definitions easier to visualize.

  10. We will also use the notation \(Y_{(i_{1}, i_{2})}^{(0)}, Y_{(i_{1}, i_{2})}^{(1)}\) to denote the items corresponding to the variable \(y_{(i_{1}, i_{2})}\).

  11. By this we mean that each \(C_{j}\), for j = 1, … , 30K, is one of the three items \(G_{j}^{\prime (00)}, G_{j}^{\prime (01)}, G_{j}^{\prime (10)}\) that appear in the proof of Theorem 5. We group them together because we treat them in exactly the same way.
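The objective in footnote 6 is easy to evaluate in code. A minimal sketch, assuming dictionaries for l, s and f (the names `lengths`, `speed`, `assignment` are ours, not the paper's):

```python
# Evaluate the scheduling objective of footnote 6:
# makespan = max over processors p of sum_{t : f(t) = p} l(t) * s(p).
def makespan(lengths, speed, assignment):
    """lengths[t] = l(t); speed[p] = s(p); assignment[t] = f(t)."""
    load = {p: 0.0 for p in speed}
    for t, p in assignment.items():
        load[p] += lengths[t] * speed[p]
    return max(load.values())
```

For instance, two short tasks on a fast processor can still finish before one long task on a slow one, which is what the max-over-processors objective captures.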
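Footnote 8 alludes to the superincreasing sequences of the Merkle-Hellman knapsack cryptosystem: sequences in which every weight exceeds the sum of all previous ones, so subset-sum over them is solvable by a single greedy pass from the largest weight down. A minimal illustration (background only, not part of the paper's construction):

```python
def greedy_superincreasing(weights, target):
    """Recover the subset of a superincreasing sequence summing to target,
    scanning from the largest weight down; returns None if no subset exists."""
    chosen = []
    for w in reversed(weights):  # weights assumed sorted in increasing order
        if w <= target:
            chosen.append(w)
            target -= w
    return sorted(chosen) if target == 0 else None
```

Greedy works here because skipping a weight w that fits can never be compensated for: all smaller weights together sum to less than w.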
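The homogeneity property from footnote 9 can be checked directly against a brute-force optimum; the following is purely illustrative (exponential time) and the function name is ours:

```python
from itertools import combinations

def subset_sum_opt(weights, cap):
    """Largest subset sum not exceeding cap (brute force over all subsets)."""
    best = 0.0
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            s = sum(combo)
            if best < s <= cap:
                best = s
    return best
```

For example, scaling the instance [3, 5, 7] with capacity 9 by α = 2.5 scales the optimum 8 to 20, as the homogeneity identity predicts.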

References

  1. Bach, E., Shallit, J.: Algorithmic Number Theory I: Efficient Algorithms. Cambridge University Press (1996)

  2. Barrington, D.A.: Bounded-width polynomial-size branching programs recognize exactly those languages in NC¹. J. Comput. Syst. Sci. 38(1), 150–164 (1989)

  3. Bellman, R.: Bottleneck problems and dynamic programming. Proc. Natl. Acad. Sci. USA 39(9), 947–951 (1953)

  4. Buss, S. R.: The boolean formula value problem is in ALOGTIME. In: Proceedings of the 19th STOC, pp. 123–131 (1987)

  5. Buss, S.R., Cook, S.A., Gupta, A., Ramachandran, V.: An optimal parallel algorithm for formula evaluation. SIAM J. Comput. 21, 755–780 (1992)

  6. Crescenzi, P., Kann, V.: A compendium of NP optimization problems. http://www.nada.kth.se/viggo/problemlist/compendium.html (2005)

  7. Cygan, M., Dell, H., Lokshtanov, D., Marx, D., Nederlof, J., Okamoto, Y., Paturi, R., Saurabh, S., Wahlström, M.: On problems as hard as CNF-SAT. In: Proceedings of the 27th CCC, pp. 74–84 (2012)

  8. Håstad, J.: Some optimal inapproximability results. J. ACM 48(4), 798–859 (2001)

  9. Hochbaum, D.S., Shmoys, D.B.: A polynomial approximation scheme for machine scheduling on uniform processors: Using the dual approach. SIAM J. Comput. 17, 539–551 (1988)

  10. Horowitz, E., Sahni, S.: Computing partitions with an application to the subset-sum problem. J. ACM 21(2), 277–292 (1974)

  11. Impagliazzo, R., Paturi, R.: The complexity of k-SAT. In: Proceedings of the 14th CCC, pp. 237–240 (1999)

  12. Impagliazzo, R., Paturi, R., Zane, F.: Which problems have strongly exponential complexity? J. Comput. Syst. Sci. 63(4), 512–530 (2001)

  13. Lewin, M., Livnat, D., Zwick, U.: Improved rounding techniques for the MAX 2-SAT and MAX DI-CUT problems. In: Proceedings of the 9th IPCO, pp. 67–82 (2002)

  14. Lokshtanov, D., Nederlof, J.: Saving space by algebraization. In: Proceedings of the 42nd STOC, pp. 321–330 (2010)

  15. Mulmuley, K.: Lower bounds in a parallel model without bit operations. SIAM J. Comput. 28(4), 1460–1509 (1999)

  16. Myasnikov, A., Nikolaev, A., Ushakov, A.: Knapsack problems in groups. arXiv:1302.5671 (2013)

  17. Nederlof, J., van Leeuwen, E.J., van der Zwaan, R.: Reducing a target interval to a few exact queries. In: Proceedings of the 37th MFCS, pp. 718–727 (2012)

  18. Raz, R., Safra, S.: A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In: Proceedings of the 29th STOC, pp. 475–484 (1997)

  19. Sen, S.: The hardness of speeding-up knapsack. Technical report, BRICS (1998)

  20. Stockmeyer, L.J., Meyer, A.R.: Word problems requiring exponential time. In: Proceedings of the 5th STOC, pp. 1–9 (1973)

  21. Vazirani, V.V.: Approximation Algorithms. Springer (2004)

Acknowledgments

We thank Ulle Endriss for asking the question that led to this paper, as well as Karl Bringmann, Radu Curticapean, and Dominik Scheder for pointing out that the sparsification lemma would give the full contradiction with the ETH.

Author information

Correspondence to Leen Torenvliet.

About this article

Cite this article

Buhrman, H., Loff, B. & Torenvliet, L. Hardness of Approximation for Knapsack Problems. Theory Comput Syst 56, 372–393 (2015). https://doi.org/10.1007/s00224-014-9550-z
