LEO: Learning Efficient Orderings for Multiobjective Binary Decision Diagrams | SpringerLink

LEO: Learning Efficient Orderings for Multiobjective Binary Decision Diagrams

  • Conference paper
Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2024)

Abstract

Approaches based on binary decision diagrams (BDDs) have recently achieved state-of-the-art results for some multiobjective integer programming problems. The variable ordering used in constructing BDDs can have a significant impact on their size and on the quality of bounds derived from relaxed or restricted BDDs for single-objective optimization problems. We first showcase a similar impact of variable ordering on the Pareto frontier (PF) enumeration time for the multiobjective knapsack problem, suggesting the need for deriving variable ordering methods that improve the scalability of the multiobjective BDD approach. To that end, we derive a novel parameter configuration space based on variable scoring functions that are linear in a small set of interpretable and easy-to-compute variable features. We show how the configuration space can be efficiently explored using black-box optimization, circumventing the curse of dimensionality (in the number of variables and objectives), and finding good orderings that reduce the PF enumeration time. However, black-box optimization approaches incur a computational overhead that outweighs the reduction in time due to good variable ordering. To alleviate this issue, we propose LEO, a supervised learning approach for finding efficient variable orderings that reduce the enumeration time. Experiments on benchmark sets from the knapsack problem with 3–7 objectives and up to 80 variables show that LEO is \(3{\times }\) and \(1.3{\times }\) faster at PF enumeration than a lexicographic ordering and algorithm configuration, respectively. Our code and instances are available at https://github.com/khalil-research/leo.


References

  1. Adelgren, N., Gupte, A.: Branch-and-bound for biobjective mixed-integer linear programming. INFORMS J. Comput. 34(2), 909–933 (2022)

  2. Aloul, F.A., Markov, I.L., Sakallah, K.A.: FORCE: a fast and easy-to-implement variable-ordering heuristic. In: Proceedings of the 13th ACM Great Lakes Symposium on VLSI, pp. 116–119 (2003)

  3. Altiparmak, F., Gen, M., Lin, L., Paksoy, T.: A genetic algorithm approach for multi-objective optimization of supply chain networks. Comput. Ind. Eng. 51(1), 196–215 (2006)

  4. Belotti, P., Soylu, B., Wiecek, M.M.: A branch-and-bound algorithm for biobjective mixed-integer programs. Optimization Online, pp. 1–29 (2013)

  5. Bergman, D., Bodur, M., Cardonha, C., Cire, A.A.: Network models for multiobjective discrete optimization. INFORMS J. Comput. (2021)

  6. Bergman, D., Cire, A.A.: Multiobjective optimization by decision diagrams. In: Rueher, M. (ed.) CP 2016. LNCS, vol. 9892, pp. 86–95. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44953-1_6

  7. Bergman, D., Ciré, A.A., van Hoeve, W., Hooker, J.N.: Discrete optimization with decision diagrams. INFORMS J. Comput. 28(1), 47–66 (2016). https://doi.org/10.1287/ijoc.2015.0648

  8. Bergman, D., Cire, A.A., Van Hoeve, W.J., Hooker, J.: Decision Diagrams for Optimization, vol. 1. Springer, Cham (2016)

  9. Bergman, D., Cire, A.A., van Hoeve, W.-J., Hooker, J.N.: Variable ordering for the application of BDDs to the maximum independent set problem. In: Beldiceanu, N., Jussien, N., Pinson, É. (eds.) CPAIOR 2012. LNCS, vol. 7298, pp. 34–49. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29828-8_3

  10. Bökler, F., Ehrgott, M., Morris, C., Mutzel, P.: Output-sensitive complexity of multiobjective combinatorial optimization. J. Multi-Criteria Decis. Anal. 24(1–2), 25–36 (2017)

  11. Boland, N., Charkhgard, H., Savelsbergh, M.: The triangle splitting method for biobjective mixed integer programming. In: Lee, J., Vygen, J. (eds.) IPCO 2014. LNCS, vol. 8494, pp. 162–173. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-07557-0_14

  12. Boland, N., Charkhgard, H., Savelsbergh, M.: A criterion space search algorithm for biobjective integer programming: the balanced box method. INFORMS J. Comput. 27(4), 735–754 (2015)

  13. Bollig, B., Löbbing, M., Wegener, I.: On the effect of local changes in the variable ordering of ordered decision diagrams. Inf. Process. Lett. 59(5), 233–239 (1996)

  14. Bollig, B., Wegener, I.: Improving the variable ordering of OBDDs is NP-complete. IEEE Trans. Comput. 45(9), 993–1002 (1996)

  15. Bryant, R.E.: Symbolic Boolean manipulation with ordered binary-decision diagrams. ACM Comput. Surv. (CSUR) 24(3), 293–318 (1992)

  16. Butler, K.M., Ross, D.E., Kapur, R., Mercer, M.R.: Heuristics to compute variable orderings for efficient manipulation of ordered binary decision diagrams. In: Proceedings of the 28th ACM/IEEE Design Automation Conference, pp. 417–420 (1991)

  17. Cappart, Q., Chételat, D., Khalil, E.B., Lodi, A., Morris, C., Velickovic, P.: Combinatorial optimization and reasoning with graph neural networks. J. Mach. Learn. Res. 24(130), 1–61 (2023)

  18. Cappart, Q., Goutierre, E., Bergman, D., Rousseau, L.: Improving optimization bounds using machine learning: decision diagrams meet deep reinforcement learning. In: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, 27 January–1 February 2019, pp. 1443–1451. AAAI Press (2019). https://doi.org/10.1609/aaai.v33i01.33011443

  19. Carbin, M.: Learning effective BDD variable orders for BDD-based program analysis. Technical report, Citeseer (2006)

  20. de las Casas, P.M., Sedeno-Noda, A., Borndörfer, R.: An improved multiobjective shortest path algorithm. Comput. Oper. Res. 135, 105424 (2021)

  21. Chekuri, C., Fox, K.: UIUC CS 598CSC: approximation algorithms, lecture notes on Knapsack (2009). https://courses.engr.illinois.edu/cs598csc/sp2009/lectures/lecture_4.pdf. Accessed 01 July 2023

  22. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2016, pp. 785–794. ACM, New York (2016). https://doi.org/10.1145/2939672.2939785

  23. Chung, P.Y., Hajj, I., Patel, J.H.: Efficient variable ordering heuristics for shared ROBDD. In: 1993 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1690–1693. IEEE (1993)

  24. Drechsler, R., Göckel, N., Becker, B.: Learning heuristics for OBDD minimization by evolutionary algorithms. In: Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 730–739. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-61723-X_1036

  25. Ebendt, R., Drechsler, R.: Exact BDD minimization for path-related objective functions. In: Reis, R., Osseiran, A., Pfleiderer, H.-J. (eds.) VLSI-SoC 2005. IFIP, vol. 240, pp. 299–315. Springer, Boston, MA (2007). https://doi.org/10.1007/978-0-387-73661-7_19

  26. Ebendt, R., Gunther, W., Drechsler, R.: Combining ordered best-first search with branch and bound for exact BDD minimization. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 24(10), 1515–1529 (2005)

  27. Ehrgott, M.: A discussion of scalarization techniques for multiple objective integer programming. Ann. Oper. Res. 147(1), 343–360 (2006)

  28. Ehrgott, M., Gandibleux, X., Przybylski, A.: Exact methods for multi-objective combinatorial optimisation. In: Multiple Criteria Decision Analysis: State of the Art Surveys, pp. 817–850 (2016)

  29. Friedman, S.J., Supowit, K.J.: Finding the optimal variable ordering for binary decision diagrams. In: Proceedings of the 24th ACM/IEEE Design Automation Conference, pp. 348–356 (1987)

  30. Grumberg, O., Livne, S., Markovitch, S.: Learning to order BDD variables in verification. J. Artif. Intell. Res. 18, 83–116 (2003)

  31. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133–142 (2002)

  32. Joachims, T.: Training linear SVMs in linear time. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 217–226 (2006)

  33. Karahalios, A., van Hoeve, W.J.: Variable ordering for decision diagrams: a portfolio approach. Constraints 27(1–2), 116–133 (2022)

  34. Kendall, M.G.: A new measure of rank correlation. Biometrika 30(1/2), 81–93 (1938)

  35. Kergosien, Y., Giret, A., Neron, E., Sauvanet, G.: An efficient label-correcting algorithm for the multiobjective shortest path problem. INFORMS J. Comput. 34(1), 76–92 (2022)

  36. Khalil, E., Le Bodic, P., Song, L., Nemhauser, G., Dilkina, B.: Learning to branch in mixed integer programming. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30 (2016)

  37. Khalil, E.B., Vaezipoor, P., Dilkina, B.: Finding backdoors to integer programs: a Monte Carlo tree search framework. In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Virtual Event, 22 February–1 March 2022, pp. 3786–3795. AAAI Press (2022). https://ojs.aaai.org/index.php/AAAI/article/view/20293

  38. Kirlik, G., Sayın, S.: A new algorithm for generating all nondominated solutions of multiobjective discrete optimization problems. Eur. J. Oper. Res. 232(3), 479–488 (2014)

  39. Lambrinidis, G., Tsantili-Kakoulidou, A.: Multi-objective optimization methods in novel drug design. Expert Opin. Drug Discov. 16(6), 647–658 (2021)

  40. Lentz, A.: Multicriteria shortest paths and related geometric problems. Ph.D. thesis, Bordeaux (2021)

  41. Lindauer, M., et al.: SMAC3: a versatile Bayesian optimization package for hyperparameter optimization (2021)

  42. Liu, S., Yan, X., Jin, Y.: End-to-end Pareto set prediction with graph neural networks for multi-objective facility location. In: Emmerich, M., et al. (eds.) EMO 2023. LNCS, vol. 13970, pp. 147–161. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-27250-9_11

  43. Lu, Y., Jain, J., Clarke, E., Fujita, M.: Efficient variable ordering using ABDD based sampling. In: Proceedings of the 37th Annual Design Automation Conference, pp. 687–692 (2000)

  44. Ozlen, M., Burton, B.A., MacRae, C.A.: Multi-objective integer programming: an improved recursive algorithm. J. Optim. Theory Appl. 160, 470–482 (2014)

  45. Parragh, S.N., Tricoire, F.: Branch-and-bound for bi-objective integer programming. INFORMS J. Comput. 31(4), 805–822 (2019)

  46. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)

  47. Przybylski, A., Gandibleux, X.: Multi-objective branch and bound. Eur. J. Oper. Res. 260(3), 856–872 (2017)

  48. Rice, M., Kulhari, S.: A survey of static variable ordering heuristics for efficient BDD/MDD construction. Technical report, University of California, p. 130 (2008)

  49. Rudell, R.: Dynamic variable ordering for ordered binary decision diagrams. In: Proceedings of 1993 International Conference on Computer Aided Design (ICCAD), pp. 42–47. IEEE (1993)

  50. Schede, E., et al.: A survey of methods for automated algorithm configuration. J. Artif. Intell. Res. 75, 425–487 (2022)

  51. Sierra-Altamiranda, A., Charkhgard, H., Dayarian, I., Eshragh, A., Javadi, S.: Learning to project in multi-objective binary linear programming. arXiv preprint arXiv:1901.10868 (2019)

  52. Song, Z., Chen, X., Luo, X., Wang, M., Dai, G.: Multi-objective optimization of agile satellite orbit design. Adv. Space Res. 62(11), 3053–3064 (2018)

  53. Sourd, F., Spanjaard, O.: A multiobjective branch-and-bound framework: application to the biobjective spanning tree problem. INFORMS J. Comput. 20(3), 472–484 (2008)

  54. Tangpattanakul, P., Jozefowiez, N., Lopez, P.: Multi-objective optimization for selecting and scheduling observations by agile earth observing satellites. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN 2012. LNCS, vol. 7492, pp. 112–121. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32964-7_12

  55. Wegener, I.: Branching Programs and Binary Decision Diagrams: Theory and Applications, vol. 4. SIAM (2000)

  56. Wu, Y., Song, W., Cao, Z., Zhang, J., Gupta, A., Lin, M.: Graph learning assisted multi-objective integer programming. In: Advances in Neural Information Processing Systems, vol. 35, pp. 17774–17787 (2022)

  57. Yu, Y., Zhang, J., Cheng, G., Schell, M., Okunieff, P.: Multi-objective optimization in radiotherapy: applications to stereotactic radiosurgery and prostate brachytherapy. Artif. Intell. Med. 19(1), 39–51 (2000)

Author information

Correspondence to Rahul Patel.

Appendices

A Feature Engineering

Table 5 lists the features for an MKP. They can be divided into two types: variable features and context features. Variable features summarize the information associated with a particular variable, whereas context features capture instance-level information. The variable features can be further partitioned into two categories: property-based and rank-based. Property-based features are based on the properties detailed in Table 1, and rank-based features are derived by sorting variables by their property values. In summary, we have 18 variable features and 19 context features, for a total of 37 features per variable. Note that all of our learning models are based on GBT, but not all variants use context features: ML and ML+A do not use them, whereas ML+C and ML+AC do. Finally, we highlight that similar features can be derived for problem classes with more than one constraint, as in [36, 51].
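The property- and rank-based split above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the property names below are stand-ins for the paper's Table 1, and the rank is simply the position of each variable when sorted by a property value in ascending order.

```python
import numpy as np

def variable_features(values, weights):
    """Sketch of property- and rank-based variable features for an MKP.

    `values` is an (n_objectives, n_variables) array of objective
    coefficients and `weights` the length-n_variables constraint weights.
    The property names are illustrative stand-ins for the paper's Table 1.
    """
    props = {
        "weight": weights.astype(float),
        "avg-value": values.mean(axis=0),
        "max-value": values.max(axis=0),
        "min-value": values.min(axis=0),
        "std-value": values.std(axis=0),
        "avg-value-by-weight": values.mean(axis=0) / weights,
        "min-value-by-weight": values.min(axis=0) / weights,
    }
    feats = {}
    for name, p in props.items():
        feats[name] = p                                    # property-based
        feats[f"rank_{name}"] = np.argsort(np.argsort(p))  # rank-based
    return feats
```

Context features (instance-level statistics shared by all variables of an instance) would be concatenated to each variable's vector in the ML+C and ML+AC variants.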

Table 5. Features for an MKP.

B SMAC Setup

Initialization. For both uses of SMAC, we warm-start with the MinWt ordering by setting all property weights to zero except that of the weight property, which is set to -1. Providing a reasonably good initial ordering heuristic reduces the need for extensive random exploration of the configuration space.
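A property-weight configuration induces a variable ordering through a linear scoring function; the warm start above is the configuration whose descending-score order coincides with ascending weight. The sketch below assumes hypothetical property names and a descending-score convention, so it illustrates the idea rather than the paper's exact implementation.

```python
import numpy as np

def order_variables(props, prop_weights):
    """Order variables by a linear score over property values.

    `props` maps property name -> per-variable values; `prop_weights`
    maps property name -> scalar weight (the SMAC configuration).
    Variables are sorted in descending order of score.
    """
    n = len(next(iter(props.values())))
    score = np.zeros(n)
    for name, w in prop_weights.items():
        score += w * np.asarray(props[name], dtype=float)
    return list(np.argsort(-score))  # indices, highest score first

# MinWt warm start: the weight property gets -1, everything else 0,
# so descending score is exactly ascending weight.
props = {"weight": [4.0, 1.0, 3.0], "avg-value": [5.0, 2.0, 9.0]}
warm_start = {"weight": -1.0, "avg-value": 0.0}
order_variables(props, warm_start)  # -> [1, 2, 0]
```

SMAC then searches over the weight vector, with each candidate configuration evaluated by compiling the BDD under the induced ordering and measuring the PF enumeration time.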

Random Seeds. As SMAC is a randomized optimization algorithm, running it with multiple random seeds increases the odds of finding a good solution. We leverage this idea for SmacI as its outputs will be used as labels for supervised learning and are thus expected to be of very high quality, i.e., we seek instance-specific parameter configurations that yield variable orderings with minimal solution times. We run SmacD with a single seed and use its average run time on the training set as a target to beat for SmacI. Since the latter optimizes at the instance level, one would hope it can do better than the distribution-level configuration of SmacD.

As such, for SmacI, we run SMAC on each instance with 5 seeds for all sizes except (5, 40) and (6, 40), for which we used one seed. Concretely, we start with one seed, compare the average run time of the best-performing seed per instance to the average enumeration time of SmacD on the training set, and relaunch SMAC with a new seed until the desired performance gap between SmacI and SmacD is achieved.
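The seed-addition protocol can be sketched as the loop below. The function names and the stopping rule (average of per-instance best times at or below a target ratio of the SmacD average) are illustrative assumptions; `run_smac(instance, seed)` stands in for a full SMAC run returning the best PF enumeration time found.

```python
def refine_with_seeds(run_smac, instances, smacd_avg_time,
                      target_ratio=1.0, max_seeds=5):
    """Sketch of the SmacI seed-addition protocol (names illustrative).

    `run_smac(instance, seed)` is assumed to return the PF enumeration
    time of the best configuration SMAC found for that instance/seed.
    Seeds are added until the per-instance best times beat the SmacD
    average by the desired margin, or the seed budget runs out.
    """
    best = {inst: float("inf") for inst in instances}
    for seed in range(max_seeds):
        for inst in instances:
            best[inst] = min(best[inst], run_smac(inst, seed))
        avg_best = sum(best.values()) / len(best)
        if avg_best <= target_ratio * smacd_avg_time:
            break  # SmacI now beats the distribution-level target
    return best, seed + 1  # per-instance labels and seeds used
```

The returned per-instance orderings then serve as the supervised-learning labels in Phase 2.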

Computational Budget. In the SmacD setting, we run SMAC with a 12-hour time limit, whereas in the SmacI case the time limit is set to 20 min per instance, except for sizes (3, 60), (4, 50), (5, 40), for which it is set to 5 min per instance. In both settings, SMAC runs on a 4-core machine with a 20GB memory limit. Generating the labels can thus be computationally expensive, which dictates the choice of sizes in the set of instance sizes, \(\mathcal S\). Specifically, we select instance sets with an average running time of at most 100 s (not too hard) and at least 5 s (not too easy) using the top-down compilation method described in Bergman et al. [5].

SmacI Performance Profile. Having established in Q1 that the search for high-quality variable orderings is justified, we now turn to a key component of Phase 1: the use of SMAC as a black-box optimizer that produces “label” variable orderings for subsequent ML in Phase 2. Figure 3 shows the average run time, on the training instances, of the incumbent property-weight configurations found by SMAC. These should be interpreted as standard optimization convergence curves where we seek small values (vertical axis) as quickly as possible (horizontal axis). Indeed, SMAC performs well, substantially improving over its initial warm-start solution (the MinWt ordering). The flat horizontal lines in the figure show the average run time of the single configuration found by SmacD. The latter is ultimately outperformed by the SmacI configurations on average, as desired.

C Additional Results

1.1 C.1 Impact of Variable Ordering on PF Enumeration Time

The heuristic orderings are constructed using the variable properties. For example, the heuristic ordering called max_avg-value-by-weight sorts the variables in descending order of the ratio of a variable's average value to its weight.
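A ⟨sort⟩_⟨property-name⟩ ordering (the naming convention used in Table 6) can be sketched as below. The property set shown is a hypothetical subset of the paper's Table 1 properties, chosen only to make the example runnable.

```python
import numpy as np

def heuristic_ordering(values, weights, sort, prop):
    """Order variables as <sort>_<prop>, e.g. max_avg-value-by-weight.

    `values` is (n_objectives, n_variables); `weights` has one entry per
    variable. `sort` is "max" (descending) or "min" (ascending). The
    property names are illustrative stand-ins for the paper's Table 1.
    """
    weights = np.asarray(weights, dtype=float)
    p = {
        "weight": weights,
        "avg-value": np.mean(values, axis=0),
        "min-value": np.min(values, axis=0),
        "avg-value-by-weight": np.mean(values, axis=0) / weights,
    }[prop]
    order = np.argsort(p)  # ascending property value
    return list(order[::-1] if sort == "max" else order)

values = np.array([[6.0, 2.0, 8.0],
                   [4.0, 2.0, 4.0]])
weights = np.array([5.0, 1.0, 2.0])
heuristic_ordering(values, weights, "max", "avg-value-by-weight")  # -> [2, 1, 0]
```

The MinWt baseline discussed below corresponds to `sort="min"`, `prop="weight"`.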

Table 6 summarizes the results of this experiment. We work with 250 MKP instances of the problem sizes (3, 20), (3, 40), (3, 60), (3, 80), (5, 20), (5, 30), (5, 40), (7, 20), (7, 30), and (7, 40). The values in the table are the ratios of the average run time of a heuristic ordering to that of the 5 random orderings; values smaller than one indicate that a heuristic ordering is faster than a random one, on average. The best heuristic ordering for each size is highlighted.

Fig. 3.

SmacI performance w.r.t. wallclock time across different sizes. For a given size, we first find, for each instance, the best seed, i.e., the seed whose incumbent property-weight configuration has the minimum PF enumeration time. Then, for a given wallclock time, we compute the mean and standard error of the PF enumeration time across instances for the incumbent property-weight configuration up to that wallclock time. The mean is plotted as a blue line, with a blue error band around it for the standard error. The horizontal black line represents the average PF enumeration time of the SmacD configuration on the validation set.

First, it is clear that some heuristic orderings (min_weight, max_min-value) consistently outperform the random ones across all problem sizes, and by a significant margin. In contrast, some heuristic orderings are consistently worse than random: for example, max_weight, min_avg-value, and min_min-value consistently underperform when the number of variables exceeds 20. Second, the choice of MinWt as a baseline was motivated by the results of this experiment, as min_weight wins on most sizes, as highlighted in Table 6. Altogether, this experiment validates the hypothesis that variable ordering has an impact on the PF enumeration time and that there exist efficient variable orderings that can significantly reduce this time.

1.2 C.2 Performance of LEO in Comparison to Other Baselines

Fig. 4.

Box plots of time to enumerate the PF for different sizes.

Fig. 5.

Performance profile of different methods in terms of the fraction of instances solved w.r.t. time for various sizes.

The detailed version of Table 3, with additional statistics, is provided in Table 7. To complement it, we present box plots and performance profiles in Fig. 4 and Fig. 5, respectively. Note how the PF enumeration time distribution for the ML method is much more concentrated, with a smaller median and fewer outliers than the other methods. Instances that timed out are omitted from this analysis. This performance improvement translates into a larger fraction of instances solved within a smaller amount of time, as highlighted in Fig. 5.

Table 6. Ratio of the average time taken by a heuristic ordering to that of 5 random orderings, across the 250 MKP instances of the specified sizes. A ratio of less (greater) than 1 indicates that the heuristic ordering performs better (worse) than a random ordering, on average. The name of a heuristic ordering follows the structure \(\langle \)sort\(\rangle \)_\(\langle \)property-name\(\rangle \), where \(\langle \)sort\(\rangle \) can either be min or max and \(\langle \)property-name\(\rangle \) can be equal to one of the properties defined in Table 1. A value of max (min) for \(\langle \)sort\(\rangle \) sorts the variables in descending (ascending) order of the property \(\langle \)property-name\(\rangle \).
Table 7. PF enumeration time statistics on the test set containing 100 instances with a time limit of 1800 s. Each row represents aggregate statistics for a given instance "Size" and "Method". "Count" stands for the number of instances on which the algorithm ran successfully without hitting the time or memory limits. "GMean", "Mean", "Min", "Max" and "Std" denote the geometric mean, arithmetic mean, minimum, maximum and standard deviation of the enumeration time computed across the "Count" instances. We use a shift of 5 to compute the "GMean".
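The shifted geometric mean reported in the "GMean" column of Table 7 is a standard aggregate in empirical optimization; a minimal sketch of the computation, assuming the usual shift-then-unshift definition:

```python
import math

def shifted_geometric_mean(times, shift=5.0):
    """Shifted geometric mean, as used for the "GMean" column (shift = 5).

    Each time is shifted by `shift` before taking logs, which damps the
    influence of very small run times; the shift is subtracted back at
    the end.
    """
    logs = [math.log(t + shift) for t in times]
    return math.exp(sum(logs) / len(logs)) - shift

shifted_geometric_mean([5.0, 45.0])  # -> sqrt(10 * 50) - 5 ≈ 17.36
```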

1.3 C.3 BDD Topology Analysis

The mean cumulative number of solutions generated at a given level is shown in Fig. 6, and the BDD topology analysis for all sizes is detailed in Table 8. It is evident from both that ML-based variable ordering leads to fewer intermediate solutions, which results in a smaller number of Checks and improved PF enumeration time.

Fig. 6.

Cumulative number of intermediate solutions generated at a particular layer across all sizes.

Fig. 7.

Feature importance plot for ML+A model. Refer to the caption of Fig. 2 for an explanation of the feature naming conventions.

Table 8. Analyzing the impact of BDD topology on the PF enumeration time on the test set. For a given “Size” and “Method”, the metric value indicates the percentage of mean performance by that “Method” compared to the Lex method. The “Nodes” (“Width”) represents the percentage of the mean of the number of nodes (mean width) of the BDDs generated by “Method” to that of Lex. “Degree” denotes the percentage of the mean ratio of the in-degree of the top 50 nodes with the highest in-degree in BDDs generated by “Method” to that of Lex. Similarly, “Checks” and “Time” indicate the percentage of the average Pareto-dominance checks and the average geometric mean of the PF enumeration time, respectively. For all metrics, values less than 100 are preferred. For example, a value of \(\delta \) for “Nodes” indicates that the mean number of nodes for “Method” is \(\delta /100 {\times }\) the mean number of nodes of “Lex”.

1.4 C.4 Feature Importance Plots for Size-Independent ML Models

Figures 7 and 8 show the feature importance plots for the ML+A and ML+AC models, respectively. The std-value and min-value-by-weight features are ranked highly by both models. The usefulness of the context features is clearly highlighted by their presence in the top 5 features of the ML+AC model.

Fig. 8.

Feature importance plot for ML+AC model. Features that start with “c_” are context features; see Table 5.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this paper

Patel, R., Khalil, E.B. (2024). LEO: Learning Efficient Orderings for Multiobjective Binary Decision Diagrams. In: Dilkina, B. (ed.) Integration of Constraint Programming, Artificial Intelligence, and Operations Research. CPAIOR 2024. Lecture Notes in Computer Science, vol 14743. Springer, Cham. https://doi.org/10.1007/978-3-031-60599-4_6

  • DOI: https://doi.org/10.1007/978-3-031-60599-4_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60601-4

  • Online ISBN: 978-3-031-60599-4
