Mixed Integer Linear Programming (MILP) is a pillar of mathematical optimization that offers a powerful modeling language for a wide range of applications. During the past decades, enormous algorithmic progress has been made in solving MILPs, and many commercial and academic software packages exist. Nevertheless, the availability of data, both from problem instances and from solvers, and the desire to solve new problems and larger (real-life) instances, trigger the need for continuing algorithmic development. MILP solvers use branch and bound as their main component. In recent years, there has been an explosive development in the use of machine learning algorithms for enhancing all main tasks involved in the branch-and-bound algorithm, such as primal heuristics, branching, cutting planes, node selection and solver configuration decisions. This paper presents a survey of such approaches, addressing the vision of integration of machine learning and mathematical optimization as complementary technologies, and how this integration can benefit MILP solving. In particular, we give detailed attention to machine learning algorithms that automatically optimize some metric of branch-and-bound efficiency. We also address how to represent MILPs in the context of applying learning algorithms, MILP benchmarks and software.
2023
OR Letters
A study of lattice reformulations for integer programming
Karen Aardal, Lara Scavuzzo, and Laurence A. Wolsey
Branch-and-bound for integer optimization typically uses single-variable disjunctions. Enumerative methods for integer optimization with theoretical guarantees use a non-binary search tree with general disjunctions based on lattice structure. These disjunctions are expensive to compute and challenging to implement. Here we compare two lattice reformulations that can be used to heuristically obtain general disjunctions in the original space, develop a new lattice-based variant, and compare the derived disjunctions computationally with those produced by the algorithm of Lovász and Scarf.
Optimal entanglement distribution policies in homogeneous repeater chains with cutoffs
Álvaro G Iñesta, Gayane Vardoyan, Lara Scavuzzo, and
1 more author
We study the limits of bipartite entanglement distribution using a chain of quantum repeaters that have quantum memories. To generate end-to-end entanglement, each node can attempt the generation of an entangled link with a neighbor or perform an entanglement swapping measurement. A maximum storage time, known as the cutoff, is enforced on the memories to ensure high-quality entanglement. Nodes follow a policy that determines when to perform each operation. Global-knowledge policies take into account all the information about the entanglement already produced. Here, we find global-knowledge policies that minimize the expected time to produce end-to-end entanglement. Our methods are based on Markov decision processes and on value and policy iteration. We compare optimal policies to a policy in which nodes only use local information. We find that the advantage in expected delivery time provided by an optimal global-knowledge policy increases with an increasing number of nodes and a decreasing probability of successful swapping.
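The value-iteration machinery behind such policies can be illustrated on a toy problem. The sketch below computes the expected number of time steps to establish a target number of links when each attempt succeeds independently with probability p; the state space, costs, and parameters are illustrative stand-ins, not the model from the paper.

```python
# Toy value iteration for minimizing expected delivery time.
# State s = number of links already established; each time step,
# one link attempt succeeds with probability p. V(s) is the expected
# number of steps to reach the target state. All numbers here are
# illustrative, not taken from the paper.

def value_iteration(n_links=3, p=0.5, tol=1e-10, max_iter=100_000):
    V = [0.0] * (n_links + 1)  # V(n_links) = 0: already done
    for _ in range(max_iter):
        delta = 0.0
        for s in range(n_links):
            # one step elapses; with prob p we gain a link, else we stay
            new_v = 1.0 + p * V[s + 1] + (1 - p) * V[s]
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            break
    return V

V = value_iteration()
# Each link takes 1/p = 2 expected attempts, so from 0 links the
# expected delivery time is 3 * 2 = 6 steps.
print(V[0])
```

The fixed point satisfies V(s) = 1/p + V(s+1), so the in-place (Gauss-Seidel) sweep converges geometrically to V(0) = n_links / p.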
2022
NeurIPS
Learning to branch with Tree MDPs
Lara Scavuzzo, Feng Chen, Didier Chételat, and
4 more authors
Advances in Neural Information Processing Systems, 2022
State-of-the-art Mixed Integer Linear Programming (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as branching rules. While approaches to learn branching strategies have received increasing attention and have shown very promising results, most of the literature focuses on learning fast approximations of the strong branching rule. Instead, we propose to learn branching rules from scratch with Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose a generalization of Markov Decision Processes (MDP), which we call tree MDP, that provides a more suitable formulation of the branching problem. We derive a policy gradient theorem for tree MDPs that exhibits better credit assignment than its temporal counterpart. We demonstrate through computational experiments that this new framework is suitable to tackle the learning-to-branch problem in MILP, and improves the learning convergence.
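The credit-assignment idea can be made concrete on a toy search tree: in a temporal MDP every decision is credited with the return of the whole episode, whereas in a tree MDP a branching decision is credited only with the cost of the subtree it creates. The tree shape and unit node costs below are illustrative, not taken from the paper.

```python
# Toy illustration of tree-style credit assignment. A branching decision
# at a node is credited with the cumulative cost of the subtree rooted
# at that node, rather than with the full-episode return. The tree and
# per-node costs (1 each, e.g. "nodes processed") are illustrative.

tree = {  # node -> (left child, right child); leaves map to None
    "root": ("a", "b"),
    "a": ("a1", "a2"),
    "b": None, "a1": None, "a2": None,
}
cost = {node: 1 for node in tree}

def subtree_return(node):
    """Return credited to the decision taken at `node`."""
    total = cost[node]
    children = tree[node]
    if children is not None:
        total += sum(subtree_return(c) for c in children)
    return total

credits = {n: subtree_return(n) for n in tree}
# The 5-node whole-episode cost is charged only at the root; the decision
# at "a" is credited with its 3-node subtree, and "b" with just itself.
print(credits)  # {'root': 5, 'a': 3, 'b': 1, 'a1': 1, 'a2': 1}
```

In a temporal formulation, every one of the five decisions would instead receive the full return of 5, diluting the learning signal.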
The machine learning for combinatorial optimization competition (ml4co): Results and insights
Maxime Gasse, Simon Bowly, Quentin Cappart, and
19 more authors
In NeurIPS 2021 Competitions and Demonstrations Track, 2022
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning as a new approach for solving combinatorial problems, either directly as solvers or by enhancing exact solvers. In this context, the ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components. The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate solver configuration. Three realistic datasets were considered: balanced item placement, workload apportionment, and maritime inventory routing. This last dataset was kept anonymous for the contestants.
2020
Ecole: A gym-like library for machine learning in combinatorial optimization solvers
Antoine Prouvost, Justin Dumouchelle, Lara Scavuzzo, and
3 more authors
We present Ecole, a new library to simplify machine learning research for combinatorial optimization. Ecole exposes several key decision tasks arising in general-purpose combinatorial optimization solvers as control problems over Markov decision processes. Its interface mimics the popular OpenAI Gym library and is both extensible and intuitive to use. We aim at making this library a standardized platform that will lower the bar of entry and accelerate innovation in the field. Documentation and code can be found at https://www.ecole.ai.
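The gym-style interface the abstract refers to follows the familiar reset/step control loop. The toy environment below only illustrates that interaction pattern; it is not Ecole's actual API (which wraps solver decisions such as branching and requires a SCIP installation), and all names here are hypothetical.

```python
# Minimal illustration of the reset/step control loop that gym-like
# libraries such as Ecole expose. ToyEnv is a stand-in: a real Ecole
# episode would wrap solver decisions (e.g. branching) rather than
# this deterministic walk toward a target position.

class ToyEnv:
    TARGET = 5

    def reset(self):
        self.position = 0
        return self.position, False  # (observation, done)

    def step(self, action):
        # action: +1 or -1; the episode ends at the target position
        self.position += action
        done = self.position >= self.TARGET
        reward = 1.0 if done else 0.0
        return self.position, reward, done

def run_episode(env, policy, max_steps=100):
    obs, done = env.reset()
    total_reward, steps = 0.0, 0
    while not done and steps < max_steps:
        obs, reward, done = env.step(policy(obs))
        total_reward += reward
        steps += 1
    return total_reward, steps

env = ToyEnv()
total, steps = run_episode(env, policy=lambda obs: 1)  # always move right
print(total, steps)  # reaches the target in 5 steps with reward 1.0
```

Because the environment hides the underlying dynamics behind reset/step, the same `run_episode` loop works unchanged whether the policy is a fixed rule, as here, or a learned model.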
2018
Electromagnetically induced transparency of on-demand single photons in a hybrid quantum network
Lucas Schweickert, Klaus D Jöns, Mehdi Namazi, and
8 more authors
Long-range quantum communication and quantum information processing require the development of light-matter interfaces for distributed quantum networks. Even though photons are ideal candidates for network links to transfer quantum information, the system of choice for the realization of quantum nodes has not been identified yet. Ideally, one strives for a hybrid network architecture, which will consist of different quantum systems, combining the strengths of each system. However, interfacing different quantum systems via photonic channels remains a major challenge because a detailed understanding of the underlying light-matter interaction is missing. Here, we show the coherent manipulation of single photons generated on-demand from a semiconductor quantum dot using a rubidium vapor quantum memory, forming a hybrid quantum network. We demonstrate the engineering of the photons’ temporal wave function using four-level atoms and the creation of a new type of electromagnetically induced transparency for quantum dot photons on resonance with rubidium transitions. Given the short lifetime of our quantum dot transition, the observed dynamics cannot be explained in the established steady-state picture. Our results play a pivotal role in understanding quantum light-matter interactions at short time scales. These findings demonstrate a fundamental active node to construct future large-scale hybrid quantum networks.