Domenico Salvagnin
Department of Information Engineering, University of Padova
Publications

1. Fast Approaches to Improve the Robustness of a Railway Timetable. Transportation Science 43:321-335, 2009
Abstract:
The Train Timetabling Problem (TTP) consists in finding a train schedule on a railway network that satisfies some operational constraints and maximizes some profit function which accounts for the efficiency of the infrastructure usage. In practical cases, however, the maximization of the objective function is not enough, and one calls for a robust solution that is capable of absorbing, as much as possible, delays/disturbances on the network. In this paper we propose and computationally analyze four different methods to improve the robustness of a given TTP solution for the aperiodic (non-cyclic) case. The approaches combine Linear Programming (LP) and ad-hoc Stochastic Programming/Robust Optimization techniques. We computationally compare the effectiveness and practical applicability of the four techniques under investigation on real-world test cases from the Italian railway company (Trenitalia). The outcome is that two of the proposed techniques are very fast and provide robust solutions of comparable quality with respect to the standard (but very time-consuming) Stochastic Programming approach.
Bibtex
@article{FSZ09, Author = {Matteo Fischetti and Domenico Salvagnin and Arrigo Zanette}, Journal = {Transportation Science}, Number = {3}, Pages = {321--335}, Title = {Fast approaches to improve the robustness of a railway timetable}, Volume = {43}, Year = {2009} }

2. Just MIP it! In V. Maniezzo, T. Stuetzle, S. Voss, Eds., MATHEURISTICS: Hybridizing Metaheuristics and Mathematical Programming, Annals of Information Systems 10:39-70, 2009
Abstract:
Modern Mixed-Integer Programming (MIP) solvers exploit a rich arsenal of tools to attack hard problems. It is widely accepted by the OR community that the solution of very hard MIPs can take advantage from the solution of a series of time-consuming auxiliary Linear Programs (LPs) intended to enhance the performance of the overall MIP solver. E.g., auxiliary LPs may be solved to generate powerful disjunctive cuts, or to implement a strong branching policy. Also well established is the fact that finding good-quality heuristic MIP solutions often requires a computing time that is just comparable to that needed to solve the LP relaxations. So, it makes sense to think of a new generation of MIP solvers where auxiliary MIPs (as opposed to LPs) are heuristically solved on the fly, with the aim of bringing the MIP technology under the chest of the MIP solver itself. This leads to the idea of “translating into a MIP model” (MIPping) some crucial decisions to be taken within a MIP algorithm (How to cut? How to improve the incumbent solution? Is the current node dominated?). In this paper we survey a number of successful applications of the above approach.
Bibtex
@incollection{JMIPit, Author = {Matteo Fischetti and Andrea Lodi and Domenico Salvagnin}, Booktitle = {Matheuristics}, Editor = {V. Maniezzo and T. Stuetzle and S. Voss}, Pages = {39--70}, Publisher = {Springer US}, Series = {Annals of Information Systems}, Title = {Just MIP it!}, Volume = {10}, Year = {2010} }

3. Feasibility Pump 2.0. Mathematical Programming Computation 1:201-222, 2009
Abstract:
Finding a feasible solution of a given Mixed-Integer Programming (MIP) model is a very important NP-complete problem that can be extremely hard in practice. Feasibility Pump (FP) is a heuristic scheme for finding a feasible solution to general MIPs that can be viewed as a clever way to round a sequence of fractional solutions of the LP relaxation, until a feasible one is eventually found. In this paper we study the effect of replacing the original rounding function (which is fast and simple, but somehow blind) with more clever rounding heuristics. In particular, we investigate the use of a diving-like procedure based on rounding and constraint propagation — a basic tool in Constraint Programming. Extensive computational results on binary and general integer MIPs from the literature show that the new approach produces a substantial improvement of the FP success rate, without slowing down the method and with a significantly better quality of the feasible solutions found.
Bibtex
@article{FP2, Author = {Matteo Fischetti and Domenico Salvagnin}, Journal = {Mathematical Programming Computation}, Number = {2-3}, Pages = {201--222}, Title = {Feasibility Pump 2.0}, Volume = {1}, Year = {2009} }
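The propagation-based rounding at the heart of this paper can be illustrated with a toy sketch (this is not the paper's implementation: the constraint format, the single bound-tightening pass, and the most-fractional-first fixing order are all simplifying assumptions):

```python
import math

def propagate(cons, lb, ub):
    """One pass of bound tightening for constraints sum(c_j * x_j) <= rhs."""
    for coeffs, rhs in cons:
        # minimum possible activity of the left-hand side
        min_act = sum(c * (lb[j] if c > 0 else ub[j]) for j, c in coeffs.items())
        for j, c in coeffs.items():
            # residual slack once variable j's own contribution is removed
            slack = rhs - (min_act - c * (lb[j] if c > 0 else ub[j]))
            if c > 0:
                ub[j] = min(ub[j], math.floor(slack / c))
            else:
                lb[j] = max(lb[j], math.ceil(slack / c))
    return lb, ub

def propagation_round(xstar, cons, lb, ub):
    """Round the fractional point xstar one variable at a time, tightening
    bounds after each fixing so that later roundings stay consistent."""
    x = {}
    # fix the most fractional variables first (an assumed ordering)
    order = sorted(xstar, key=lambda j: -abs(xstar[j] - round(xstar[j])))
    for j in order:
        v = max(lb[j], min(ub[j], round(xstar[j])))  # respect tightened bounds
        x[j] = lb[j] = ub[j] = v                     # fix the variable
        lb, ub = propagate(cons, lb, ub)
    return x
```

For instance, rounding (0.7, 0.6) under x0 + x1 <= 1: blind rounding yields the infeasible (1, 1), while the propagation-aware version fixes x1 = 1 first, which tightens x0's upper bound to 0 and forces a feasible rounding.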

4. Pruning Moves. INFORMS Journal on Computing 22:108-119, 2010
Abstract:
The concept of dominance among nodes of a branch-and-bound tree, although known for a long time, is typically not exploited by general-purpose mixed-integer linear programming (MILP) codes. The starting point of our work was the general-purpose dominance procedure proposed in the 1980s by Fischetti and Toth, where the dominance test at a given node of the branch-and-bound tree consists of the (possibly heuristic) solution of a restricted MILP only involving the fixed variables. Both theoretical and practical issues concerning this procedure are analyzed, and important improvements are proposed. In particular, we use the dominance test not only to fathom the current node of the tree, but also to derive variable configurations called “nogoods” and, more generally, “improving moves”. These latter configurations, which we rename “pruning moves” so as to stress their use in a node-fathoming context, are used during the enumeration to fathom large sets of dominated solutions in a computationally effective way. Computational results on a testbed of MILP instances whose structure is amenable to dominance are reported, showing that the proposed method can lead to a considerable speedup when embedded in a commercial MILP solver.
Bibtex
@article{PMoves, Author = {Matteo Fischetti and Domenico Salvagnin}, Journal = {INFORMS Journal on Computing}, Number = {1}, Pages = {108--119}, Title = {Pruning Moves}, Volume = {22}, Year = {2010} }
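The node-fathoming use of nogoods can be sketched in a few lines (hypothetical helper names; the dominance test that actually produces the nogoods is the paper's contribution and is omitted here):

```python
def matches(fixed, nogood):
    """True if the node's fixed variables extend the nogood, i.e. every
    (variable, value) pair of the nogood is already fixed the same way."""
    return all(fixed.get(v) == val for v, val in nogood.items())

def prune(node_fixed, nogoods):
    """Fathom the current node if its fixings match any recorded pruning move:
    every leaf below it is then dominated by a known better configuration."""
    return any(matches(node_fixed, ng) for ng in nogoods)
```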

5. A note on the selection of Benders’ cuts. Mathematical Programming B 124:175-182, 2010
Abstract:
A new cut selection criterion for Benders’ cuts is proposed and computationally analyzed. The results show that the new criterion is more robust—and often considerably faster—than the standard ones.
Bibtex
@article{Benders, Author = {Matteo Fischetti and Domenico Salvagnin and Arrigo Zanette}, Journal = {Mathematical Programming B}, Pages = {175--182}, Title = {A note on the selection of {B}enders' cuts}, Volume = {124}, Year = {2010} }

6. Finding the Next Solution in Constraint and Preference-based Knowledge Representation Formalisms. KR 2010 Proceedings, 425-433
Abstract:
In constraint or preference reasoning, a typical task is to compute a solution, or an optimal solution. However, when one already has a solution, it may be important to produce the next solution following the given one in a linearization of the solution ordering where more preferred solutions are ordered first. In this paper, we study the computational complexity of finding the next solution in some common preference-based representation formalisms. We show that this problem is hard in general CSPs, but it can be easy in tree-shaped CSPs and tree-shaped fuzzy CSPs. However, it is difficult in weighted CSPs, even if we restrict the shape of the constraint graph. We also consider CP-nets, showing that the problem is easy in acyclic CP-nets, as well as in constrained acyclic CP-nets where the (soft) constraints are tree-shaped and topologically compatible with the CP-net.
Bibtex
@inproceedings{Next, Author = {Ronen Brafman and Francesca Rossi and Domenico Salvagnin and Kristen Brent Venable and Toby Walsh}, Booktitle = {Principles of Knowledge Representation and Reasoning}, Pages = {425--433}, Title = {Finding the Next Solution in Constraint and Preference-based Knowledge Representation Formalisms}, Year = {2010} }

7. An In-Out Approach to Disjunctive Optimization. CPAIOR 2010 Proceedings, 136-140
Abstract:
Cutting plane methods are widely used for solving convex optimization problems and are of fundamental importance, e.g., to provide tight bounds for Mixed-Integer Programs (MIPs). This is obtained by embedding a cut-separation module within a search scheme. The importance of a sound search scheme is well known in the Constraint Programming (CP) community. Unfortunately, the “standard” search scheme typically used for MIP problems, known as the Kelley method, is often quite unsatisfactory because of saturation issues. In this paper we address the so-called Lift-and-Project closure for 0-1 MIPs associated with all disjunctive cuts generated from a given set of elementary disjunctions. We focus on the search scheme embedding the generated cuts. In particular, we analyze a general meta-scheme for cutting plane algorithms, called in-out search, that was recently proposed by Ben-Ameur and Neto. Computational results on test instances from the literature are presented, showing that using a more clever meta-scheme on top of a black-box cut generator may lead to a significant improvement.
Bibtex
@inproceedings{Yoyo, Author = {Matteo Fischetti and Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {136--140}, Title = {An In-Out Approach to Disjunctive Optimization}, Year = {2010} }
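A minimal sketch of the in-out idea of Ben-Ameur and Neto as used here: instead of separating the point to be cut off directly (as the Kelley method does), try to separate intermediate points between a stabilizing interior point and the target, moving the interior point outward whenever no cut is found. The `separate` oracle and the fixed step `alpha` are assumptions of this sketch:

```python
def in_out_separation(x_in, x_out, separate, alpha=0.5, max_iter=20):
    """Try to cut off x_out by first separating intermediate points.
    separate(p) returns a violated cut for point p, or None if p is feasible."""
    for _ in range(max_iter):
        # trial point between the interior point and the point to cut off
        p = [(1 - alpha) * xi + alpha * xo for xi, xo in zip(x_in, x_out)]
        cut = separate(p)
        if cut is not None:
            return cut
        x_in = p  # no cut found: p is "in", so move the interior point outward
    return separate(x_out)  # fall back to standard Kelley-style separation
```

Separating points that are not extreme tends to produce deeper, less pathological cuts and avoids the saturation the abstract mentions.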

8. A Relax-and-Cut Framework for Gomory’s Mixed-Integer Cuts. Mathematical Programming Computation 3:79-102, 2011
Abstract:
Gomory Mixed-Integer Cuts (GMICs) are widely used in modern branch-and-cut codes for the solution of Mixed-Integer Programs. Typically, GMICs are iteratively generated from the optimal basis of the current Linear Programming (LP) relaxation, and immediately added to the LP before the next round of cuts is generated. Unfortunately, this approach is prone to instability. In this paper we analyze a different scheme for the generation of rank-1 GMICs read from a basis of the original LP—the one before the addition of any cut. We adopt a relax-and-cut approach where the generated GMICs are not added to the current LP, but immediately relaxed in a Lagrangian fashion. Various elaborations of the basic idea are presented, that lead to very fast—yet accurate—variants of the basic scheme. Very encouraging computational results are presented, with a comparison with alternative techniques from the literature also aimed at improving the GMIC quality. We also show how our method can be integrated with other cut generators, and successfully used in a cut-and-branch enumerative framework.
Bibtex
@article{RelaxCut, Author = {Matteo Fischetti and Domenico Salvagnin}, Journal = {Mathematical Programming Computation}, Pages = {79--102}, Title = {A Relax-and-Cut Framework for {G}omory's Mixed-Integer Cuts}, Volume = {3}, Year = {2011} }

9. MIPLIB 2010 - Mixed Integer Programming Library version 5. Mathematical Programming Computation 3:103-163, 2011
Abstract:
This paper reports on the fifth version of the Mixed Integer Programming Library. The MIPLIB 2010 is the first MIPLIB release that has been assembled by a large group from academia and from industry, all of whom work in integer programming. There was mutual consent that the concept of the library had to be expanded in order to fulfill the needs of the community. The new version comprises 361 instances sorted into several groups. This includes the main benchmark test set of 87 instances, which are all solvable by today’s codes, and also the challenge test set with 164 instances, many of which are currently unsolved. For the first time, we include scripts to run automated tests in a predefined way. Further, there is a solution checker to test the accuracy of provided solutions using exact arithmetic.
Bibtex
@article{MIPLIB2010, Author = {Thorsten Koch and Tobias Achterberg and Erling Andersen and Oliver Bastert and Timo Berthold and Robert E. Bixby and Emilie Danna and Gerald Gamrath and Ambros M. Gleixner and Stefan Heinz and Andrea Lodi and Hans Mittelmann and Ted Ralphs and Domenico Salvagnin and Daniel E. Steffy and Kati Wolter}, Journal = {Mathematical Programming Computation}, Pages = {103--163}, Title = {{MIPLIB} 2010 - {M}ixed {I}nteger {P}rogramming {L}ibrary version 5}, Volume = {3}, Year = {2011} }

10. Three ideas for the Quadratic Assignment Problem. Operations Research 60:954-964, 2012
Abstract:
We address the exact solution of the famous esc instances of the quadratic assignment problem. These are extremely hard instances that remained unsolved—even allowing for a tremendous computing power—by using all previous techniques from the literature. During this challenging task we found that three ideas were particularly useful, and qualified as a breakthrough for our approach. The present paper is about describing these ideas and their impact in solving esc instances. Our method was able to solve, in a matter of seconds or minutes on a single PC, all easy cases (all esc16* plus esc32e and esc32g). The three previously unsolved esc32c, esc32d and esc64a were solved in less than half an hour, in total, on a single PC. We also report the solution in about 5 hours of the previously unsolved tai64c. By using a facility-flow splitting procedure, we were finally able to solve to proven optimality, for the first time, both esc32h (in about 2 hours) as well as “the big fish” esc128 (to our great surprise, the solution of the latter required just a few seconds on a single PC).
Bibtex
@article{QAP, Author = {Matteo Fischetti and Michele Monaci and Domenico Salvagnin}, Journal = {Operations Research}, Number = {4}, Pages = {954--964}, Title = {Three ideas for the Quadratic Assignment Problem}, Volume = {60}, Year = {2012} }

11. Winner determination in voting trees with incomplete preferences and weighted votes. Autonomous Agents and Multi-Agent Systems 25:130-157, 2012
Abstract:
In multi-agent settings where agents have different preferences, preference aggregation can be an important issue. Voting is a general method to aggregate preferences. We consider the use of voting tree rules to aggregate agents’ preferences. In a voting tree, decisions are taken by performing a sequence of pairwise comparisons in a binary tree where each comparison is a majority vote among the agents. Incompleteness in the agents’ preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. We study how to determine the winners when preferences may be incomplete, not only for voting tree rules (where the tree is assumed to be fixed), but also for the Schwartz rule (in which the winners are the candidates winning for at least one voting tree). In addition, we study how to determine the winners when only balanced trees are allowed. In each setting, we address the complexity of computing necessary (respectively, possible) winners, which are those candidates winning for all completions (respectively, at least one completion) of the incomplete profile. We show that many such winner determination problems are computationally intractable when the votes are weighted. However, in some cases, the exact complexity remains unknown. Since it is generally computationally difficult to find the exact set of winners for voting trees and the Schwartz rule, we propose several heuristics, based on the completions of the (incomplete) majority graph built from the incomplete profiles, that find in polynomial time a superset of the possible winners and a subset of the necessary winners.
Bibtex
@article{Winner, Author = {Jerome Lang and Maria Silvia Pini and Francesca Rossi and Domenico Salvagnin and Kristen Brent Venable and Toby Walsh}, Journal = {Autonomous Agents and Multi-Agent Systems}, Pages = {130--157}, Title = {Winner determination in voting trees with incomplete preferences and weighted votes}, Volume = {25}, Year = {2012} }
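The basic voting-tree evaluation underlying these winner determination problems can be sketched for complete profiles (the tie-breaking rule below is an assumption, and the paper's actual focus, incomplete profiles, is not handled by this sketch):

```python
def majority_winner(a, b, profile, weights=None):
    """Candidate a beats b if a (weighted) majority ranks a above b.
    profile: one ranking per voter, most preferred candidate first."""
    if weights is None:
        weights = [1] * len(profile)
    score = sum(w if r.index(a) < r.index(b) else -w
                for r, w in zip(profile, weights))
    return a if score > 0 else b  # ties go to b: an assumed tie-breaking rule

def eval_tree(tree, profile, weights=None):
    """tree is either a candidate name or a pair (left, right) of subtrees;
    the winners of the two subtrees face each other in a majority vote."""
    if not isinstance(tree, tuple):
        return tree
    a = eval_tree(tree[0], profile, weights)
    b = eval_tree(tree[1], profile, weights)
    return majority_winner(a, b, profile, weights)
```

With a Condorcet cycle (a beats b, b beats c, c beats a), the winner depends on the tree shape, which is exactly why the Schwartz rule quantifies over all trees.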

12. A hybrid MIP/CP approach for multi-activity shift scheduling. CP 2012 Proceedings, 633-646
Abstract:
We propose a hybrid MIP/CP approach for solving multi-activity shift scheduling problems, based on regular languages that partially describe the set of feasible shifts. We use an aggregated MIP relaxation to capture the optimization part of the problem and to get rid of symmetry. Whenever the MIP solver generates an integer solution, we use a CP solver to check whether it can be turned into a feasible solution of the original problem. A MIP-based heuristic is also developed. Computational results are reported, showing that the proposed method is a promising alternative compared to the state of the art.
Bibtex
@inproceedings{Regular, Author = {Domenico Salvagnin and Toby Walsh}, Booktitle = {CP}, Pages = {633--646}, Title = {A hybrid MIP/CP approach for multi-activity shift scheduling}, Year = {2012} }

13. Approximating the split closure. INFORMS Journal on Computing 25:808-819, 2013
Abstract:
The split closure has proved in practice to be a very tight approximation of the integer hull formulation of a generic mixed-integer linear program. However, exact separation procedures for optimizing over the split closure have unacceptable computing times in practice, hence many different heuristic strategies have been proposed in recent years. In this paper we present a new overall framework for approximating the split closure that merges different ideas from the previous approaches. Computational results prove the effectiveness of the proposed procedure compared to the state of the art, showing that a good approximation of the split closure bound can be obtained with very reasonable computing times.
Bibtex
@article{SplitApprox, Author = {Matteo Fischetti and Domenico Salvagnin}, Journal = {INFORMS Journal on Computing}, Number = {4}, Pages = {808--819}, Title = {Approximating the split closure}, Volume = {25}, Year = {2013} }

14. Cloud Branching. CPAIOR 2013 Proceedings, 28-43
Abstract:
Branch-and-bound methods for mixed-integer programming (MIP) are traditionally based on solving a linear programming (LP) relaxation and branching on a variable which takes a fractional value in the (single) computed relaxation optimum. In this paper we study branching strategies for mixed-integer programs that exploit the knowledge of multiple alternative optimal solutions (a cloud) of the current LP relaxation. These strategies naturally extend state-of-the-art methods like strong branching, pseudocost branching, and their hybrids. We show that by exploiting dual degeneracy, and thus multiple alternative optimal solutions, it is possible to enhance traditional methods. We present preliminary computational results, applying the newly proposed strategy to full strong branching, which is known to be the MIP branching rule leading to the smallest number of search nodes. It turns out that cloud branching can reduce the mean running time by up to 30% on standard test sets.
Bibtex
@inproceedings{Cloud, Author = {Timo Berthold and Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {28--43}, Title = {Cloud Branching}, Year = {2013} }
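One ingredient of the cloud idea can be sketched as a candidate filter: if a variable is integral in some alternative LP optimum, fixing it to that value in one branching child leaves the LP bound unchanged, so the variable is a less attractive candidate for (full strong) branching. A minimal version, with a hypothetical function name and cloud points given as plain lists of nonnegative variable values:

```python
def cloud_filter(cloud, eps=1e-6):
    """Keep as branching candidates only the variables that are fractional
    in every point of the cloud of alternative LP optima: if x_j is integral
    in some optimum, one branching child cannot improve the dual bound."""
    def frac(v):
        # distance to the nearest integer, assuming v >= 0
        return min(v - int(v), int(v) + 1 - v) > eps
    return [j for j in range(len(cloud[0]))
            if all(frac(p[j]) for p in cloud)]
```

Since all cloud points share the same objective value, this filtering can only discard candidates that full strong branching would have evaluated in vain, which is where the reported savings come from.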

15. Orbital Shrinking: a new tool for hybrid MIP/CP methods. CPAIOR 2013 Proceedings, 204-215
Abstract:
Orbital shrinking is a newly developed technique in the MIP community to deal with symmetry issues, which is based on aggregation rather than on symmetry breaking. In a recent work, a hybrid MIP/CP scheme based on orbital shrinking was developed for the multi-activity shift scheduling problem, showing significant improvements over previous pure MIP approaches. In the present paper we show that the scheme above can be extended to a general framework for solving arbitrary symmetric MIP instances. This framework naturally provides a new way for devising hybrid MIP/CP decompositions. Finally, we specialize the above framework to the multiple knapsack problem. Computational results show that the resulting method can be orders of magnitude faster than pure MIP approaches on hard symmetric instances.
Bibtex
@inproceedings{Mkp, Author = {Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {204--215}, Title = {Orbital Shrinking: a new tool for hybrid MIP/CP methods}, Year = {2013} }
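For the multiple knapsack specialization, the shrunken relaxation takes a particularly simple form when all k knapsacks share the same capacity C: the k assignment variables of each item form one orbit and are aggregated into a single 0/1 variable, giving one knapsack of capacity k*C. A sketch of that relaxation bound (plain dynamic programming; the CP feasibility check that turns the scheme into an exact method is omitted):

```python
def aggregated_relaxation_value(items, k, C):
    """Orbital-shrinking style bound for k identical knapsacks of capacity C:
    aggregate each item's k assignment variables into one 0/1 variable and
    solve a single 0/1 knapsack of capacity k*C by dynamic programming."""
    cap = k * C
    dp = [0] * (cap + 1)  # dp[c] = best profit with total weight <= c
    for w, p in items:    # items given as (weight, profit) pairs
        for c in range(cap, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[cap]
```

The value is an upper bound on the true multiple knapsack optimum: every selection feasible for the k knapsacks fits in the aggregated one, but not conversely, which is why the hybrid scheme still needs a CP check on each aggregated solution.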

16. The strength of multi-row models. Mathematical Programming Computation 7:113-148, 2015
Abstract:
We develop a computational method for computing valid inequalities for any mixed-integer set PJ. Our implementation takes the form of a separator and is capable of returning only facet-defining inequalities for conv(PJ). The separator is not comparable in speed with the specific cutting-plane generators used in branch-and-cut solvers, but it is general-purpose. We can thus use it to compute cuts derived from any reasonably small relaxation PJ of a general mixed-integer problem, even when there exists no specific implementation for computing cuts with PJ. Exploiting this, we evaluate, from a computational perspective, the usefulness of cuts derived from several types of multi-row relaxations. In particular, we present results with four different strengthenings of the two-row intersection cut model, and multi-row models with up to fifteen rows. We conclude that only fully strengthened two-row cuts seem to offer a significant advantage over two-row intersection cuts. Our results also indicate that the improvement obtained by going from models with very few rows to models with up to fifteen rows may not be worth the increased computing cost.
Bibtex
@article{Multirow, Author = {Quentin Louveaux and Laurent Poirrier and Domenico Salvagnin}, Journal = {Mathematical Programming Computation}, Pages = {113--148}, Title = {The strength of multi-row models}, Volume = {7}, Year = {2015} }

17. Improving branch-and-cut performance by random sampling. Mathematical Programming Computation 8:113-132, 2016
Abstract:
We discuss the variability in the performance of multiple runs of branch-and-cut mixed-integer linear programming solvers, and we concentrate on the one deriving from the use of different optimal bases of the linear programming relaxations. We propose a new algorithm exploiting more than one of those bases and we show that different versions of the algorithm can be used to stabilize and improve the performance of the solver.
Bibtex
@article{Randomness, Author = {Matteo Fischetti and Andrea Lodi and Michele Monaci and Domenico Salvagnin and Andrea Tramontani}, Journal = {Mathematical Programming Computation}, Number = {1}, Pages = {113--132}, Title = {Improving branch-and-cut performance by random sampling}, Volume = {8}, Year = {2016} }

18. Exact and Heuristic Approaches for Directional Sensor Control. IEEE Sensors Journal 15:6633-6639, 2015
Abstract:
Directional sensors are gaining importance due to applications including surveillance, detection, and tracking. Such sensors have a limited field of view and a discrete set of directions they can be pointed to. The Directional Sensor Control problem (DSCP) consists in assigning a direction of view to each sensor. The location of the targets is known with uncertainty given by a joint a priori Gaussian distribution, while sensor locations are known exactly. In this paper we study exact and heuristic approaches for the DSCP with the goal of maximizing information gain on the location of a given set of immobile target objects. In particular, we propose an exact mixed integer convex programming (MICP) formulation to be solved by a black-box MICP solver, and several metaheuristic approaches based on local search. A computational evaluation shows the very good performance of both methods.
Bibtex
@article{Sensors, Author = {Hans D. Mittelmann and Domenico Salvagnin}, Journal = {IEEE Sensors Journal}, Number = {11}, Pages = {6633--6639}, Title = {Exact and Heuristic Approaches for Directional Sensor Control}, Volume = {15}, Year = {2015} }

19. On Solving a Hard Quadratic 3-Dimensional Assignment Problem. Mathematical Programming Computation 7:219-234, 2015
Abstract:
We address the solution of a very challenging (and previously unsolved) instance of the quadratic 3-dimensional assignment problem, arising in digital wireless communications. The paper describes the techniques developed to solve this instance to optimality, from the choice of an appropriate mixed-integer programming formulation, to cutting planes and symmetry handling. Using these techniques we were able to solve the target instance with moderate computational effort (2.5 million nodes and one week of computations on a standard PC).
Bibtex
@article{Q3AP, Author = {Hans D. Mittelmann and Domenico Salvagnin}, Journal = {Mathematical Programming Computation}, Pages = {219--234}, Title = {On Solving a Hard Quadratic 3-Dimensional Assignment Problem}, Volume = {7}, Year = {2015} }

20. Detecting and exploiting permutation structures in MIPs. CPAIOR 2014 Proceedings, 29-44
Abstract:
Many combinatorial optimization problems can be formulated as the search for the best possible permutation of a given set of objects, according to a given objective function. The corresponding MIP formulation is thus typically made of an assignment substructure, plus additional constraints and variables (as needed) to express the objective function. Unfortunately, the permutation structure is generally lost when the model is flattened out as a mixed-integer program, and state-of-the-art MIP solvers do not take full advantage of it. In the present paper we propose a heuristic procedure to detect permutation problems from their MIP formulation, and show how we can take advantage of this knowledge to speed up the solution process. Computational results on quadratic assignment and single machine scheduling problems show that the technique, when embedded in a state-of-the-art MIP solver, can indeed improve performance.
Bibtex
@inproceedings{Assign, Author = {Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {29--44}, Title = {Detecting and exploiting permutation structures in {MIP}s}, Year = {2014} }

21. Self-splitting of workload in parallel computation. CPAIOR 2014 Proceedings, 394-404
Abstract:
Parallel computation requires splitting a job among a set of processing units called workers. The computation is generally performed by a set of one or more master workers that split the workload into chunks and distribute them to a set of slave workers. To guarantee correctness and achieve a desirable balancing of the split (needed for scalability), many schemes introduce a (possibly large) overhead due to communication/synchronization among the involved workers. We propose a simple mechanism to avoid the communication issues of the approach above. In the new paradigm, called SelfSplit, each worker is able to autonomously determine, without any communication with the other workers, the job parts it has to process. The above feature makes the scheme well suited for those applications where communication among workers is time-consuming or unreliable. In particular, it allows for a simple yet effective parallelization of divide-and-conquer algorithms with a short input that produce a very large number of time-consuming job parts, as happens, e.g., when an NP-hard problem is solved by an enumerative method. Computational results are reported, showing that SelfSplit can achieve an almost linear speedup for hard Constraint Programming applications, even when 64 workers are considered.
Bibtex
@inproceedings{SelfSplit, Author = {Matteo Fischetti and Michele Monaci and Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {394--404}, Title = {Self-splitting of workload in parallel computation}, Year = {2014} }
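A minimal sketch of the SelfSplit paradigm (the deterministic node numbering, the fixed sampling depth, and the round-robin assignment below are simplifications of the paper's scheme): every worker runs exactly the same code with its own id, expands the top of the tree identically, and keeps only the frontier nodes assigned to it, with no communication at all:

```python
def self_split(root, expand, is_leaf, worker_id, n_workers, sample_depth=3):
    """Every worker calls this identically (same root, expand, is_leaf):
    the top of the tree is enumerated deterministically, so all workers
    agree on the frontier and its ordering; each then keeps only the
    nodes whose position is congruent to its id modulo n_workers."""
    frontier = [root]
    for _ in range(sample_depth):
        frontier = ([c for n in frontier if not is_leaf(n) for c in expand(n)]
                    + [n for n in frontier if is_leaf(n)])
    return [n for i, n in enumerate(frontier) if i % n_workers == worker_id]
```

Because every worker computes the same frontier, the returned subsets partition it exactly: no subtree is processed twice and none is lost, despite zero messages being exchanged.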

22. Orbital Shrinking: Theory and Applications. Discrete Applied Mathematics 222:109-123, 2017
Abstract:
Symmetry plays an important role in optimization. The usual approach to cope with symmetry in discrete optimization is to try to eliminate it by introducing artificial symmetry-breaking conditions into the problem, and/or by using an ad-hoc search strategy. This is the common approach in both the mixed-integer programming (MIP) and constraint programming (CP) communities. In this paper we argue that symmetry is instead a beneficial feature that we should preserve and exploit as much as possible, breaking it only as a last resort. To this end, we outline a new approach, that we call orbital shrinking, where additional integer variables expressing variable sums within each symmetry orbit are introduced and used to “encapsulate” model symmetry. This leads to a discrete relaxation of the original problem, whose solution yields a bound on its optimal value. Then, we show that orbital shrinking can be turned into an exact method for solving arbitrary symmetric MIP instances. The proposed method naturally provides a new way for devising hybrid MIP/CP decompositions. Finally, we report computational results on two specific applications of the method, namely the multi-activity shift scheduling and the multiple knapsack problem, showing that the resulting method can be orders of magnitude faster than pure MIP or CP approaches.
Bibtex
@article{OrbShrink, Author = {Matteo Fischetti and Leo Liberti and Domenico Salvagnin and Toby Walsh}, Journal = {Discrete Applied Mathematics}, Pages = {109--123}, Title = {Orbital Shrinking: Theory and Applications}, Volume = {222}, Year = {2017} }

23. Self-split parallelization for Mixed Integer Linear Programming. Computers & Operations Research 93:101-112, 2018
Abstract:
SelfSplit is a simple static mechanism to convert a sequential tree-search code into a parallel one. In this paradigm, tree search is distributed among a set of identical workers, each of which is able to autonomously determine, without any communication with the other workers, the job parts it has to process. SelfSplit already proved quite effective in parallelizing Constraint Programming solvers. In the present paper we investigate the performance of SelfSplit when applied to a Mixed-Integer Linear Programming (MILP) solver. Both ad-hoc and general-purpose MILP codes have been considered. Computational results show that SelfSplit, in spite of its simplicity, can achieve good speedups even in the MILP context.
Bibtex
@article{SelfSplit2, Author = {Matteo Fischetti and Michele Monaci and Domenico Salvagnin}, Journal = {Computers & Operations Research}, Pages = {101--112}, Title = {Self-split parallelization for Mixed Integer Linear Programming}, Volume = {93}, Year = {2018} }

24. Mixed-Integer Linear Programming Heuristics for the Pre-Pack Optimization Problem. Discrete Optimization 22:195-205, 2016
Abstract:
In this paper we consider a packing problem arising in inventory allocation applications, where the operational cost for packing the bins is comparable to, or even higher than, the cost of the bins (and of the items) themselves. This is the case, for example, of warehouses that have to manage a large number of different customers (e.g., stores), each requiring a given set of items. For this problem, we present Mixed-Integer Linear Programming heuristics based on problem substructures that lead to easy-to-solve and meaningful subproblems, and exploit them within an overall metaheuristic framework. The new heuristics are evaluated on a standard set of instances, and benchmarked against known heuristics from the literature. Computational experiments show that very good (often proven optimal) solutions can consistently be computed in short computing times.
Bibtex
@article{Prepack, Author = {Matteo Fischetti and Michele Monaci and Domenico Salvagnin}, Journal = {Discrete Optimization}, Pages = {195--205}, Title = {Mixed-Integer Linear Programming Heuristics for the Pre-Pack Optimization Problem}, Volume = {22}, Year = {2016} }

25. Branching on multi-aggregated variables. CPAIOR 2015 Proceedings, 141-156
Abstract:
In mixed-integer programming, the branching rule is a key component for fast convergence of the branch-and-bound algorithm. The most common strategy is to branch on simple disjunctions that split the domain of a single integer variable into two disjoint intervals. Multi-aggregation is a presolving step that replaces variables by an affine linear sum of other variables, thereby reducing the problem size. While this simplification typically improves the performance of MIP solvers, it also restricts the degree of freedom in variable-based branching rules. We present a novel branching scheme that tries to overcome the above drawback by considering general disjunctions defined by multi-aggregated variables in addition to the standard disjunctions based on single variables. This natural idea results in a hybrid between variable- and constraint-based branching rules. Our implementation within the constraint integer programming framework SCIP incorporates this into a full strong branching rule and reduces the number of branch-and-bound nodes on a general test set of publicly available benchmark instances. For a specific class of problems, we show that the solving time decreases significantly.
Bibtex
@inproceedings{MultAggr, Author = {Gerald Gamrath and Anna Melchiori and Timo Berthold and Ambros Gleixner and Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {141--156}, Title = {Branching on multi-aggregated variables}, Year = {2015} }

26. Thinning out Steiner trees: A node-based model for uniform edge costs. Mathematical Programming Computation 9:203–229, 2017
Abstract:
The Steiner Tree Problem is a challenging NP-hard problem. Many hard instances of this problem are publicly available and are still unsolved by state-of-the-art branch-and-cut codes. A typical strategy to attack these instances is to enrich the polyhedral description of the problem, and/or to implement more and more sophisticated separation procedures and branching strategies. In this paper we investigate the opposite viewpoint, and try to make the solution method as simple as possible while working on the modeling side. Our working hypothesis is that the extreme hardness of some classes of instances mainly comes from over-modeling, and that some instances can become quite easy to solve when a simpler model is considered. In other words, we aim at “thinning out” the usual models for the sake of getting a more agile framework. In particular, we focus on a model that only involves node variables, which is rather appealing for the “uniform” cases where all edges have the same cost. We show that this model allows one to quickly produce very good (sometimes proven optimal) solutions for notoriously hard instances from the literature. In particular, we report improved solutions for several SteinLib instances, including the (in)famous hypercube ones. Even though we do not claim our approach can work well in all cases, we report surprisingly good results for a number of unsolved instances. In some cases, our approach takes just a few seconds to prove optimality for instances never solved (even after days of computation) by the standard methods.
Bibtex
@article{Steiner, Author = {Matteo Fischetti and Markus Leitner and Ivana Ljubic and Martin Luipersbeck and Michele Monaci and Max Resch and Domenico Salvagnin and Markus Sinnl}, Journal = {Mathematical Programming Computation}, Pages = {203--229}, Title = {Thinning out Steiner trees: A node-based model for uniform edge costs}, Volume = {9}, Year = {2017} }

27. On Handling Indicator Constraints in Mixed Integer Programming. Computational Optimization and Applications, 1–22, 2016
Abstract:
Mixed-Integer Linear Programming (MILP) is commonly used to model indicator constraints, i.e., constraints that either hold or are relaxed depending on the value of a binary variable. Unfortunately, those models tend to lead to weak continuous relaxations and turn out to be unsolvable in practice, as in the case of classification problems with ramp loss functions, which represent an important application in this context. In this paper we provide computational evidence that a relevant class of these classification instances can be solved far more efficiently if a nonlinear, nonconvex reformulation of the indicator constraints is used instead of the linear one. Inspired by this empirical and surprising observation, we show that aggressive bound tightening is the crucial ingredient for solving this class of instances, and we devise a pair of computationally effective algorithmic approaches that exploit it within MILP. More generally, we argue that aggressive bound tightening is often overlooked in MILP, while it represents a significant building block for enhancing MILP technology when indicator constraints and disjunctive terms are present.
Bibtex
@article{Indicator, Author = {Pietro Belotti and Pierre Bonami and Matteo Fischetti and Andrea Lodi and Michele Monaci and Amaya NogalesG\'{o}mez and Domenico Salvagnin}, Journal = {Computational Optimization and Applications}, Pages = {1--22}, Title = {On Handling Indicator Constraints in Mixed Integer Programming}, Year = {2016} }

28. Detecting Semantic Groups in MIP models. CPAIOR 2016 Proceedings, 329–341
Abstract:
Current state-of-the-art MIP technology lacks a powerful modeling language based on global constraints, a tool which has long been standard in constraint programming. In general, even basic semantic information about variables and constraints is hidden from the underlying solver. For example, in a network design model with unsplittable flows, both routing and arc capacity variables could be binary, and the solver would not be able to distinguish between the two semantically different groups of variables by looking at type alone. If available, such a semantic partitioning could be used by different parts of the solver, heuristics in primis, to improve overall performance. In the present paper we describe several heuristic procedures, all based on the concept of partition refinement, to automatically recover semantic variable (and constraint) groups from a flat MIP model. Computational experiments on a heterogeneous testbed of models, whose original higher-level partition is known a priori, show that one of the proposed methods is quite effective.
Bibtex
@inproceedings{Detgroup, Author = {Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {329--341}, Title = {Detecting Semantic Groups in MIP models}, Year = {2016} }
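The partition-refinement idea at the core of this paper can be illustrated with a small sketch. This is not the paper's actual procedure; the variable signatures below (type, set of rows touched) are made up for illustration. The principle: start from a coarse partition of the variables and repeatedly split each class by a key computed from its members.

```python
def refine(partition, key):
    """Split every class of `partition` into subclasses with equal key."""
    result = []
    for cls in partition:
        buckets = {}
        for v in cls:
            buckets.setdefault(key(v), []).append(v)
        result.extend(buckets.values())
    return result

# Toy flat MIP: each variable has a type and the set of rows it appears in.
variables = {
    "x1": ("B", {"cap1"}),
    "x2": ("B", {"cap2"}),
    "y1": ("B", {"cap1", "flow"}),
    "y2": ("B", {"cap2", "flow"}),
}

# By type alone, all four binaries fall into a single class...
partition = refine([list(variables)], lambda v: variables[v][0])
print(partition)  # [['x1', 'x2', 'y1', 'y2']]

# ...but refining by a structural signature (here: how many rows each
# variable touches) separates capacity-like from routing-like variables.
partition = refine(partition, lambda v: len(variables[v][1]))
print(partition)  # [['x1', 'x2'], ['y1', 'y2']]
```

Iterating such refinements with richer keys (objective coefficients, row class membership, and so on) until a fixed point is the general mechanism behind this family of heuristics.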

29. Improving the Randomization Step in Feasibility Pump. SIAM Journal on Optimization 28:355–378, 2018
Abstract:
The Feasibility Pump (FP) is a successful primal heuristic for mixed-integer linear programs (MILP). The algorithm consists of three main components: rounding a fractional solution to a mixed-integer one, projection of infeasible solutions onto the LP relaxation, and a randomization step used when the algorithm stalls. While many generalizations and improvements to the original Feasibility Pump have been proposed, they mainly focus on the rounding and projection steps. We start a more in-depth study of the randomization step in Feasibility Pump. To this end, we propose a new randomization step based on the WalkSAT algorithm for solving SAT instances. First, we provide theoretical analyses that show the potential of this randomization step; to the best of our knowledge, this is the first time a theoretical analysis of the running time of Feasibility Pump or its variants has been conducted. Moreover, we conduct computational experiments incorporating the proposed modification into a state-of-the-art Feasibility Pump code that reinforce the practical value of the new randomization step.
Bibtex
@article{Randfp, Author = {Santanu S. Dey and Andres Iroume and Marco Molinaro and Domenico Salvagnin}, Journal = {SIAM Journal on Optimization}, Number = {1}, Pages = {355--378}, Title = {Improving the Randomization Step in Feasibility Pump}, Volume = {28}, Year = {2018} }
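The flavor of a WalkSAT-inspired randomization step can be conveyed with a toy sketch (an illustration under assumed data structures, not the paper's algorithm): when the pump stalls on a rounded point, pick one violated constraint and flip a binary variable appearing in it, rather than perturbing variables by fractionality alone.

```python
import random

def walksat_perturb(x_round, rows, violated, rng):
    """WalkSAT-style anti-stalling step: choose one violated row and
    flip a binary variable that appears in it."""
    row = rng.choice(violated)       # a constraint violated by x_round
    var = rng.choice(rows[row])      # a binary variable in that row
    x_round[var] = 1 - x_round[var]  # flip it
    return x_round

rng = random.Random(42)
x = {"a": 0, "b": 1, "c": 0}
rows = {"r1": ["a", "b"], "r2": ["b", "c"]}
x_new = walksat_perturb(dict(x), rows, ["r1"], rng)
flips = sum(x[v] != x_new[v] for v in x)
print(flips)  # 1: exactly one variable of the violated row was flipped
```

The key design point, mirrored here, is that the perturbation is guided by constraint violation rather than being a blind random restart.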

30. Ten years of Feasibility Pump, and counting. EURO Journal on Computational Optimization, 2018 (online)
Abstract:
The Feasibility Pump (FP) is probably the best known primal heuristic for mixed-integer programming. The original work by Fischetti, Glover, and Lodi, which introduced the heuristic for 0-1 mixed-integer linear programs, has been succeeded by more than twenty follow-up publications which improve the performance of the FP and extend it to other problem classes. The year 2015 marked the tenth anniversary of the first FP publication. The present paper provides an overview of the diverse Feasibility Pump literature that has been presented over the last decade.
Bibtex
@article{10yearsFP, Author = {Timo Berthold and Andrea Lodi and Domenico Salvagnin}, Journal = {EURO Journal on Computational Optimization}, Title = {Ten years of Feasibility Pump, and counting}, Year = {2018} }

31. Chasing First Queens by Integer Programming. CPAIOR 2018 Proceedings, 232–244
Abstract:
The n-queens puzzle is a well-known combinatorial problem that requires placing n queens on an n × n chessboard so that no two queens can attack each other. Since the 19th century, this problem has been studied by many mathematicians (including Carl Friedrich Gauss) and, more recently, by Edsger Dijkstra, who used it to illustrate a depth-first backtracking algorithm. While finding a solution to the n-queens puzzle is rather straightforward, the problem of counting the number of such solutions is quite challenging and has received some attention in recent years. Very recently, in a private correspondence, Donald E. Knuth pointed us to another very challenging version of the n-queens problem, namely, finding the lexicographically-first (or smallest) feasible solution. Solutions of this type are known in the literature for n ≤ 55, while for some larger chessboards only partial solutions are known. The present paper was motivated by Knuth's question of whether Integer Linear Programming (ILP) can be used to compute solutions for some open instances. We describe alternative ILP-based solution approaches, and show that they are indeed able to compute (sometimes in unexpectedly short computing times) many new lexicographically optimal solutions for n ranging from 56 to 115.
Bibtex
@inproceedings{Queens, Author = {Matteo Fischetti and Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {232--244}, Title = {Chasing First Queens by Integer Programming}, Year = {2018} }
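For intuition, a lexicographically-first solution (in the sense of the row-by-row vector of queen columns) can be computed for small boards by plain depth-first search that always tries the smallest column first; the paper's contribution is ILP-based methods that scale far beyond this kind of sketch.

```python
def first_queens(n):
    """Return the lexicographically smallest column vector cols[row]
    over all n-queens solutions (DFS trying columns in increasing order)."""
    cols = []
    def safe(c):
        row = len(cols)
        # no shared column, no shared diagonal with already placed queens
        return all(c != d and abs(c - d) != row - r
                   for r, d in enumerate(cols))
    def dfs():
        if len(cols) == n:
            return True
        for c in range(n):
            if safe(c):
                cols.append(c)
                if dfs():
                    return True
                cols.pop()
        return False
    dfs()
    return cols

print(first_queens(8))  # [0, 4, 7, 5, 2, 6, 1, 3]
```

Because the search commits to the smallest feasible column at every row before backtracking, the first complete solution it finds is the lexicographic minimum; the exponential blow-up of this enumeration for large n is exactly what motivates the ILP approaches of the paper.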

32. Symmetry Breaking Inequalities from the Schreier-Sims table. CPAIOR 2018 Proceedings, 521–529
Abstract:
We propose a way to derive symmetry breaking inequalities for a MIP model from the Schreier-Sims table of its formulation group. We then show how to consider only the action of the formulation group on a subset of the variables of interest. Computational results show that this can lead to considerable speedups on some classes of models.
Bibtex
@inproceedings{SchreierSims, Author = {Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {521--529}, Title = {Symmetry Breaking Inequalities from the Schreier-Sims table}, Year = {2018} }
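A first-level version of the idea can be sketched as follows (an illustrative simplification: the full method works through the stabilizer chain encoded in the Schreier-Sims table, not just the first base point). Compute the orbit of a base point under the formulation group and add the inequalities x_b >= x_j for every other point j in that orbit; this keeps at least one representative of each symmetric class of solutions, since any solution can be mapped so that the orbit's maximum lands on the base point.

```python
def orbit(point, generators):
    """Orbit of `point` under the permutation group generated by
    `generators` (each generator given as a dict i -> g(i))."""
    seen, todo = {point}, [point]
    while todo:
        p = todo.pop()
        for g in generators:
            q = g[p]
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

# Hypothetical formulation group: a cyclic shift of 4 symmetric variables.
shift = {0: 1, 1: 2, 2: 3, 3: 0}
orb = orbit(0, [shift])
print(sorted(orb))  # [0, 1, 2, 3]

# First-level symmetry breaking inequalities: x_0 >= x_j for j in the orbit.
cuts = [(0, j) for j in sorted(orb) if j != 0]
print(cuts)  # [(0, 1), (0, 2), (0, 3)]
```

Recursing on the stabilizer of the base point, as the Schreier-Sims table supports, yields further inequalities on the remaining variables.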

33. Array Designer: automated optimized array design for functional near-infrared spectroscopy. Neurophotonics 5:519, 2018
Abstract:
The position of each source and detector optode on the scalp, and their relative separations, determine the sensitivity of each functional near-infrared spectroscopy (fNIRS) channel to the underlying cortex. As a result, selecting appropriate scalp locations for the available sources and detectors is critical to every fNIRS experiment. At present, it is standard practice for the user to undertake this task manually: to select what they believe are the best locations on the scalp to place their optodes so as to sample a given cortical region-of-interest (ROI). This process is difficult, time-consuming, and highly subjective. Here, we propose a tool, Array Designer, that is able to automatically design optimized fNIRS arrays given a user-defined ROI and certain features of the available fNIRS device. Critically, the Array Designer methodology is generalizable and will be applicable to almost any subject population or fNIRS device. We describe and validate the algorithmic methodology that underpins Array Designer by running multiple simulations of array design problems in a realistic anatomical model. We believe that Array Designer has the potential to end the need for manual array design, and in doing so save researchers time, improve fNIRS data quality, and promote standardization across the field.
Bibtex
@article{NIRS, Author = {Sabrina Brigadoi and Domenico Salvagnin and Matteo Fischetti and Robert J. Cooper}, Journal = {Neurophotonics}, Pages = {519}, Title = {Array Designer: automated optimized array design for functional near-infrared spectroscopy}, Volume = {5}, Year = {2018} }

34. Some experiments with submodular function maximization via integer programming. CPAIOR 2019 Proceedings, 488–501
Abstract:
Submodular function maximization is a classic problem in optimization, with many real-world applications, such as sensor coverage, location problems, and feature selection, among others. Back in the 1980s, Nemhauser and Wolsey proposed an integer programming formulation for general submodular function maximization. Since the number of constraints in the formulation is exponential in the size of the ground set, a constraint generation technique was proposed. Since then, the method has not been developed further. Given the renewed interest in submodular function maximization in recent years, the constraint generation method has been used as a reference to evaluate both exact and heuristic approaches. However, the outcome of those experiments was that the method is utterly slow in practice, even for small instances. In this paper we propose several algorithmic enhancements to the constraint generation method. Preliminary computational results show that a proper implementation, while still not scalable to big instances, can be significantly faster than the obvious implementation by the book. A comparison with direct mixed-integer linear programming formulations, on some classes of models that admit one, also shows that the submodular framework, in its generality, is clearly slower than dedicated formulations, so it should be used only when those approaches are not viable.
Bibtex
@inproceedings{Submod, Author = {Domenico Salvagnin}, Booktitle = {CPAIOR}, Pages = {488--501}, Title = {Some experiments with submodular function maximization via integer programming}, Year = {2019} }
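As context for the kind of heuristic baseline such exact methods are measured against, here is the classic greedy algorithm for a monotone submodular function (max coverage), which achieves a (1 - 1/e)-approximation under a cardinality constraint. This is an illustrative sketch, not the constraint generation method studied in the paper.

```python
def greedy_max_cover(sets, k):
    """Greedy maximization of the monotone submodular coverage function
    f(S) = |union of the chosen sets|, subject to choosing at most k sets."""
    covered, chosen = set(), []
    for _ in range(k):
        gain = lambda name: len(sets[name] - covered)  # marginal gain
        best = max(sets, key=gain)
        if gain(best) == 0:
            break                     # no marginal improvement left
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
chosen, covered = greedy_max_cover(sets, 2)
print(chosen)  # ['A', 'C']
```

The Nemhauser-Wolsey formulation instead encodes f exactly through exponentially many linear inequalities, one per subset, which is why cuts have to be generated on the fly.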

35. Lagrangian Decomposition for Optimal Cost Partitioning. ICAPS 2019 Proceedings, 338–347
Abstract:
Optimal cost partitioning of classical planning heuristics has been shown to lead to excellent heuristic values but is often prohibitively expensive to compute. Lagrangian decomposition and Lagrangian relaxation are classical tools in mathematical programming that apply to optimization problems with a special block structure. We analyze the application of Lagrangian decomposition to cost partitioning in the context of operator-counting heuristics and interpret Lagrangian multipliers as cost functions for the combined heuristics. This allows us to view the computation of an optimal cost partitioning as an iterative process that can be seeded with any cost partitioning and improves over time. For non-negative cost partitioning, we derive an anytime algorithm to compute an optimal cost partitioning of abstraction heuristics without involving an LP solver. In each iteration, the computation reduces to independent shortest path problems in all abstractions. Finally, we discuss the extension to general cost functions.
Bibtex
@inproceedings{CostPart, Author = {Florian Pommerening and Gabriele R{\"o}ger and Malte Helmert and Hadrien Cambazard and Louis-Martin Rousseau and Domenico Salvagnin}, Booktitle = {ICAPS}, Pages = {338--347}, Title = {Lagrangian Decomposition for Optimal Cost Partitioning}, Year = {2019} }

36. MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming Library. Mathematical Programming Computation 13:443–490, 2021
Abstract:
We report on the selection process leading to the sixth version of the Mixed Integer Programming Library. Selected from an initial pool of over 5,000 instances, the new MIPLIB 2017 collection consists of 1,065 instances. A subset of 240 instances was specially selected for benchmarking solver performance. For the first time, the compilation of these sets was done using a datadriven selection process supported by the solution of a sequence of mixed integer optimization problems, which encoded requirements on diversity and balancedness with respect to instance features and performance data.
Bibtex
@article{MIPLIB2017, author = {Ambros Gleixner and Gregor Hendel and Gerald Gamrath and Tobias Achterberg and Michael Bastubbe and Timo Berthold and Philipp Christophel and Kati Jarck and Thorsten Koch and Jeff Linderoth and Marco L{\"u}bbecke and Hans D. Mittelmann and Derya Ozyurt and Ted Ralphs and Domenico Salvagnin and Yuji Shinano}, journal = {Mathematical Programming Computation}, pages = {443--490}, title = {MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming Library}, volume = {13}, year = {2021} }

37. An Exploratory Computational Analysis of Dual Degeneracy in Mixed-integer Programming. EURO Journal on Computational Optimization, 1–21, 2020
Abstract:
Dual degeneracy, i.e., the presence of multiple optimal bases of a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken, and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: does it increase or decrease when variables are fixed? Can we identify different types of degenerate MIPs? Along the way, we introduce a new measure for dual degeneracy: the variable-constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals — the projections of the optimal face of the LP relaxations onto the individual variables — evolve during tree search, and the implications for reducing the set of branching candidates.
Bibtex
@article{DualDegen, Author = {Gerald Gamrath and Timo Berthold and Domenico Salvagnin}, Journal = {EURO Journal on Computational Optimization}, Pages = {1--21}, Title = {An exploratory computational analysis of dual degeneracy in mixed-integer programming}, Year = {2020} }

38. Implementing Automatic Benders Decomposition in a Modern MIP Solver. IPCO 2020 Proceedings, 78–90
Abstract:
We describe the automatic Benders decomposition implemented in the commercial solver IBM CPLEX. We propose several improvements to the state of the art along two lines: making the method numerically robust and able to deal with the general case, and improving its efficiency on models amenable to decomposition. For the former, we deal with unboundedness, failures in generating cuts, and scaling of the artificial variable representing the objective. For the latter, we propose a new technique to handle so-called generalized bound constraints, and we use different types of normalization conditions in the Cut Generating LPs. We present computational experiments aimed at assessing the importance of the various enhancements. In particular, on our test bed of models amenable to a decomposition, our implementation is approximately 5 times faster than CPLEX default branch-and-cut. A remarkable result is that, on the same test bed, default branch-and-cut is faster than a Benders decomposition that does not implement our improvements.
Bibtex
@inproceedings{CpxBenders, Author = {Pierre Bonami and Domenico Salvagnin and Andrea Tramontani}, Booktitle = {IPCO 2020 Proceedings}, Publisher = {Springer}, Pages = {78--90}, Title = {Implementing Automatic Benders Decomposition in a Modern MIP Solver}, Year = {2020} }

39. Incorporating Sparse and Quantized Carbohydrates Suggestions in Model Predictive Control for Artificial Pancreas in Type 1 Diabetes. IEEE Transactions on Control Systems Technology, 2022
Abstract:
People with type 1 diabetes (T1D) are constantly faced with the challenge of taking exogenous insulin to maintain blood glucose (BG) levels in a safe physiological range, so as to avoid (possibly severe) complications. By automating insulin infusion, the so-called Artificial Pancreas (AP) assists patients in this challenge. While insulin can decrease BG, another input inducing a glucose increase could further improve glycemic control and reduce the risk of dangerous hypoglycemic events. Here we develop a model predictive control (MPC) algorithm that, in addition to insulin infusion, also provides suggestions of carbohydrates (CHO) as a second, glucose-increasing, input to control BG. Since CHO consumption has to be manually actuated, great care is taken to limit the extra burden that may be placed on patients. By resorting to a mixed logical-dynamical MPC formulation, CHO intake is designed to be sparse in time and quantized. The algorithm is validated on the UVa/Padova T1D simulator, a well-established large-scale model of T1D metabolism approved by the FDA. Compared to an insulin-only MPC, the coordinated use of CHO and insulin performed by the new algorithm increases the time spent in the safe physiological range for 75% of patients. The improvement is limited for those already effectively controlled by the state-of-the-art strategy but relevant for the others: the 25th percentile of this metric increases from 74.75% to 79.06% in the population. This is achieved while simultaneously decreasing the time spent in hypoglycemia (from 0.5% to 0.12% in median) and with limited manual interventions (2.86 per day in median).
Bibtex
@article{Glucose, author = {Jacopo Pavan and Domenico Salvagnin and Andrea Facchinetti and Giovanni Sparacino and Simone Del Favero}, journal = {IEEE Transactions on Control Systems Technology}, pages = {10}, title = {Incorporating Sparse and Quantized Carbohydrates Suggestions in Model Predictive Control for Artificial Pancreas in Type 1 Diabetes}, year = {2022} }

40. Transferring Information across Restarts in MIP. CPAIOR 2022 Proceedings, 24–33
Abstract:
Restarting a solver gives us the chance to learn from things that went well or badly in the search up to the restart point. The benefits of restarts are often justified by being able to employ different, better strategies and explore different, more promising parts of the search space. In that light, it is an interesting question whether carrying over detected structures and collected statistics across a restart benefits the subsequent search, or rather counteracts the anticipated diversification from the previous, unsuccessful search. In this paper, we discuss four different types of global information that can potentially be reused after a restart of a mixed-integer programming (MIP) solver, present technical details of how to carry them through a represolve after a restart, and show how such an information transfer can help to speed up the state-of-the-art commercial MIP solver FICO Xpress by 7% on the instances where a restart is performed.
Bibtex
@inproceedings{restartTransfer, author = {Timo Berthold and Gregor Hendel and Domenico Salvagnin}, booktitle = {CPAIOR}, pages = {24--33}, title = {Transferring Information across Restarts in MIP}, year = {2022} }

41. Using Multiple Reference Vectors and Objective Scaling in the Feasibility Pump. EURO Journal on Computational Optimization, 1–18, 2023
Abstract:
The Feasibility Pump (FP) is one of the best-known primal heuristics for mixed-integer programming (MIP): more than 15 papers have suggested various modifications of all of its steps. So far, no variant has considered information across multiple iterations; all instead maintained the principle of optimizing towards a single reference integer point. In this paper, we evaluate the use of multiple reference vectors in all stages of the FP algorithm. In particular, we use LP-feasible vectors obtained during the main loop to tighten the variable domains before entering the computationally expensive enumeration stage. Moreover, we consider multiple integer reference vectors to explore further optimizing directions and introduce alternative objective scaling terms to balance the contributions of the distance functions and the original MIP objective. Our computational experiments demonstrate that the new method can improve performance on general MIP test sets. In detail, our modifications provide a 29.3% solution quality improvement and a 4.0% running time improvement in an embedded setting, needing 16.0% fewer iterations over a large test set of MIP instances. In addition, the method's success rate increases considerably within the first few iterations. In a standalone setting, we also observe a moderate performance improvement, which makes our version of FP suitable for the two main use cases of the algorithm.
Bibtex
@article{multifp, author = {Gioni Mexi and Timo Berthold and Domenico Salvagnin}, journal = {EURO Journal on Computational Optimization}, pages = {1--18}, title = {Using Multiple Reference Vectors and Objective Scaling in the Feasibility Pump}, volume = {11}, year = {2023} }

42. A fix-propagate-repair heuristic for Mixed Integer Programming (submitted)
Abstract:
We describe a diving heuristic framework based on constraint propagation for mixed-integer linear programs. The proposed approach is an extension of the common fix-and-propagate scheme, with the addition of solution repair after each step. The repair logic is loosely based on the WalkSAT strategy for Boolean satisfiability. Different strategies for variable ranking and value selection, as well as other options, yield different diving heuristics. The overall method is relatively inexpensive, as it is basically LP-free: the full linear programming relaxation is solved only at the beginning, and only for the ranking strategies that make use of it, while additional (typically much smaller) LPs are used only to compute values for the continuous variables (if any), once at the bottom of a dive. While individual strategies are not very robust in finding feasible solutions on a heterogeneous testbed, a portfolio approach proved quite effective. In particular, it consistently found feasible solutions for 189 out of 240 instances from the public MIPLIB 2017 benchmark testbed, within a few seconds of running time.
Bibtex

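The overall scheme can be sketched on a toy set-covering style model (an illustrative simplification under assumed data structures, with repair deferred to the end of the dive rather than applied after each fixing step as in the paper):

```python
import random

def fix_propagate_repair(clauses, order, rng, max_flips=100):
    """Dive on binary variables: tentatively fix each variable to 0 in
    ranking order, propagate covering clauses (a clause with a single
    free variable and all other members at 0 forces that variable to 1),
    then repair any still-violated clause WalkSAT-style."""
    x = {}
    for v in order:
        x.setdefault(v, 0)                       # tentative fix to 0
        for cl in clauses:                       # constraint propagation
            free = [u for u in cl if u not in x]
            if len(free) == 1 and all(x[u] == 0 for u in cl if u in x):
                x[free[0]] = 1
    for _ in range(max_flips):                   # repair phase
        violated = [cl for cl in clauses if all(x[u] == 0 for u in cl)]
        if not violated:
            return x
        x[rng.choice(rng.choice(violated))] = 1  # flip one variable to 1
    return None                                  # give up after max_flips

clauses = [["a", "b"], ["b", "c"], ["a", "c"]]
sol = fix_propagate_repair(clauses, ["a", "b", "c"], random.Random(0))
print(sol)  # {'a': 0, 'b': 1, 'c': 1}
```

In this sketch the variable ranking is just the given order and values are always tried at 0 first; swapping in different ranking and value-selection rules is what generates the portfolio of diving heuristics described in the abstract.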

43. A Strategy Based on Integer Programming for Determining the Optimal Dosing and Timing of Hypotreatments in Type 1 Diabetes Management. Computer Methods and Programs in Biomedicine, 250:1–11, 2024
Abstract:
Background and objectives: One of the major problems related to type 1 diabetes (T1D) management is hypoglycemia, a condition characterized by low blood glucose levels and responsible for reduced quality of life and increased mortality. Fast-acting carbohydrates, also known as hypoglycemic treatments (HT), can counteract this event. In the literature, the dosage and timing of HT are usually based on heuristic rules. In the present work, we propose an algorithm for mitigating hypoglycemia by suggesting preventive HT consumption, with dosages and timing determined by solving an optimization problem.
Methods: By leveraging integer programming and linear inequality constraints, the algorithm can restrict the amount of suggested carbohydrates to standardized quantities (i.e., those available in “off-the-shelf” HT) and enforce a minimal distance between consecutive suggestions (to reduce the nuisance for patients).
Results: The proposed method was tested in silico and compared with competitor algorithms using the UVa/Padova T1D simulator. At the cost of a slight increase of HT consumed per day, the proposed algorithm produces the lowest median and interquartile range of the time spent in hypoglycemia, with a statistically significant improvement over most competitor algorithms. Also, the average number of hypoglycemic events per day is reduced to 0 in median.
Conclusions: Thanks to its positive performances and reduced computational burden, the proposed algorithm could be a candidate tool for integration in a DSS aimed at improving T1D management.
Bibtex
@article{Glucose2, author = {Jacopo Pavan and Giulia Noaro and Andrea Facchinetti and Domenico Salvagnin and Giovanni Sparacino and Simone Del Favero}, journal = {Computer Methods and Programs in Biomedicine}, pages = {1--11}, title = {A Strategy Based on Integer Programming for Determining the Optimal Dosing and Timing of Hypotreatments in Type 1 Diabetes Management}, volume = {250}, year = {2024} }

44. An Improved Compact Formulation for the Assortment Optimization Problem with Small Consideration Sets (submitted)
Abstract:
Bibtex


45. Models and Algorithms for the Time Window Assignment Traveling Salesperson Problem with Stochastic Travel Times (submitted)
Abstract:
Bibtex
