Algorithm Selection
Contents
 1 Preprocessing
 2 Debugging
 3 Numeric estimation
 4 SOCP barrier search
 5 Evolutionary search controls
 6 Mixed-integer controls
 6.1 Integer branch and bound
 6.2 Cut generation control
 6.2.1 MaxRootCutPasses
 6.2.2 MaxTreeCutPasses
 6.2.3 GomoryCuts
 6.2.4 MaxGomoryCuts
 6.2.5 GomoryPasses
 6.2.6 KnapsackCuts
 6.2.7 MaxKnapsackCuts
 6.2.8 KnapsackPasses
 6.2.9 ProbingCuts
 6.2.10 OddHoleCuts
 6.2.11 MirCuts, TwoMirCuts
 6.2.12 RedSplitCuts
 6.2.13 SOSCuts
 6.2.14 FlowCoverCuts
 6.2.15 CliqueCuts
 6.2.16 RoundingCuts
 6.2.17 LiftAndCoverCuts
 6.3 Coping with local optima
 7 See Also
Preprocessing
Scaling
When this is True, the Optimizer attempts to rescale decision variables and constraints internally for the simplex algorithm, which usually leads to more reliable results and fewer iterations. A poorly scaled model, in which values of the objective, constraints, or intermediate results differ by several orders of magnitude, can cause numeric instabilities within the Optimizer when scaling is turned off, due to the effects of finite-precision computer arithmetic.
Default: False
Allowed range: 0 or 1
Presolve
When this is True, the LP/Quadratic engine performs a presolve step to detect singleton rows and columns, remove fixed variables and redundant constraints, and tighten bounds, prior to applying the simplex method.
Default: True
Allowed range: 0 or 1
Engine: LP/Quadratic
PreProcess
Turns on or off all integer preprocessing (on by default).
Default: 1
Allowed range: 0 or 1
Engine: LP/Quadratic
StrongBranching
This setting applies to integer and mixed-integer problems. When it is on, the Optimizer estimates the impact on the objective function of branching on each integer variable before beginning the branch-and-bound search. It does this by performing a few iterations of the dual simplex method after fixing each variable. This “experiment” gives the search an estimate of which integer variables are likely to be the most effective branching choices. Although the time spent in this estimation process can be moderately expensive, the cost is often regained many times over through a reduction in the number of branch-and-bound iterations needed to find an optimal integer solution.
Default: 1
Allowed range: 0 or 1
Engine: LP/Quadratic
Debugging
SolveWithout
Means "Solve Without Integer Constraints." When this is True, any integer domain constraints are ignored, and the continuous version of the problem is solved instead. The effect is the same as changing the domain to Continuous while leaving the variable bounds in place, but it can be more convenient in some cases when debugging.
Default: 0
Allowed range: 0 or 1
IISBounds
Determines whether variable bounds should be included in the infeasibility search conducted by OptFindIIS() or OptWriteIIS(). When set to 1, only a subset of the scalar constraints along the .ConstraintVector index is considered. When set to 0, variable bounds can be eliminated in order to find an IIS with a greater number of constraints. This parameter is used by OptFindIIS() only when the second optional parameter, «newLp», is True. When «newLp» is True, OptFindIIS() returns a new LP object, from which you can use OptInfo() to access the list of constraints and the list of variable bounds present in the IIS. When «newLp» is False, since only a subset of the .ConstraintVector index is returned, OptFindIIS() relaxes only constraints, leaving variable bounds intact.
Default: 0
Allowed range: 0 or 1
Numeric estimation
Derivatives
The Derivatives setting controls how derivatives are computed. These values are possible:
1 = forward
: This is the default if Jacobian and gradient parameters are not supplied. The Optimizer estimates derivatives using forward differencing, i.e.,
 $ \frac{\partial f}{\partial x} \approx \frac{f(x+\Delta) - f(x)}{\Delta} $
2 = central
: The Optimizer estimates derivatives using central differencing, i.e.,
 $ \frac{\partial f}{\partial x} \approx \frac{f(x+\Delta) - f(x-\Delta)}{2\Delta} $
3 = jacobian
: The Optimizer computes derivatives using the supplied Jacobian and gradient expressions. This is the default if these are supplied.
4 = check
: The Optimizer computes derivatives using the supplied Jacobian expression and also estimates the Jacobian using finite differencing. If they don’t agree to within a small tolerance, the optimization aborts with OptStatusNum = 67 (“error in evaluating problem functions”). This option is useful for testing whether the Jacobian is accurate.
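The two finite-difference schemes above can be sketched in a few lines of Python (an illustration of the formulas, not the Optimizer's internal code):

```python
def forward_diff(f, x, delta=1e-6):
    """Forward differencing: one extra evaluation of f per variable,
    with truncation error proportional to delta."""
    return (f(x + delta) - f(x)) / delta

def central_diff(f, x, delta=1e-6):
    """Central differencing: two extra evaluations per variable, but
    truncation error proportional to delta squared."""
    return (f(x + delta) - f(x - delta)) / (2.0 * delta)

# The exact derivative of x^2 at x = 3 is 6.
f = lambda x: x * x
print(forward_diff(f, 3.0))  # approximately 6, biased by about delta
print(central_diff(f, 3.0))  # approximately 6, much smaller bias
```

The `delta` here plays the role of the StepSize setting described below; central differencing costs twice as many function evaluations per variable but is more accurate.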
StepSize
The step size used to estimate derivatives numerically. This is the value of Δ in the estimates listed in the preceding Derivatives description.
Default: 10^{-6}
Allowed range: 10^{-9} to 10^{-4}
SearchOption
Controls how the gradient-based search determines the next point to jump to during the search:

0 = Newton
: Uses a quasi-Newton method, maintaining an approximate Hessian matrix for the reduced gradient function.
1 = Conjugate gradient
: Uses a conjugate gradient method, which does not require the Hessian.
Default: 0
Allowed range: 0 or 1
Estimates
The Estimates setting controls the method used to estimate the initial values for the basic decision variables at the beginning of each one-dimensional line search:

0 = linear
: Uses linear extrapolation from the line tangent to the reduced objective function.
1 = quadratic
: Extrapolates to the extremum of a quadratic fitted to the reduced objective at the current point.
Default: 0
Allowed range: 0 or 1
RecognizeLinear
When set to 1, the Optimizer attempts to automatically detect decision variables that influence the objective and constraints in a linear fashion. It can then save time by precomputing partial derivatives for these variables for the rest of the search. This aggressive strategy can cause problems when a dependence changes dramatically across the search space, particularly when a decision variable is nearly linear around the starting point but its gradient changes elsewhere in the search space. When the solution is reached, the Optimizer recomputes the derivatives and verifies them against the assumed values. If they do not agree, the status text “The linearity conditions required by this solver engine are not satisfied” is returned.
Engine: GRG Nonlinear
Default: 0 (off)
Allowed range: 0 or 1
SOCP barrier search
In addition to the many search control settings available for linear programs (covered in the previous chapter), a few additional settings can be used to control the search when solving quadratically constrained problems with the SOCP Barrier engine. These parameters are set using the «settingName» and «settingValue» parameters to DefineOptimization, as described in Specifying Settings.
SearchDirection
Controls the search direction on each iteration of the SOCP Barrier engine. The power class method is a long-step barrier technique with polynomial complexity. The dual scaling method uses HKM (Helmberg, Kojima, and Monteiro) dual scaling, in which a Newton direction is found from the linearization of a symmetrized version of the optimality conditions. Either of these can be further modified by a predictor-corrector term.
Default: 0 (off)
Allowed range: 1 = Power class, 2 = Power class with predictor-corrector, 3 = dual scaling, or 4 = dual scaling with predictor-corrector.
Engine: SOCP Barrier
PowerIndex
This parameter is used to select a particular search direction when SearchDirection is set to 1 or 2.
Default: 1
Allowed range: nonnegative integer
Engine: SOCP Barrier
StepSizeFactor
The relative step size (between 0 and 1) that the SOCP Barrier engine can take towards the constraint boundary at each iteration.
Default: 0.99
Allowed range: 0.00 to 0.99
Engine: SOCP Barrier
GapTolerance
The SOCP Barrier Solver uses a primaldual method that computes new objective values for the primal problem and the dual problem at each iteration. When the gap or difference between these two objective values is less than the gap tolerance, the SOCP Barrier Solver stops and declares the current solution optimal.
Engine: SOCP Barrier
Default: 10^{-6}
Allowed range: 0 to 1
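The stopping test can be sketched as follows; the relative normalization used here is an assumption for illustration, not the engine's documented formula:

```python
def gap_converged(primal_obj, dual_obj, tol=1e-6):
    """Declare the current solution optimal when the primal-dual gap
    falls below tol. The gap is measured relative to the objective
    magnitude (an assumed normalization) so the test is
    scale-independent."""
    gap = abs(primal_obj - dual_obj)
    return gap / max(1.0, abs(primal_obj)) < tol

print(gap_converged(100.0, 99.9))       # large gap: keep iterating
print(gap_converged(100.0, 100.00001))  # tiny gap: stop and declare optimal
```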
FeasibilityTolerance
The SOCP Barrier engine considers a solution feasible when the constraints are satisfied to within this relative tolerance.
Engine: SOCP Barrier
Default: 10^{-6}
Allowed range: 0 to 1
Evolutionary search controls
PopulationSize
Controls the population size of candidate solutions maintained by the Evolutionary engine, or the number of starting points for MultiStart in the GRG Nonlinear engine. MultiStart has a minimum population size of 10. If you specify 0, or any number smaller than 10, then the number of starting points used is 10 times the number of decision variables, but no more than 200.
Engine: GRG Nonlinear, Evolutionary
Default: 0 (automatic)
Allowed range: 0, or integer >= 10
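The sizing rule above can be expressed as a small helper (a sketch of the documented rule; the function name is hypothetical):

```python
def effective_population_size(setting, num_decision_vars):
    """PopulationSize values below 10 (including 0) fall back to the
    automatic rule: 10 starting points per decision variable, capped
    at 200, and never fewer than the MultiStart minimum of 10."""
    if setting >= 10:
        return setting
    return max(10, min(200, 10 * num_decision_vars))

print(effective_population_size(0, 5))    # automatic: 10 * 5 = 50
print(effective_population_size(0, 40))   # automatic, capped at 200
print(effective_population_size(25, 40))  # explicit setting wins: 25
```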
MutationRate
The probability that the Evolutionary Optimizer engine, on one of its major iterations, will attempt to generate a new point by “mutating” or altering one or more decision variable values of a current point in the population of candidate solutions.
Engine: Evolutionary
Default: 0.075
Allowed range: 0 to 1
ExtinctionRate
This determines how often the Evolutionary engine throws out its entire population, except for the very best candidate solutions, and starts over from scratch.
Engine: Evolutionary
Default: 0.5
Allowed range: 0 to 1
RandomSeed
Both engines use a pseudorandom component in their search for an optimum. Thus, the final result can differ each time an optimization of the exact same problem is performed. By setting the random seed, you can ensure that the same sequence of pseudorandom numbers is used, so that the same result is obtained every time the same problem is reevaluated. If you do not specify the random seed, Analytica uses its internal random seed, so that when you first load a model and evaluate results in a fixed order, you get a predictable result. Setting RandomSeed to 0 causes the pseudorandom generator to be seeded from the system clock. Any positive value sets the initial seed to a fixed number.
Engine: GRG Nonlinear, Evolutionary
Default: (use Analytica’s random seed)
Allowed range: nonnegative integer
Feasibility
When set to 1, the Evolutionary engine throws out all infeasible points and keeps only feasible points in its population. When set to 0, it accepts infeasible points into the population with a high penalty in the fitness score, which tends to be useful when the engine has a hard time finding feasible points.
Default: 0
Allowed range: 0 or 1
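The penalty alternative (Feasibility = 0) can be sketched for a minimization problem as follows; the penalty weight and violation measure are illustrative assumptions, not the engine's actual formula:

```python
def fitness(objective, violation, penalty_weight=1e6):
    """Penalized score for a minimization problem: infeasible points
    (violation > 0) stay in the population but score so poorly that
    feasible points are strongly preferred. penalty_weight is an
    assumed constant, not the engine's actual value."""
    return objective + penalty_weight * violation

# A feasible point with a worse objective still outranks a slightly
# infeasible point with a better objective.
print(fitness(12.0, 0.0) < fitness(10.0, 0.01))  # True
```

Keeping penalized infeasible points lets the search exploit information near the feasible boundary instead of discarding it outright.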
LocalSearch
Selects the local search strategy employed by the Evolutionary engine. In one step, or generation, of the algorithm, a possible mutation and a crossover occur, followed in some cases by a local search, followed by elimination of unfit members of the population. This parameter controls the method used for this local search. The decision whether to apply a local search at a given generation is determined by two tests. First, the objective value for the starting point must exceed a certain threshold, and second, the point must be sufficiently far from any already identified local optimum. The threshold is based on the best objective found so far, but is adjusted dynamically as the search proceeds. The distance threshold is based on the distance traveled on previous occasions when the local optimum was reached.
There is a computational tradeoff between the amount of time spent in local searches versus the time spent in more global search. The value of local searches depends on the nature of your problem. Roughly speaking, the Randomized method is the least expensive and the Gradient method the most expensive (i.e., more time is devoted to local searches rather than global search).
Engine: Evolutionary
Default: 0
Allowed range: 0 to 3

1 = Randomized Local Search
: Generates a small number of new trial points in the vicinity of the just-discovered “best” solution. Improved points are accepted into the population.
2 = Deterministic Pattern Search
: Uses a deterministic “pattern search” method to seek improved points in the vicinity of the just-discovered “best” solution. It does not make use of the gradient, and so is effective for non-smooth functions.
3 = Gradient Local Search
: Uses a quasi-Newton gradient descent search to locate an improved point to add to the population.
FixNonSmooth
Determines how non-smooth variables are handled during the local search step. If set, only linear and smooth nonlinear variables are allowed to vary during the local search. Because gradients often exist at most points, even for discontinuous variables, leaving this off can still yield useful information in spite of the occasional invalid gradient.
Engine: Evolutionary
Default: 0
Allowed range: 0 or 1
Mixed-integer controls
Integer branch and bound
IntCutoff
If you can correctly bound the objective function value of the optimal solution in advance, this can drastically reduce the computation time for MIP problems, since it allows the branch-and-bound algorithm to prune entire branches from the search space without having to explore them at all. For a maximization problem, specify a lower bound; for a minimization problem, specify an upper bound. If you specify this parameter, you need to be sure that there is an integer solution with an objective value at least this good; otherwise the Optimizer might skip over, and thus never find, an optimal integer solution.
Default: no bounding
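The pruning logic this enables can be sketched as follows (a hypothetical helper, not part of Analytica's API):

```python
def can_prune(relaxation_bound, cutoff, maximize):
    """A branch-and-bound node can be discarded without exploring it
    when the bound from its LP relaxation is no better than the
    user-supplied cutoff: no integer solution in that subtree can
    beat the solution the cutoff guarantees exists."""
    if maximize:
        return relaxation_bound <= cutoff  # cutoff is a lower bound
    return relaxation_bound >= cutoff      # cutoff is an upper bound

# Maximization with a guaranteed integer solution of value 100:
print(can_prune(95.0, 100.0, maximize=True))   # subtree pruned
print(can_prune(120.0, 100.0, maximize=True))  # must still be explored
```

This is why an overly optimistic cutoff is dangerous: a subtree containing the true optimum would be pruned by the same test.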
UseDual
When True, the LP/Quadratic engine uses the dual simplex method, starting from an advanced basis, to solve subproblems generated by the branch-and-bound method. When False, it uses the primal simplex method to solve subproblems. Use of dual simplex often speeds up the solution of mixed-integer problems.
The subproblems of an integer programming problem are based on the relaxation of the problem, but have additional or tighter bounds on the variables. The solution of the relaxation (or of a more direct “parent” of the current subproblem) provides an “advanced basis” that can be used as a starting point for solving the current subproblem, potentially in fewer iterations. This basis might not be primal feasible due to the additional or tighter bounds on the variables, but it is always dual feasible. Because of this, the dual simplex method is usually faster than the primal simplex method when starting from an advanced basis.
Default: 2
Allowed range: 1 = Primal, 2 = Dual
ProbingFeasibility
Probing is a preprocessing step during which the solver attempts to deduce the values of certain binary integer variables from the settings of others, prior to actually solving a subproblem. While solving a mixed-integer problem, probing can be performed on each subproblem before running a constrained simplex. As branch-and-bound fixes one variable to a specific binary value, the values of other binary variables can become determined. In some cases, probing can identify infeasible subproblems even before solving them. In certain types of constraint satisfaction problems, probing can reduce the number of subproblems by orders of magnitude.
Default: 0
Allowed range: 0 or 1
BoundsImprovement
This strategy attempts to tighten bounds on variables that are not 0-1 (binary) variables, based on values that have been derived for binary variables, before subproblems are solved.
Default: 0
Allowed range: 0 or 1
OptimalityFixing
This strategy attempts to fix the values of binary integer variables before each subproblem is solved, based on the signs of coefficients in the objective and constraints. As with BoundsImprovement and ProbingFeasibility, this can result in faster pruning of branches by the branch-and-bound search; however, in some cases optimality fixing can yield incorrect results. Specifically, OptimalityFixing produces incorrect results when the set of inequalities implies an equality constraint. Here is an example:
Constraint Ct1 := X + 2*Y + 3*Z <= 10
Constraint Ct2 := X + 2*Y + 3*Z >= 10
Together these imply the equality constraint X + 2*Y + 3*Z = 10. You must also watch out for more subtle implied equalities, such as cases where the value of a variable can be deduced from the inequalities. Such equalities must be represented explicitly as equalities for OptimalityFixing to work correctly.
Default: 0
Allowed range: 0 or 1
PrimalHeuristic
This strategy attempts to discover a feasible integer solution early in the branch-and-bound process by using a heuristic method. The specific heuristic used by the LP simplex solver is one that has been found to be quite effective in the “local search” literature, especially on 0-1 integer programming problems, but it is not guaranteed to find a feasible integer solution in all cases. If the heuristic succeeds, branch-and-bound starts with a big advantage, allowing it to prune branches early. If the heuristic fails, branch-and-bound begins as it normally would, but with no special advantage, and the time spent on the heuristic is wasted.
Default: 0
Allowed range: 0 or 1
LocalHeur, RoundingHeur, LocalTree
These strategies look for possible integer solutions in the vicinity of a known integer solution by applying a local heuristic (a “local search heuristic” or “rounding heuristic”) that adjusts the values of individual integer variables. As with PrimalHeuristic, finding an integer solution early can improve the bounds used by the search, and thus prune off portions of the search tree.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
FeasibilityPump
An incumbent-finding heuristic used by branch-and-bound to find good incumbents quickly.
Engine: LP/Quadratic
Default: 1
Allowed range: 0 or 1
GreedyCover
Another incumbent-finding heuristic used by branch-and-bound to find good incumbents quickly.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
Cut generation control
Cut generation options are available for the LP simplex method and are used when solving integer or mixed-integer LP problems.
A cut is an automatically generated constraint that “cuts off” some portion of the feasible region of an LP subproblem without eliminating any possible integer solutions. Many different cut methods are available, each capable of identifying a different form of constraint among integer variables that can be leveraged to quickly reduce the feasible set, and thus prune the branch-and-bound search tree. However, each of these methods requires a certain amount of work to identify cut opportunities, and when no opportunities are found, that effort is wasted. The defaults represent a reasonable tradeoff for most problems, but for hard integer problems you can experiment with these settings to find what works best for your own problem. You might find that some methods are more effective than others on your particular problem.
MaxRootCutPasses
Controls the maximum number of cut passes carried out immediately after the first LP relaxation is solved. This has an effect only if at least one of the cut method options is on. If this is set to a value of -1, the number of passes is determined automatically. The MaxTreeCutPasses setting is used for all iterations after the first.
Engine: LP/Quadratic
Default: -1 (automatically determined)
Allowed range: -1 or more
MaxTreeCutPasses
Controls the maximum number of cut passes carried out at each step of the solution process with the exception of the first cycle. This setting is used only if at least one cut method is on. Each time a cut is added to a problem, this can produce further opportunities for additional cuts, hence cuts can continue to be added until no more cuts are possible, or until this maximum bound is reached.
Engine: LP/Quadratic
Default: 10
Allowed range: 0 or more
GomoryCuts
Gomory cuts are generated by examining the inverse basis of the optimal solution to a previously solved LP relaxation subproblem. The technique is sensitive to numeric rounding errors, so when it is used, it is important that your problem be well-scaled. It is recommended that you set the Scaling setting to 1 when using Gomory cuts.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
MaxGomoryCuts
The maximum number of Gomory cuts to introduce into a given subproblem.
Default: 20
Allowed range: nonnegative
GomoryPasses
The number of passes to make over a given subproblem looking for possible Gomory cuts. Each added cut can present opportunities for new cuts. It is actually possible to solve an LP/MIP problem simply by making continual Gomory passes until the problem is solved, but this is typically less efficient than branch and bound; however, that can differ from problem to problem.
Default: 1
Allowed range: nonnegative
KnapsackCuts
Knapsack cuts, also called lifted cover inequalities, are used only with grouped-integer variables (whereas Gomory cuts can be used with any integer variable type). This setting controls whether knapsack cuts are used.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
MaxKnapsackCuts
The maximum number of knapsack cuts to introduce into a given subproblem.
Default: 20
Allowed range: nonnegative
KnapsackPasses
The number of passes the solver should make over a given subproblem, looking for knapsack cuts.
Default: 1
Allowed range: nonnegative
ProbingCuts
Controls whether probing cuts are generated. Probing involves setting certain binary integer variables to 0 or 1 and deriving values for other binary integer variables, or tightening bounds on the constraints.
Engine: LP/Quadratic
Default: 1
Allowed range: 0 or 1
OddHoleCuts
Controls whether odd hole cuts (also called odd cycle cuts) are generated. This uses a method due to Grötschel, Lovász, and Schrijver that applies only to constraints that are sums of binary variables.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
MirCuts, TwoMirCuts
Controls whether mixed-integer rounding (MIR) cuts and two-step MIR cuts are generated.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
RedSplitCuts
Reduce-and-split cuts are a variant of Gomory cuts.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
SOSCuts
Special ordered sets (SOS) refer to constraints consisting of a sum of binary variables equal to 1. These arise commonly in certain types of problems. In any feasible solution, exactly one of the variables in such a constraint must be 1 and all the others 0, so that only n permutations need to be considered, rather than 2^{n}.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
FlowCoverCuts
Controls whether flow cover cuts are used.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
CliqueCuts
Controls whether clique cuts can be used, using a method due to Hoffman and Padberg. Both row clique cuts and star clique cuts are generated.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
RoundingCuts
A rounding cut is an inequality over all integer variables, formed by removing any continuous variables, dividing through by the greatest common divisor of the coefficients, and rounding down the right-hand side.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
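For a pure-integer inequality sum(a_i * x_i) <= b, the arithmetic can be sketched as follows (a sketch for integer coefficients, not the engine's code):

```python
from math import gcd
from functools import reduce

def rounding_cut(coeffs, rhs):
    """Derive a rounding cut from an integer inequality
    sum(a_i * x_i) <= b: divide through by g = gcd of the
    coefficients, then round the right-hand side down. This is valid
    because the left-hand side remains integral after dividing."""
    g = reduce(gcd, coeffs)
    return [c // g for c in coeffs], rhs // g  # floor division rounds down

# 4x + 6y <= 9  becomes  2x + 3y <= 4, which cuts off fractional
# points such as (x, y) = (2.25, 0) without losing integer solutions.
print(rounding_cut([4, 6], 9))  # ([2, 3], 4)
```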
LiftAndCoverCuts
Lift-and-cover cuts are fairly expensive to compute, but when they can be generated they are often very effective at cutting off portions of the LP feasible region, improving the speed of the solution process.
Engine: LP/Quadratic
Default: 0
Allowed range: 0 or 1
Coping with local optima
MultiStart
When turned on, the GRG engine restarts at multiple starting points, following the gradient from each to its corresponding local optimum. Starting points are selected randomly between the specified lower and upper variable bounds and clustered using a method called multi-level single linkage. The solver selects a representative point from each cluster, then continues to successively smaller clusters based on the likelihood of capturing undiscovered local optima. MultiStart gives the best results when your variable upper and lower bounds are finite, with as narrow a range as possible. If finite bounds are not specified, you must set RequireBounds to 0. PopulationSize controls the number of starting points. TopoSearch can be set for a more sophisticated method of selecting starting points.
Engine: GRG Nonlinear
Default: 0 (off)
Allowed range: 0 or 1
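The restart-and-descend idea (without the clustering step) can be sketched in pure Python; the crude local search and the two-basin test function are illustrative assumptions, not the GRG method:

```python
import random

def local_descent(f, x, step=0.1, iters=200):
    """Crude 1-D descent: take an improving step of +/- step,
    halving the step when neither direction improves."""
    for _ in range(iters):
        for d in (step, -step):
            if f(x + d) < f(x):
                x += d
                break
        else:
            step /= 2.0  # no improving move at this resolution: refine
    return x

def multistart(f, lo, hi, n_starts=20, seed=1):
    """Restart the local search from random points in [lo, hi] and
    return the best local optimum found. The real MultiStart method
    additionally clusters the starting points (multi-level single
    linkage) so that only one descent is run per cluster."""
    rng = random.Random(seed)
    starts = [rng.uniform(lo, hi) for _ in range(n_starts)]
    return min((local_descent(f, x) for x in starts), key=f)

# A two-basin test function: local minimum near x = +2, global
# minimum near x = -2 (the "+ x" term breaks the symmetry).
f = lambda x: (x * x - 4.0) ** 2 + x
print(multistart(f, -5.0, 5.0))  # a point near the global minimum
```

A single descent from a positive start would stop at the inferior local minimum near +2; the restarts are what recover the global one.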
RequireBounds
When MultiStart is used to select random starting positions, points between the bounds specified for each variable are sampled. If finite bounds on some variables are not specified, MultiStart can still be used, but it is likely to be less effective, because starting values must be selected from an infinite range, which is unlikely to cover all possible starting points and thus unlikely to find all the local optima. When RequireBounds is on, as it is by default, an error results if you have selected the MultiStart method without specifying finite bounds on variables, to remind you to specify bounds. If you really intend to use MultiStart without finite bounds on the variables, you must explicitly set RequireBounds to 0.
When using the Evolutionary engine, finite bounds are also important to ensure appropriate sampling for an initial population. Although the engine can still function without bounds, the infinite range that must be explored can dramatically increase the time required to find a solution, so it is recommended that you always specify finite upper and lower bounds when using the Evolutionary engine. If RequireBounds is 1 (the default) when no bounds are specified, an error is reported in order to encourage the use of bounds.
Engine: GRG Nonlinear, Evolutionary
Default: 1 (on)
Allowed range: 0 or 1
TopoSearch
Only used when MultiStart is 1. When set to 1, the MultiStart method uses a topographic search method that fits a topographic surface to all previously sampled starting points in order to estimate the location of hills and valleys in the search space. It then uses this information to find better starting points. Estimating the topography takes more computing time, but in some problems this is more than offset by the improvement in each GRG search.
Engine: GRG Nonlinear
Default: 0 (off)
Allowed range: 0 or 1
See Also
 Selecting the Optimization Engine
 DefineOptimization
 OptInfo
 OptFindIIS
 OptWriteIIS
 OptStatusNum
 Math functions
 Advanced math functions
 Complex number functions