Defines a non-linear programming problem.
Requires Analytica Optimizer, Analytica Power Player with Optimizer, or ADE with Optimizer.
NlpDefine accepts a large number of optional parameters, allowing many variations in the formulation of NLPs. See the Parameters section below.
As of Analytica 4.3, NlpDefine has been removed from the Definition menu and replaced by DefineOptimization. DefineOptimization is usually far more convenient, especially if you have multi-dimensional decision variables, multi-dimensional constraints, multiple decision variable nodes, or multiple constraint types.
As of Analytica 4.3, a few more esoteric features exposed by NlpDefine are not available (yet) in DefineOptimization: (a) the ability to provide the «objNl» and «lhsNl» clues, and (b) the ability to provide analytic gradient and Jacobian expressions.
What is a non-linear program?
A non-linear program is an optimization problem where you wish to find a vector for the decision variables that minimizes (or maximizes) an objective function subject to a set of constraints and variable bounds. Special cases also include unconstrained optimization and finding a solution to a system of non-linear equations. Decision variables may be continuous or mixed-integer.
Linear and quadratic optimization problems can be formulated as NLPs, but when it is possible, you are much better off encoding these as linear or quadratic programs using LpDefine or QpDefine. Linear and quadratic formulations more readily array abstract and suffer from fewer numeric convergence issues.
Structuring an NLP
To encode an NLP in an Analytica model, you begin by identifying a decision variable in your model, which will be set to candidate solutions during the optimization search. This decision variable is nominally a one-dimensional vector, so there will be an index associated with it, as in the following:
Index V := ['x1','x2','x3']
Variable X := Table(V)
You can also use a self-indexed table for X, in which X serves as both the index and the decision variable.
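For example (a sketch; the initial values are arbitrary, and the self-index labels 'x1', 'x2', 'x3' are assumed to have been entered as the index labels of the table), a self-indexed X can be passed as both the Vars and X parameters:

Variable X := Table(Self)(0, 0, 0)    { self-index labels: 'x1','x2','x3' }
NlpDefine( Vars: X, X: X, Obj: Y )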
Using the values of X, your model will compute an objective variable; let's suppose it is named Y. You may have a very complex model, with large numbers of variables between X and Y, utilizing arbitrary Analytica expressions. Y is the objective. For an unconstrained optimization problem, these provide the essential pieces.
It is also possible to use a local variable for X, defined using Var..Do. X may then appear in the expression provided to NlpDefine's parameters obj, lhs, gradient, jacobian, or hessian. This is often used when an NLP is defined inside a User-Defined Function.
An unconstrained minimization problem can be defined using:
NlpDefine( Vars: V, X:X, Obj:Y )
Evaluating NlpDefine returns an NLP object, which displays as <<NLP>> in the result table. This is a problem definition, from which various pieces of information can be extracted, including the solution.
To solve the NLP, use the LpSolution function, which returns a vector, indexed by V (the index provided as the first parameter, Vars), with the solution found. However, it is also important that you check the status using LpStatusText or LpStatusNum to determine why the search terminated and whether a solution was successfully found. Thus, you may have variables such as the following:
Variable My_Nlp := NlpDefine( Vars: V, X: X, Obj: Y )
Variable NlpStatus := LpStatusText(My_Nlp)
Variable X_opt := LpSolution(My_Nlp)
Note that after the search completes, the optimal solution is in X_opt, rather than in X. It is common to use a button with a script such as (X := X_opt), where the surrounding parentheses are necessary, for when you want X to be set to the solution.
A further special case of unconstrained optimization occurs when the decision variable is a scalar, e.g., find scalar X that minimizes Y, or that minimizes (Y-g)^2. In this case, the Vars index is not necessary, and a simple problem can be defined as:
NlpDefine( X:X, Obj:Y )
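As a concrete sketch of the scalar case (the identifiers X, Y, and My_Nlp here are hypothetical), suppose we seek the scalar X minimizing (X - 3)^2:

Variable X := 0
Variable Y := (X - 3)^2
Variable My_Nlp := NlpDefine( X: X, Obj: Y )
Variable X_opt := LpSolution(My_Nlp)    { the search should converge near X = 3 }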
Since so many of the parameters of NlpDefine are optional, it is standard to use the named-parameter syntax.
There is also a GoalSeek function in a library included with Analytica, which does not require Analytica Optimizer. The GoalSeek function uses a very simple Newton-Raphson descent algorithm that may be sufficient in many cases, but it provides fewer options for diagnosing and addressing convergence problems on hard optimization problems. In general, NlpDefine provides higher quality algorithms.
In constrained optimization, which typically is associated with NLPs, we also have one or more non-linear constraints. Here we have constraints of the form
g_j(X) <= b_j,   g_j(X) = b_j,   or   g_j(X) >= b_j
Let's look at an example with a single non-linear constraint. Suppose we want to minimize surface area of a cylinder that is constrained to have a volume of 1, where surface_area and volume are computed in variables of those names. The NLP with one constraint can be defined as
NlpDefine( Vars:V, X:X, obj:surface_area, lhs:volume, sense:'=', rhs:1, lb:0 )
In this formulation, the constraint is encoded using the three parameters lhs, sense, and rhs. Of these, only the lhs can depend on X. The other two, sense and rhs, do not vary during the optimization search. We also include a variable bound here, lb, so that the dimensions (elements of X) are constrained to be non-negative. Note that the obj or lhs could be an expression referencing X directly.
When we have two or more constraints, we must also create a constraint index, to index the set of constraints. Generally the constraint index is defined as a list of labels, with each label providing a meaningful name for the constraint. It is easiest to set up the left-hand side in a table, in its own variable. (Here we've changed the volume constraint from an equality to a minimum required volume).
Index Constraints := ['min volume','taller than wide']
Variable Lhs := Table(Constraints)(volume - 1, h - w)
Variable My_Nlp := NlpDefine( Vars: V, Constraints: Constraints, Obj: surface_area, Lhs: Lhs, sense: '>', rhs: 0, lb: 0 )
NlpDefine accepts a large number of optional parameters. Because of this, it is best to use the named-parameter syntax rather than relying on parameter position. The possible parameters are:
- Vars: optional index. If your solution is vector-valued (anything other than a scalar), an index corresponding to the set of scalar decision variables is required.
- Constraints: optional index: If you have two or more constraints, this index is required to index these constraints. Often defined as a list of labels.
- X: Identifies a variable that holds the candidate solution at each iteration of the search. A local variable identifier or a standard variable identifier may be supplied here. Its initial value is used as the starting point for the search, unless the Guess parameter is supplied.
- Obj: optional expression. The objective function. This should depend on X, and is re-evaluated at each iteration of the search.
- Lhs: optional expression. The left-hand sides of all constraints. These should depend on X, and if there is more than one constraint, should be indexed by the Constraints index. This expression is re-evaluated at each iteration of the search. This may be omitted in an unconstrained optimization problem.
- Rhs: optional. The right-hand side of constraints. This is evaluated when the problem is defined and is not re-evaluated at each iteration. It should not depend on X -- you should move any terms depending on X to the left-hand side. When the rhs is different for different constraints, this should be indexed by Constraints.
- Sense: optional. One of "<", "=", or ">", or "L", "E", "G". The inequalities are non-strict, so "<" means that the lhs is constrained to be less than or equal to the rhs. Defaults to "<". If the sense is different for different constraints, then this should be indexed by Constraints.
- maximize: optional boolean. When omitted, a minimization problem is defined. When Maximize:True is specified, the objective is maximized.
- lb, ub: optional. Lower and upper bounds on the decision variables. If unspecified, the variables are unconstrained from -INF to INF. If different bounds exist for different decision variables, these should be indexed by Vars.
- Ctype: optional, one of "C","B","I","G". Specifies whether the solution variables are continuous or integer values. "C" (default)=continuous, "B"=binary 0/1 valued, "I"=integer valued, "G"=group valued (see Group). For mixed-integer, where some variables are continuous and others integer, Ctype will be indexed by Vars.
- ObjNl, LhsNl: optional, one of "L","N","D". Specifies whether the dependence of the objective, or lhs, on the decision variables is linear (L), non-linear continuous (N), or discontinuous (D). This information is used by the algorithms to select the most appropriate techniques for the problem and may speed the search. ObjNl may be indexed by Vars if the objective is linearly dependent on some decision variables but non-linearly dependent on others. Likewise, LhsNl may in general be indexed by Vars and Constraints.
- Guess: optional. An initial guess, used as the starting point for the search. When omitted, the current value of X is used. Guess is usually indexed by Vars. If you include another index here, you'll define multiple NLPs starting from different starting positions, which provides one mechanism for dealing with the problem of local minima by using multiple starting points.
- Gradient,Jacobian,hessian,lhsHessian: optional expressions. These are all expressions that depend on X and compute the gradient or hessian of the objective function, or the jacobian and hessian of the constraint lhs. When correctly supplied to NlpDefine, these can speed up the search by eliminating extra evaluations otherwise carried out to estimate these derivatives numerically, and may help with numeric "noise" that is often present in numeric estimations of derivatives. The Hessians are not used by the engines supplied with Analytica Optimizer, but may be used by some add-on engines.
- Group: optional. Used with group-integer variables. When N variables belong to the same group, a solution requires each variable in the group to have a value from 1..N, with each having a different value. This parameter specifies the group number for each variable. If the variables are partitioned into multiple groups, then this parameter would be indexed by Vars, with some having the value 1, others the value 2, etc., indicating which variables belong to the first group, second group, etc.
- Engine: optional. Specifies which engine is to be used to solve the problem. The standard Analytica Optimizer license allows "GRG Nonlinear" or "Evolutionary". If you have a license for an add-on engine, this may be something else, such as "Knitro", etc. To see possible values, evaluate: SolverInfo("Availengines")
- Parameter, Setting: optional. The name(s) and value(s) of search control settings. See the section on Specifying Search Control Settings below.
- TraceFile: optional. A filename. When the algorithm runs, a trace of the search is written to this file, which is often useful for debugging convergence problems. The file can be viewed in any text editor. Relative filenames are interpreted relative to the CurrentDataDirectory.
- SetContext: optional list of variables. Zero or more context variables that are to be array abstracted over, so that a separate optimization is carried out for each value of those variables. See Getting NLPs to Array Abstract.
- Over: optional list of indexes. Specifies that a separate optimization is to be carried out over each index that is listed. See Getting NLPs to Array Abstract.
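The multiple-starting-point technique described under the Guess parameter can be sketched as follows (the Restart index, the bounds 0..10, and the variable names are assumptions for illustration):

Index Restart := 1..10
Variable Guesses := Uniform(0, 10, Over: V, Restart)
Variable My_Nlps := NlpDefine( Vars: V, X: X, Obj: Y, Guess: Guesses )
Variable Solutions := LpSolution(My_Nlps)    { indexed by V and Restart }

Because Guesses carries the extra Restart index, My_Nlps is an array of NLP objects, one per starting point; comparing the objective at each solution lets you keep the best local optimum found.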
Specifying Search Control Settings
In order to get acceptable results on hard optimization problems, it is sometimes necessary to adjust the settings of the optimizer engine. This may include adjustments to numeric precision, population size, time limits or other termination criteria, etc.
Search control settings are specified via the Parameter and Setting parameters of NlpDefine. For example, when using the Evolutionary solver, you may want to set the mutation rate, which would be done as follows:
NlpDefine( ..., Engine:"Evolutionary", Parameter:"MutationRate", Setting: 0.15 )
In order to change a setting, you must know the setting name and the value you wish to use. The possible setting names depend on which engine is used, but the possible settings for any engine can be accessed using the SolverInfo function. To see the set of parameters, their default values, and their allowed ranges for the evolutionary solver, use:
SolverInfo( Engine:"Evolutionary", Item:["MinSetting","MaxSetting","Defaults"] )
The above example sets a single setting, but in general you may need to specify multiple search settings. To do this, Parameter and Setting must have a common index. A common way to do this is to define a self-indexed table, filling in the setting names as index labels and the table body cells with the settings. When set up in this fashion, specify only the Setting parameter, omitting Parameter, and Analytica will automatically find the parameter names in the index labels.
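For example (a sketch; "MutationRate" is the setting name used earlier in this section, "PopulationSize" is an assumed additional setting name, and both values are arbitrary), a self-indexed settings table might look like:

Variable EvolSettings := Table(Self)(0.15, 100)    { self-index labels: 'MutationRate', 'PopulationSize' }
NlpDefine( ..., Engine: "Evolutionary", Setting: EvolSettings )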
Debugging Convergence Issues
Non-linear optimization problems can be very difficult to solve, so it is often necessary to debug what is occurring during the search. A feature new to Analytica 4.0 that is useful for monitoring search progress is the TraceFile. If you specify the optional TraceFile parameter with a file name, Analytica will write a trace of the search to that file, which you can then view in any standard text editor. The trace file records each point searched, the objective at that point, and, if used, gradients and Jacobians.
An example usage of this parameter is:
NlpDefine( ..., TraceFile: "C:\trace.log" )
If a full file path is not specified, the filename is treated relative to the CurrentDataDirectory.
Getting NLPs to Array Abstract
Analytica's ability to array-abstract over extraneous indexes is a core feature, and perhaps the most powerful and useful feature in Analytica. However, Analytica's array abstraction cannot always automatically abstract over non-linear optimization problems. (Note: Linear and quadratic optimization problems are fully array abstractable, which is a key reason they are preferred over non-linear formulations when at all possible.)
If you have defined an NLP, but an extraneous dimension is present, it may imply that several NLPs need to be solved independently. In many cases, Analytica will array abstract over any extraneous dimensions automatically. In fact, this will occur if the extraneous dimension appears in any parameter to NlpDefine other than Obj, Lhs, Gradient, Jacobian, Hessian, or LhsHessian. For example, suppose we had a variable Required_volume, with the constraint that volume >= required_volume, encoded as:
NlpDefine( ..., lhs: volume, sense:">", rhs: required_volume )
When required_volume is set to 1, we have a single optimization problem. But when required_volume is set to a list of values, such as [1,2,3,4], we now have multiple NLPs to solve, one for each value of required_volume.
However, array abstraction is not always automatic (or can be inefficient) when an extraneous dimension appears in the obj or lhs parameters. Here, the optimizer selects a candidate solution vector, plugs it into X, and evaluates the objective function. If the objective function comes out to be an array (i.e., a non-scalar), then the optimization is ambiguous -- which element of that array is being maximized? This really should be treated as multiple independent optimization problems, but the presence of more than one problem isn't detected until the search has started.
If an extraneous dimension can feed into Obj or Lhs (or into Gradient, Jacobian, Hessian or lhsHessian when those are used), a variety of techniques are possible to structure your model and optimization problem so that it can still flexibly array abstract. However, although array abstraction is usually fully automatic in Analytica, in this case the modeler may have to put in some extra effort to ensure that it is abstractable.
One very effective approach, when it is possible, is to encode the objective and lhs constraints as user-defined functions that take the decision vector, X, as a parameter. In this structuring, the entire computation is within user-defined functions, rather than variable objects. One can then wrap the NlpDefine function in its own user-defined function, passing all inputs to it as parameters with the expected dimensionality explicitly specified. Inside that function, a local variable, X, is declared using Var..Do. This structuring, when done right, guarantees full array abstractability, and will return one NLP object for each separate optimization. A disadvantage of this approach is that all the logic to compute the objective must be encased within user-defined functions, which can hamper transparency and may not be readily applicable to a model that already exists as an influence diagram.
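A minimal sketch of this structuring, using the earlier cylinder example (the function names, the Dim index, and the starting point are assumptions for illustration):

Index Dim := ['radius','height']
Function Cyl_area( x : Array[Dim] ) := 2*Pi*x[Dim='radius']^2 + 2*Pi*x[Dim='radius']*x[Dim='height']
Function Cyl_volume( x : Array[Dim] ) := Pi*x[Dim='radius']^2 * x[Dim='height']
Function Solve_cylinder( reqVol : Atom ) :=
    Var x := Array(Dim, [1, 1]) Do
        NlpDefine( Vars: Dim, X: x, Obj: Cyl_area(x), lhs: Cyl_volume(x), sense: '=', rhs: reqVol, lb: 0 )

Because reqVol is declared with the Atom qualifier, a call such as Solve_cylinder([1, 2, 3]) array abstracts, returning one NLP object per required volume.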
When the objective function (and constraint) logic is encoded in the form of an influence diagram, two optional parameters to NlpDefine (and new to 4.0) may be utilized to ensure array abstractability, namely SetContext and Over.
In virtually all cases, the dimensionality of the objective will be the same for every possible decision vector. So if you evaluate the objective function once before the search starts, you can detect exactly which dimensions are always present. If you place these dimensions (or an array having these dimensions) in the Over parameter to NlpDefine, then Analytica will set up a separate optimization problem for each combination of those dimensions. So, for example, the following can be used:
NlpDefine( Vars:Vars, X:X, obj: Y, Over: Y, ... )
If you have set up your X variable initially so that it is dimensioned only by Vars, then when Y is evaluated before the search begins, this value will reveal any extraneous dimensions. The Over parameter tells Analytica about these before the optimization search begins, and allows NlpDefine to return separate NLP objects for each separate optimization problem.
Alternatively, if you know in advance that indexes I and J will occur extraneously in obj or lhs, then you can list these explicitly without having to infer them:
NlpDefine( Vars:Vars, X:X, Obj:Y, lhs:lhs, Over:I,J )
Using the Over parameter as the only solution may produce correct results, but in many cases it can introduce huge inefficiencies in the computation, possibly slowing down the optimization search by orders of magnitude. Hence, when extraneous dimensions can be identified in advance, judicious use of SetContext may be preferred.
When Over:I is used, a separate NLP instance is created for each value of I. However, when one of these is being solved, the model may introduce the I dimension into its array values and carry this along all the way to the objective. The NLP knows which value of I it is handling, and it will automatically slice out the element of the objective that this particular NLP instance needs, but the time spent computing the results for those other values of I is thrown away and wasted. In contrast, the SetContext parameter can be used to prevent the I dimension from being introduced into the computation in the first place.
As an example, suppose our decision vector is a portfolio allocation: the amount of money to place in each possible investment vehicle. The objective is some function of this allocation and discount_rate. When discount_rate is a single value, the objective is a single value and no abstraction is necessary. But now suppose you run your model for a list of possible discount rates. The best option here is to define the NLP in the context of discount_rate, as follows:
NlpDefine( Vars:Investment, X:allocation, ..., SetContext: Discount_rate )
When defined in this fashion, Analytica sets discount_rate to its first value and solves that optimization problem to completion. The discount_rate index never gets introduced into the computation, so no extra computation is wasted. When that computation completes, it sets discount_rate to the second value and runs the optimization again. When these have all completed, the resulting solutions are indexed by Discount_rate.
SetContext can be used when discount_rate is defined as a Choice, where it might evaluate to an index or to a single scalar. In the scalar case, SetContext is not necessary, but does no harm. If there are several variables such as discount_rate in your model, they can all be listed in the SetContext parameter, e.g.:
NlpDefine(..., SetContext: Discount_rate, Interest_rate, Capital_investment )
SetContext cannot automatically adapt to arbitrary introductions of new dimensions, but if you know which extraneous dimensions might be introduced, it can ensure that your NLP will array abstract over those dimensions.