mxOption {OpenMx}    R Documentation
Set or Clear an Optimizer Option
Description
The function sets, shows, or clears an option that is specific to the optimizer in the back-end.
Usage
mxOption(model=NULL, key=NULL, value, reset = FALSE)
Arguments
model | An MxModel object or NULL.
key | The name of the option.
value | The value of the option.
reset | If TRUE, reset all options to their defaults.
Details
mxOption is used to set, clear, or query an option (given in the ‘key’ argument) in the back-end optimizer. Valid option keys are listed below.
Use value = NULL to remove an existing option. Leaving value blank will return the current value of the option specified by ‘key’.
To reset all options to their default values, use ‘reset = TRUE’. When reset = TRUE, ‘key’ and ‘value’ are ignored.
If the ‘model’ argument is set to NULL, the default optimizer options (i.e., those applying to all models by default) will be set.
To see the defaults, use getOption('mxOptions').
Before the model is submitted to the back-end, all keys and values are converted into strings using the as.character function.
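For example, a minimal sketch of the query, set, and reset interface described above (the key "Function precision" is a real option, but the numeric value here is arbitrary and for illustration only):
mxOption(NULL, "Function precision")            # query the current global value
mxOption(NULL, "Function precision", 1e-12)     # set the global default
mxOption(reset = TRUE)                          # restore all global options to their defaults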
Optimizer specific options
The “Default optimizer” option can only be set globally (i.e., with model=NULL), and not locally (i.e., specifically to a given MxModel). Although the checkpointing options may be set globally, OpenMx's behavior is only affected by locally set checkpointing options (that is, global checkpointing options are ignored at runtime).
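For instance, a brief sketch of changing the default optimizer, which can only be done globally (optimizer availability may depend on how OpenMx was built):
mxOption(NULL, "Default optimizer", "SLSQP")   # global; cannot be set on an individual MxModel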
Gradient-based optimizers require the gradient of the fit function. When analytic derivatives are not available, the gradient is estimated numerically, and a variety of options control that numerical estimation. One such option for CSOLNP and SLSQP is the gradient algorithm: CSOLNP uses the forward method by default, while SLSQP uses the central method. Per free parameter per gradient, the forward method requires “Gradient iterations” function evaluations, while the central method requires twice as many. Users can change the default method for either of these optimizers by setting the “Gradient algorithm” option. NPSOL usually uses the forward method, but adaptively switches to the central method under certain circumstances.
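For example, a hedged sketch of changing the finite-difference method; the values follow the “Gradient algorithm” entry in the options table below, and myModel is a placeholder for an existing MxModel:
mxOption(NULL, "Gradient algorithm", "central")                  # global default for CSOLNP/SLSQP
# myModel <- mxOption(myModel, "Gradient algorithm", "forward")  # or set it for one model only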
Options “Gradient step size”, “Gradient iterations”, and “Function precision” have on-load global defaults of "Auto". If the value "Auto" is in effect for any of these three options at runtime, then OpenMx selects a reasonable numerical value in its place. These automated numerical values are intended to (1) adjust for the limited precision of the algorithm for computing multivariate-normal probability integrals, and (2) calculate accurate numeric derivatives at the optimizer's solution. If the user replaces "Auto" with a valid numerical value, then OpenMx uses that value as-is.
By default, CSOLNP uses a step size of 10^-7 whereas SLSQP uses 10^-5. The purpose of this difference is to obtain roughly the same accuracy given other differences in numerical procedure. If you set a non-default “Gradient step size”, it will be used as-is. NPSOL ignores “Gradient step size”, and instead uses a function of mxOption “Function precision” to determine its gradient step size.
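For illustration, a sketch of replacing the "Auto" defaults with explicit numerical values (the specific numbers are arbitrary):
mxOption(NULL, "Function precision", 1e-10)   # used by all three optimizers
mxOption(NULL, "Gradient step size", 1e-6)    # used by CSOLNP and SLSQP; ignored by NPSOL
mxOption(NULL, "Gradient iterations", 2)      # only affects SLSQP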
Option “Analytic Gradients” affects all three optimizers, but some options only affect certain optimizers. Option “Gradient algorithm” is used by CSOLNP and SLSQP, and ignored by NPSOL. Option “Gradient iterations” only affects SLSQP. Option “Gradient step size” is used slightly differently by SLSQP and CSOLNP, and is ignored by NPSOL (see mxComputeGradientDescent() for details).
If an mxModel contains mxConstraints, NPSOL is given 0.4 times the value of the option “Feasibility tolerance”. If there are no constraints, NPSOL is given a hard-coded value of 1e-5 (its own native default).
Note: Where constraints are present, NPSOL is given 0.4 times the value of the mxOption “Feasibility Tolerance”, and this is about a million times bigger than NPSOL's own native default. Values of “Feasibility Tolerance” around 1e-5 may be needed to get constraint performance similar to NPSOL's default. Note also that NPSOL's criterion for returning a status code of 0 versus 1 for a given solution depends partly on “Optimality tolerance”.
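For example, a sketch of the adjustment suggested above for a model containing mxConstraints (constrainedModel is a placeholder for an existing MxModel):
constrainedModel <- mxOption(constrainedModel, "Feasibility tolerance", 1e-5)
# With constraints present, NPSOL receives 0.4 times this value at runtime.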
For a block of n ordinal variables, the maximum number of integration points that OpenMx may use to calculate multivariate-normal probability integrals is given by
mvnMaxPointsA + mvnMaxPointsB*n + mvnMaxPointsC*n*n + exp(mvnMaxPointsD + mvnMaxPointsE * n * log(mvnRelEps)).
Integral approximation is stopped once either ‘mvnAbsEps’ or ‘mvnRelEps’ is satisfied. Use of ‘mvnAbsEps’ is deprecated.
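As a sketch, the limit above can be computed from the current global option values; the option names are those listed in the table below, the values are coerced with as.numeric since options may be stored as strings, and n = 3 is a hypothetical block size:
opts <- getOption('mxOptions')
n <- 3
maxPoints <- as.numeric(opts$mvnMaxPointsA) +
  as.numeric(opts$mvnMaxPointsB) * n +
  as.numeric(opts$mvnMaxPointsC) * n * n +
  exp(as.numeric(opts$mvnMaxPointsD) +
      as.numeric(opts$mvnMaxPointsE) * n * log(as.numeric(opts$mvnRelEps)))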
The maximum number of major iterations (the option “Major iterations”) for optimization for NPSOL can be specified either by using a numeric value (such as 50, 1000, etc) or by specifying a user-defined function. The user-defined function should accept two arguments as input, the number of parameters and the number of constraints, and return a numeric value as output.
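For example, a sketch of a user-defined iteration limit (the particular scaling rule is arbitrary and for illustration only; myModel is a placeholder for an existing MxModel):
majorIter <- function(nParams, nConstraints) {
  # return a numeric limit that grows with the size of the problem
  max(1000, 3 * nParams + 10 * nConstraints)
}
myModel <- mxOption(myModel, "Major iterations", majorIter)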
OpenMx options
Calculate Hessian | [Yes | No] | calculate the Hessian explicitly after optimization.
Standard Errors | [Yes | No] | return standard error estimates from the explicitly calculated Hessian.
Default optimizer | [NPSOL | SLSQP | CSOLNP] | the gradient-descent optimizer to use |
Number of Threads | [0|1|2|...|10|...] | number of threads used for optimization. Default value is taken from the environment variable OMP_NUM_THREADS or, if that is not set, 2. |
Feasibility tolerance | r | the maximum acceptable absolute violations in linear and nonlinear constraints. |
Optimality tolerance | r | the accuracy with which the final iterate approximates a solution to the optimization problem; roughly, the number of reliable significant figures that the fitfunction value should have at the solution. |
Gradient algorithm | see list | finite difference method, either 'forward' or 'central'. |
Gradient iterations | 1:4 | the number of Richardson extrapolation iterations |
Gradient step size | r | amount of change made to free parameters when numerically calculating gradient |
Analytic Gradients | [Yes | No] | should the optimizer use analytic gradients (if available)? |
loglikelihoodScale | i | factor by which the loglikelihood is scaled. |
Parallel diagnostics | [Yes | No] | whether to issue diagnostic messages about use of multiple threads |
Nudge zero starts | [TRUE | FALSE] | Should OpenMx "nudge" starting values of zero to 0.1 at runtime? |
Status OK | character vector | Status codes that are considered to indicate a successful optimization |
Max minutes | numeric | Maximum backend elapsed time, in minutes |
NPSOL-specific options
Nolist | | suppresses printing of the options.
Print level | i | the value of i controls the amount of printout produced by the major iterations |
Minor print level | i | the value of i controls the amount of printout produced by the minor iterations |
Print file | i | for i > 0 a full log is sent to the file with logical unit number i. |
Summary file | i | for i > 0 a brief log will be output to file i. |
Function precision | r | a measure of accuracy with which the fitfunction and constraint functions can be computed. |
Infinite bound size | r | if r > 0 defines the "infinite" bound bigbnd. |
Major iterations | i or a function | the maximum number of major iterations before termination. |
Verify level | [-1:3 | Yes | No] | see NPSOL manual. |
Line search tolerance | r | controls the accuracy with which a step is taken. |
Derivative level | [0-3] | see NPSOL manual. |
Hessian | [Yes | No] | return the Hessian (Yes) or the transformed Hessian (No). |
Step Limit | r | maximum change in free parameters at first step of linesearch. |
Checkpointing options
Always Checkpoint | [Yes | No] | whether to checkpoint all models during optimization.
Checkpoint Directory | path | the directory into which checkpoint files are written. |
Checkpoint Prefix | string | the string prefix to add to all checkpoint filenames. |
Checkpoint Fullpath | path | overrides the directory and prefix (useful to output to /dev/fd/2) |
Checkpoint Units | see list | the type of units for checkpointing: 'minutes', 'iterations', or 'evaluations'. |
Checkpoint Count | i | the number of units between checkpoint intervals. |
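Because globally set checkpointing options are ignored at runtime (see above), these options are typically set on the model itself. A brief sketch (myModel is a placeholder for an existing MxModel, and the values are illustrative):
myModel <- mxOption(myModel, "Always Checkpoint", "Yes")
myModel <- mxOption(myModel, "Checkpoint Units", "evaluations")
myModel <- mxOption(myModel, "Checkpoint Count", 100)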
Model transformation options
Error Checking | [Yes | No] | whether model consistency checks are performed in the OpenMx front-end |
No Sort Data | character vector | model names for which FIML data sorting is not performed.
RAM Inverse Optimization | [Yes | No] | whether to enable solve(I - A) optimization |
RAM Max Depth | i | the maximum depth to be used when solve(I - A) optimization is enabled |
Multivariate normal integration parameters
maxOrdinalPerBlock | i | maximum number of ordinal variables to evaluate together |
mvnMaxPointsA | i | base number of integration points |
mvnMaxPointsB | i | number of integration points per ordinal variable |
mvnMaxPointsC | i | number of integration points per squared ordinal variables |
mvnMaxPointsD | i | see details |
mvnMaxPointsE | i | see details |
mvnAbsEps | i | absolute error tolerance |
mvnRelEps | i | relative error tolerance |
Value
If a model is provided, it is returned with the optimizer option either set or cleared. If value is empty, the current value is returned.
References
The OpenMx User's guide can be found at https://openmx.ssri.psu.edu/documentation/.
See Also
See mxModel(), as almost all uses of mxOption() are via an mxModel whose options are set or cleared. See mxComputeGradientDescent() for details on how different optimizers are affected by different options. See as.statusCode for information about the Status OK option.
Examples
# set the Number of Threads (cores to use)
mxOption(key="Number of Threads", value=imxGetNumThreads())
testModel <- mxModel(model = "testModel5") # make a model to use for example
testModel$options # show the model options (none yet)
options()$mxOptions # list all mxOptions (global settings)
testModel <- mxOption(testModel, "Function precision", 1e-5) # set precision
testModel <- mxOption(testModel, "Function precision", NULL) # clear precision
# N.B. This is model-specific precision (defaults to global setting)
# Turning off the Hessian and standard errors may speed up optimization,
# at the cost of not getting standard errors
testModel <- mxOption(testModel, "Calculate Hessian", "No")
testModel <- mxOption(testModel, "Standard Errors" , "No")
testModel$options # see the list of options you set