Cluster_Gauss_Newton_method {CGNM} | R Documentation

Find multiple minimizers of the nonlinear least squares problem.

`argmin_x ||f(x)-y*||`

where

f: nonlinear function (e.g., mathematical model)

y*: target vector (e.g., observed data to fit the mathematical model)

x: variable of the nonlinear function whose values we seek so as to minimize the difference between the nonlinear function and the target vector; such values are the minimizers (e.g., model parameters)

Parameter estimation problems for mathematical models can often be formulated as nonlinear least squares problems. In this context f can be thought of as the model, x as the parameters, and y* as the observations. CGNM iteratively estimates minimizers of the nonlinear least squares problem from various initial estimates, and hence finds multiple minimizers. Full details of the algorithm and a comparison with conventional methods are available in the following publication; please also cite it when this algorithm is used in your research: Aoki et al. (2020). Cluster Gauss–Newton method. Optimization and Engineering, 1-31. doi:10.1007/s11081-020-09571-2. As illustrated in this paper, CGNM is faster and more robust than repeatedly applying a conventional optimization/nonlinear least squares algorithm from various initial estimates. In addition, CGNM achieves this speed while treating the nonlinear function as a black-box function (e.g., it does not require the adjoint equation of a system of ODEs, since the function does not have to be based on a system of ODEs).
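As a minimal illustration of the objective being minimized, the sketch below evaluates ||f(x)-y*||^2 for a hypothetical one-parameter exponential model; `f`, `y_star`, and `residual_ss` are illustrative stand-ins, not part of the CGNM package.

```r
# Hypothetical model f evaluated at three time points
f <- function(x) exp(-x * c(1, 2, 3))
# Hypothetical target vector y* (e.g., observed data)
y_star <- c(0.60, 0.37, 0.22)

# Sum-of-squares residual ||f(x) - y*||^2 that CGNM drives down,
# starting from many initial estimates of x
residual_ss <- function(x) sum((f(x) - y_star)^2)

residual_ss(0.5)  # small: x = 0.5 fits the target well
residual_ss(5)    # large: x = 5 fits poorly
```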

```
Cluster_Gauss_Newton_method(
  nonlinearFunction,
  targetVector,
  initial_lowerRange,
  initial_upperRange,
  lowerBound = NA,
  upperBound = NA,
  ParameterNames = NA,
  stayIn_initialRange = FALSE,
  num_minimizersToFind = 250,
  num_iteration = 25,
  saveLog = TRUE,
  runName = "",
  textMemo = "",
  algorithmParameter_initialLambda = 1,
  algorithmParameter_gamma = 2,
  algorithmVersion = 3,
  initialIterateMatrix = NA,
  targetMatrix = NA,
  keepInitialDistribution = NA
)
```

| Argument | Default |
| --- | --- |
| `nonlinearFunction` | (required input) |
| `targetVector` | (required input) |
| `initial_lowerRange` | (required input) |
| `initial_upperRange` | (required input) |
| `lowerBound` | NA |
| `upperBound` | NA |
| `ParameterNames` | NA |
| `stayIn_initialRange` | FALSE |
| `num_minimizersToFind` | 250 |
| `num_iteration` | 25 |
| `saveLog` | TRUE |
| `runName` | `""` |
| `textMemo` | `""` |
| `algorithmParameter_initialLambda` | 1 |
| `algorithmParameter_gamma` | 2 |
| `algorithmVersion` | 3 |
| `initialIterateMatrix` | NA |
| `targetMatrix` | NA |
| `keepInitialDistribution` | NA |

A list containing the matrices X, Y, residual_history, and initialX, as well as a list runSetting.

X: *a num_minimizersToFind by n matrix* which stores the approximate minimizers of the nonlinear least squares problem in each row. In the context of model fitting they are **the estimated parameter sets**.

Y: *a num_minimizersToFind by m matrix* which stores the nonlinearFunction evaluated at the corresponding approximate minimizers in matrix X above. In the context of model fitting each row corresponds to **the model simulations**.

residual_history: *a num_iteration by num_minimizersToFind matrix* storing the sum of squares residual for all iterations.

initialX: *a num_minimizersToFind by n matrix* which stores the set of initial iterates.

runSetting: a list containing all the input variables to Cluster_Gauss_Newton_method (i.e., nonlinearFunction, targetVector, initial_lowerRange, initial_upperRange, algorithmParameter_initialLambda, algorithmParameter_gamma, num_minimizersToFind, num_iteration, saveLog, runName, textMemo).
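A common post-processing step is to pick the row of X whose final sum-of-squares residual is smallest. The sketch below assumes only the return structure described above; `mock_result` is a hand-built stand-in for an actual `Cluster_Gauss_Newton_method` result.

```r
# Mock result mimicking the documented structure: X holds one candidate
# minimizer per row; residual_history holds one row per iteration and
# one column per minimizer.
mock_result <- list(
  X = matrix(c(1.1, 2.0,
               0.9, 2.1,
               5.0, 0.1), ncol = 2, byrow = TRUE),
  residual_history = matrix(c(3.0, 2.5, 9.0,   # iteration 1
                              0.4, 0.1, 7.5),  # final iteration
                            nrow = 2, byrow = TRUE)
)

# Residuals at the last iteration, one per candidate minimizer
final_residuals <- mock_result$residual_history[nrow(mock_result$residual_history), ]

# Row of X with the smallest final residual = best parameter set found
best_index <- which.min(final_residuals)
best_parameters <- mock_result$X[best_index, ]
```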

```
## flip-flop kinetics (an example known to have two distinct solutions)
model_analytic_function = function(x){
  observation_time = c(0.1, 0.2, 0.4, 0.6, 1, 2, 3, 6, 12)
  Dose = 1000
  F = 1
  ka = x[1]
  V1 = x[2]
  CL_2 = x[3]
  t = observation_time
  Cp = ka*F*Dose/(V1*(ka - CL_2/V1))*(exp(-CL_2/V1*t) - exp(-ka*t))
  log10(Cp)
}

observation = log10(c(4.91, 8.65, 12.4, 18.7, 24.3, 24.5, 18.4, 4.66, 0.238))

CGNM_result = Cluster_Gauss_Newton_method(
  nonlinearFunction = model_analytic_function,
  targetVector = observation, num_iteration = 10, num_minimizersToFind = 100,
  initial_lowerRange = c(0.1, 0.1, 0.1), initial_upperRange = c(10, 10, 10),
  saveLog = FALSE)

acceptedApproximateMinimizers(CGNM_result)

## Not run:
library(RxODE)

model_text="
d/dt(X_1)=-ka*X_1
d/dt(C_2)=(ka*X_1-CL_2*C_2)/V1"
model = RxODE(model_text)

# define nonlinearFunction
model_function = function(x){
  observation_time = c(0.1, 0.2, 0.4, 0.6, 1, 2, 3, 6, 12)
  theta <- c(ka = x[1], V1 = x[2], CL_2 = x[3])
  ev <- eventTable()
  ev$add.dosing(dose = 1000, start.time = 0)
  ev$add.sampling(observation_time)
  odeSol = model$solve(theta, ev)
  log10(odeSol[, "C_2"])
}

observation = log10(c(4.91, 8.65, 12.4, 18.7, 24.3, 24.5, 18.4, 4.66, 0.238))

CGNM_result = Cluster_Gauss_Newton_method(nonlinearFunction = model_function,
  targetVector = observation, saveLog = FALSE,
  initial_lowerRange = c(0.1, 0.1, 0.1), initial_upperRange = c(10, 10, 10))
## End(Not run)
```

[Package *CGNM* version 0.6.5 Index]