h2o4gpu.elastic_net_classifier {h2o4gpu}    R Documentation
Elastic Net Classifier
Description
Elastic Net Classifier
Usage
h2o4gpu.elastic_net_classifier(alpha = 1, l1_ratio = 0.5,
fit_intercept = TRUE, normalize = FALSE, precompute = FALSE,
max_iter = 5000L, copy_X = TRUE, tol = 0.01, warm_start = FALSE,
positive = FALSE, random_state = NULL, selection = "cyclic",
n_gpus = -1L, lambda_stop_early = TRUE, glm_stop_early = TRUE,
glm_stop_early_error_fraction = 1, verbose = FALSE, n_threads = NULL,
gpu_id = 0L, lambda_min_ratio = 1e-07, n_lambdas = 100L, n_folds = 5L,
tol_seek_factor = 0.1, store_full_path = 0L, lambda_max = NULL,
lambdas = NULL, double_precision = NULL, order = NULL,
backend = "h2o4gpu")
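For orientation, here is a minimal fit/predict sketch. It assumes the package's fit() and predict() generics and the magrittr pipe re-exported by h2o4gpu, and it uses a two-class subset of iris as placeholder data; it is an illustrative sketch, not a verbatim example from the package.

library(h2o4gpu)

# Placeholder data: predict setosa vs. not-setosa from the iris measurements
x <- iris[, 1:4]
y <- as.integer(iris$Species == "setosa")

# Defaults: alpha = 1, l1_ratio = 0.5, 5-fold CV over a path of 100 lambdas
model <- h2o4gpu.elastic_net_classifier() %>% fit(x, y)
preds <- model %>% predict(x)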
Arguments
alpha
Constant that multiplies the penalty terms. Defaults to 1.0. See the notes for the exact mathematical meaning of this parameter.
l1_ratio
The ElasticNetSklearn mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an L2 penalty; for l1_ratio = 1 it is an L1 penalty; for 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
fit_intercept
Whether the intercept should be estimated or not. If FALSE, the data is assumed to be already centered.
normalize
This parameter is ignored when fit_intercept is set to FALSE. If TRUE, the regressors X will be normalized before regression by subtracting the mean and dividing by the L2-norm.
precompute
Whether to use a precomputed Gram matrix to speed up calculations. The Gram matrix can also be passed as argument. For sparse input this option is always FALSE to preserve sparsity.
max_iter
The maximum number of iterations.
copy_X
If TRUE, X will be copied; else, it may be overwritten.
tol
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
warm_start
When set to TRUE, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution.
positive
When set to TRUE, forces the coefficients to be positive.
random_state
The seed of the pseudo random number generator that selects a random feature to update. If an integer, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if NULL, the random number generator is the RandomState instance used by np.random. Used when selection is 'random'.
selection
If set to 'random', a random coefficient is updated at each iteration rather than looping over features sequentially by default. Setting this to 'random' often leads to significantly faster convergence, especially when tol is higher than 1e-4.
n_gpus
Number of GPUs to use in the GLM solver.
lambda_stop_early
Stop early when there is no more relative improvement on the train or validation set.
glm_stop_early
Stop early when there is no more relative improvement in the primal and dual residuals for ADMM.
glm_stop_early_error_fraction
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much).
verbose
Print verbose information to the console if set to > 0.
n_threads
Number of threads to use on the GPU. Each thread is an independent model builder.
gpu_id
ID of the GPU on which the algorithm should run.
lambda_min_ratio
Ratio of the minimum lambda to the maximum lambda, used in the lambda search (see the example after this list).
n_lambdas
Number of lambdas to be used in a search.
n_folds
Number of cross-validation folds.
tol_seek_factor
Factor of tolerance to seek once below null model accuracy. Default is 1e-1, so for tol = 1e-2 a tolerance of 1e-3 is sought once below null model accuracy.
store_full_path
Whether to store the full solution for all alphas and lambdas. If 1, predict will compute both the best and the full-path predictions.
lambda_max
Maximum lambda value to use. Default is NULL, in which case the standard maximum lambda is computed internally.
lambdas
Overrides n_lambdas, lambda_max, and lambda_min_ratio.
double_precision
Internally set unless using _ptr methods. Value can be either 0 (float32) or 1 (float64).
order
Order of the data. Default is NULL; unless using _ptr methods, row-major ('r') or column-major ('c') order is determined internally.
backend
Which backend to use. Options are 'auto', 'sklearn', and 'h2o4gpu'. The actual backend used is saved as an attribute.
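To illustrate how the regularization arguments combine (the example referenced above), a hedged sketch follows. It assumes the penalty follows scikit-learn's ElasticNet convention, alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2), so alpha scales the whole penalty while l1_ratio splits it between the L1 and L2 terms; n_lambdas, lambda_min_ratio, and n_folds control the cross-validated lambda search. The objects x and y are the placeholders from the sketch above.

# Mostly-L1 penalty, a denser lambda path, and the full path stored for inspection
model <- h2o4gpu.elastic_net_classifier(
  alpha = 1,
  l1_ratio = 0.9,            # 90% L1 / 10% L2 mix
  n_lambdas = 200L,          # search over 200 lambda values
  lambda_min_ratio = 1e-05,  # stop the path at lambda_max * 1e-05
  n_folds = 5L,              # 5-fold cross validation
  store_full_path = 1L,      # keep the solution for every lambda
  n_gpus = 1L,               # restrict the solver to a single GPU
  gpu_id = 0L
) %>% fit(x, y)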