h2o4gpu.gradient_boosting_regressor {h2o4gpu}    R Documentation
Gradient Boosting Regressor
Description
Gradient Boosting Regressor
Usage
h2o4gpu.gradient_boosting_regressor(loss = "ls", learning_rate = 0.1,
n_estimators = 100L, subsample = 1, criterion = "friedman_mse",
min_samples_split = 2L, min_samples_leaf = 1L,
min_weight_fraction_leaf = 0, max_depth = 3L, min_impurity_decrease = 0,
min_impurity_split = NULL, init = NULL, random_state = NULL,
max_features = "auto", alpha = 0.9, verbose = 0L,
max_leaf_nodes = NULL, warm_start = FALSE, presort = "auto",
colsample_bytree = 1, num_parallel_tree = 1L, tree_method = "gpu_hist",
n_gpus = -1L, predictor = "gpu_predictor", objective = "reg:linear",
booster = "gbtree", n_jobs = 1L, gamma = 0L, min_child_weight = 1L,
max_delta_step = 0L, colsample_bylevel = 1L, reg_alpha = 0L,
reg_lambda = 1L, scale_pos_weight = 1L, base_score = 0.5,
missing = NULL, backend = "h2o4gpu", ...)
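A minimal usage sketch follows, assuming the h2o4gpu R conventions of training with the fit() generic and scoring with predict() via the magrittr pipe (re-exported by the package); the data and parameter values are illustrative only.

library(h2o4gpu)

# Illustrative data: predict Sepal.Length from the other numeric columns
x <- as.matrix(iris[, 2:4])
y <- iris$Sepal.Length

# Construct, train, and predict; constructor arguments mirror Usage above
model <- h2o4gpu.gradient_boosting_regressor(
  n_estimators = 100L,  # number of boosting stages
  learning_rate = 0.1,  # shrinkage applied to each tree's contribution
  max_depth = 3L        # depth of each individual regression tree
) %>% fit(x, y)

preds <- model %>% predict(x)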
Arguments
loss
loss function to be optimized. 'ls' refers to least squares regression. 'lad' (least absolute deviation) is a highly robust loss function solely based on order information of the input variables. 'huber' is a combination of the two. 'quantile' allows quantile regression (use alpha to specify the quantile).
learning_rate
learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.
n_estimators
The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.
subsample
The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting.
criterion
The function to measure the quality of a split. Supported criteria are "friedman_mse" for the mean squared error with improvement score by Friedman, "mse" for mean squared error, and "mae" for the mean absolute error. The default value of "friedman_mse" is generally the best as it can provide a better approximation in some cases.
min_samples_split
The minimum number of samples required to split an internal node: if int, then consider min_samples_split as the minimum number; if float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.
min_samples_leaf
The minimum number of samples required to be at a leaf node: if int, then consider min_samples_leaf as the minimum number; if float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
min_weight_fraction_leaf
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_depth
Maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables.
min_impurity_decrease
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
min_impurity_split
Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.
init
An estimator object that is used to compute the initial predictions.
random_state
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if NULL, the random number generator is the RandomState instance used by np.random.
max_features
The number of features to consider when looking for the best split: if int, then consider max_features features at each split; if float, then max_features is a fraction and int(max_features * n_features) features are considered at each split; if 'auto' or NULL, then max_features = n_features; if 'sqrt', then max_features = sqrt(n_features); if 'log2', then max_features = log2(n_features).
alpha
The alpha-quantile of the huber loss function and the quantile loss function. Only used if loss='huber' or loss='quantile'.
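For example, a sketch of a quantile-regression configuration (estimating the 90th conditional percentile instead of the mean); the values are illustrative:

# Optimize the quantile loss for the 90th percentile
q90_model <- h2o4gpu.gradient_boosting_regressor(
  loss = "quantile",  # quantile loss instead of least squares
  alpha = 0.9         # the target quantile
)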
verbose
Enable verbose output. If 1 then it prints progress and performance once in a while (the more trees the lower the frequency). If greater than 1 then it prints progress and performance for every tree.
max_leaf_nodes
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If NULL, the number of leaf nodes is unlimited.
warm_start
When set to TRUE, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just erase the previous solution.
presort
Whether to presort the data to speed up the finding of best splits in fitting. Auto mode by default will use presorting on dense data and default to normal sorting on sparse data. Setting presort to TRUE on sparse data will raise an error.
colsample_bytree
Subsample ratio of columns when constructing each tree.
num_parallel_tree
Number of trees to grow per round.
tree_method
The tree construction algorithm used in XGBoost. The distributed and external memory versions only support the approximate algorithm. Choices: 'auto', 'exact', 'approx', 'hist', 'gpu_exact', 'gpu_hist'.
- 'auto': Use a heuristic to choose the faster one. For small to medium datasets, the exact greedy algorithm will be used; for very large datasets, the approximate algorithm will be chosen. Because the old behavior was to always use exact greedy on a single machine, the user will get a message when the approximate algorithm is chosen, to notify them of this choice.
- 'exact': Exact greedy algorithm.
- 'approx': Approximate greedy algorithm using sketching and histograms.
- 'hist': Fast histogram-optimized approximate greedy algorithm. It uses some performance improvements such as bin caching.
- 'gpu_exact': GPU implementation of the exact algorithm.
- 'gpu_hist': GPU implementation of the hist algorithm.
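As an illustration, a sketch of a CPU-only configuration, as opposed to the GPU defaults shown in Usage; the parameter values are assumptions, not recommendations:

# Histogram algorithm on the host CPU, with GPU prediction disabled
cpu_model <- h2o4gpu.gradient_boosting_regressor(
  tree_method = "hist",         # CPU histogram algorithm
  predictor = "cpu_predictor",  # multicore CPU prediction
  n_gpus = 0L                   # reserve no GPUs
)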
n_gpus
Number of GPUs to use in the GradientBoostingRegressor solver. Default is -1, which uses all available GPUs.
predictor
The type of predictor algorithm to use. Provides the same results but allows the use of GPU or CPU.
- 'cpu_predictor': Multicore CPU prediction algorithm.
- 'gpu_predictor': Prediction using GPU. Default for the 'gpu_exact' and 'gpu_hist' tree methods.
objective
Specify the learning task and the corresponding learning objective, or a custom objective function to be used. Note: a custom objective function can be provided for the objective parameter; in this case, it should have the signature objective(y_true, y_pred) -> grad, hess.
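A minimal sketch of such a custom objective, assuming the wrapper forwards an R function with this signature to XGBoost unchanged; the squared-error gradient and hessian are purely illustrative:

# Illustrative squared-error objective returning gradient and hessian
squared_error_objective <- function(y_true, y_pred) {
  grad <- y_pred - y_true         # d/dy_pred of 0.5 * (y_pred - y_true)^2
  hess <- rep(1, length(y_true))  # second derivative is constant
  list(grad, hess)
}

custom_model <- h2o4gpu.gradient_boosting_regressor(objective = squared_error_objective)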
booster
Specify which booster to use: gbtree, gblinear or dart.
n_jobs
Number of parallel threads used to run XGBoost.
gamma
Minimum loss reduction required to make a further partition on a leaf node of the tree.
min_child_weight
Minimum sum of instance weight (hessian) needed in a child.
max_delta_step
Maximum delta step we allow each tree's weight estimation to be.
colsample_bylevel
Subsample ratio of columns for each split, in each level.
reg_alpha
L1 regularization term on weights.
reg_lambda
L2 regularization term on weights.
scale_pos_weight
Balancing of positive and negative weights.
base_score
The initial prediction score of all instances, global bias.
missing
Value in the data which needs to be treated as a missing value. If NULL, defaults to np.nan.
backend
Which backend to use. Options are 'auto', 'sklearn', 'h2o4gpu'. The backend actually used is saved as an attribute on the model.
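For example, a sketch of forcing the scikit-learn implementation, e.g. on a machine without a compatible GPU; this assumes the constructor accepts the same arguments for either backend:

# Fall back to the scikit-learn implementation explicitly
sk_model <- h2o4gpu.gradient_boosting_regressor(backend = "sklearn")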
...
Other parameters for the XGBoost object. Full documentation of parameters can be found here: https://github.com/dmlc/xgboost/blob/master/doc/parameter.md