cuda_ml_logistic_reg {cuda.ml} | R Documentation |

## Train a logistic regression model.

### Description

Train a logistic regression model using Quasi-Newton (QN) algorithms (i.e., Orthant-Wise Limited Memory Quasi-Newton (OWL-QN) if there is L1 regularization, Limited Memory BFGS (L-BFGS) otherwise).
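The solver-selection rule above can be sketched in plain R. This is an illustrative helper only; `select_solver()` is a hypothetical function, not part of the cuda.ml API:

```r
# Illustrative sketch of the solver-selection rule described above.
# `select_solver()` is a hypothetical helper, not a cuda.ml function.
select_solver <- function(penalty = c("l2", "l1", "elasticnet", "none"),
                          l1_ratio = NULL) {
  penalty <- match.arg(penalty)
  uses_l1 <- switch(
    penalty,
    none = FALSE,
    l2 = FALSE,
    l1 = TRUE,
    # For "elasticnet", the L1 component is active only when l1_ratio > 0.
    elasticnet = !is.null(l1_ratio) && l1_ratio > 0
  )
  if (uses_l1) "OWL-QN" else "L-BFGS"
}

select_solver("l2")                          # "L-BFGS"
select_solver("l1")                          # "OWL-QN"
select_solver("elasticnet", l1_ratio = 0.5)  # "OWL-QN"
select_solver("elasticnet", l1_ratio = 0)    # "L-BFGS"
```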

### Usage

```
cuda_ml_logistic_reg(x, ...)

## Default S3 method:
cuda_ml_logistic_reg(x, ...)

## S3 method for class 'data.frame'
cuda_ml_logistic_reg(
  x,
  y,
  fit_intercept = TRUE,
  penalty = c("l2", "l1", "elasticnet", "none"),
  tol = 1e-04,
  C = 1,
  class_weight = NULL,
  sample_weight = NULL,
  max_iters = 1000L,
  linesearch_max_iters = 50L,
  l1_ratio = NULL,
  ...
)

## S3 method for class 'matrix'
cuda_ml_logistic_reg(
  x,
  y,
  fit_intercept = TRUE,
  penalty = c("l2", "l1", "elasticnet", "none"),
  tol = 1e-04,
  C = 1,
  class_weight = NULL,
  sample_weight = NULL,
  max_iters = 1000L,
  linesearch_max_iters = 50L,
  l1_ratio = NULL,
  ...
)

## S3 method for class 'formula'
cuda_ml_logistic_reg(
  formula,
  data,
  fit_intercept = TRUE,
  penalty = c("l2", "l1", "elasticnet", "none"),
  tol = 1e-04,
  C = 1,
  class_weight = NULL,
  sample_weight = NULL,
  max_iters = 1000L,
  linesearch_max_iters = 50L,
  l1_ratio = NULL,
  ...
)

## S3 method for class 'recipe'
cuda_ml_logistic_reg(
  x,
  data,
  fit_intercept = TRUE,
  penalty = c("l2", "l1", "elasticnet", "none"),
  tol = 1e-04,
  C = 1,
  class_weight = NULL,
  sample_weight = NULL,
  max_iters = 1000L,
  linesearch_max_iters = 50L,
  l1_ratio = NULL,
  ...
)
```

### Arguments

`x` |
Depending on the context: * A __data frame__ of predictors. * A __matrix__ of predictors. * A __recipe__ specifying a set of preprocessing steps created from [recipes::recipe()]. * A __formula__ specifying the predictors and the outcome. |

`...` |
Optional arguments; currently unused. |

`y` |
A numeric vector (for regression) or factor (for classification) of desired responses. |

`fit_intercept` |
If TRUE, then the model tries to correct for the global mean of the response variable. If FALSE, then the model expects data to be centered. Default: TRUE. |

`penalty` |
The penalty type, must be one of "none", "l1", "l2", "elasticnet". If "none" or "l2" is selected, then the L-BFGS solver will be used. If "l1" is selected, then the OWL-QN solver will be used. If "elasticnet" is selected, then OWL-QN will be used if l1_ratio > 0, otherwise L-BFGS will be used. Default: "l2". |

`tol` |
Tolerance for stopping criteria. Default: 1e-4. |

`C` |
Inverse of regularization strength; must be a positive float. Default: 1.0. |

`class_weight` |
If NULL, then each class has an equal weight of 1. If class_weight is set to "balanced", then class weights will be inversely proportional to class frequencies in the input data. Otherwise, class_weight must be a named numeric vector of weight values, with names being the class labels. Default: NULL. |

`sample_weight` |
Array of weights assigned to individual samples. If NULL, then each sample has an equal weight of 1. Default: NULL. |

`max_iters` |
Maximum number of solver iterations. Default: 1000L. |

`linesearch_max_iters` |
Max number of linesearch iterations per outer iteration used in the L-BFGS and OWL-QN solvers. Default: 50L. |

`l1_ratio` |
The Elastic-Net mixing parameter, must be within the range [0, 1]. Setting l1_ratio = 0 is equivalent to an L2 penalty, l1_ratio = 1 to an L1 penalty, and values in between mix the two. Effective only when penalty is "elasticnet". Default: NULL. |

`formula` |
A formula specifying the outcome terms on the left-hand side, and the predictor terms on the right-hand side. |

`data` |
When a __recipe__ or __formula__ is used, `data` is specified as a data frame containing the predictors and (if applicable) the outcome. |

### Examples

```
library(cuda.ml)
X <- scale(as.matrix(iris[names(iris) != "Species"]))
y <- iris$Species
model <- cuda_ml_logistic_reg(X, y, max_iters = 100)
predictions <- predict(model, X)
# NOTE: if we were only performing binary classification (e.g., by having
# `iris_data <- iris %>% mutate(Species = (Species == "setosa"))`), then the
# above would be conceptually equivalent to the following:
#
# iris_data <- iris %>% mutate(Species = (Species == "setosa"))
# model <- glm(
#   Species ~ ., data = iris_data, family = binomial(link = "logit"),
#   control = glm.control(epsilon = 1e-8, maxit = 100)
# )
#
# predict(model, iris_data, type = "response")
```
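Along the same lines, a sketch of the formula interface with an elastic-net penalty. Running it requires a CUDA-capable GPU and an installed cuda.ml; the parameter values below are illustrative, not recommendations:

```
library(cuda.ml)

# Elastic-net-penalized logistic regression via the formula interface.
# With penalty = "elasticnet" and l1_ratio > 0, the OWL-QN solver is used.
model <- cuda_ml_logistic_reg(
  Species ~ ., data = iris,
  penalty = "elasticnet", C = 0.5, l1_ratio = 0.2,
  max_iters = 200
)
predictions <- predict(model, iris)
```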

[Package *cuda.ml* version 0.3.2 Index]