EmpiricalRiskMinimizationDP.KST {DPpack}
Privacy-preserving Empirical Risk Minimization for Regression
Description
This class implements differentially private empirical risk
minimization using the objective perturbation technique
(Kifer et al. 2012). It is intended to be a framework for
building more specific models via inheritance. See
LinearRegressionDP
for an example of this type of structure.
Details
To use this class for empirical risk minimization, first use the new
method to construct an object of this class with the desired
function values and hyperparameters. After constructing the object, the
fit method can be applied with a provided dataset and data bounds to
fit the model. In fitting, the model stores a vector of coefficients
coeff that satisfies differential privacy. These can be released
directly, or used in conjunction with the predict method to privately
predict the outcomes of new data points. A minimal sketch of this
workflow is given below.
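A minimal sketch of this workflow, assuming function values and hyperparameters such as those defined in the Examples section below:

# Construct the object, fit privately, then release or predict
model <- EmpiricalRiskMinimizationDP.KST$new(mapXy, loss, 'l2', eps, delta,
                                             domain, zeta, lambda, gamma)
model$fit(X, y, upper.bounds, lower.bounds)
model$coeff                   # differentially private coefficients
y.hat <- model$predict(Xtest) # private predictions for new data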
Note that in order to guarantee differential privacy for the empirical risk
minimization model, certain constraints must be satisfied for the values
used to construct the object, as well as for the data used to fit.
Specifically, the following constraints must be met. Let l represent
the loss function for an individual dataset row x and output value y, and
let L represent the average loss over all rows and output values. First,
L must be convex with a continuous Hessian. Second, the l2-norm of the
gradient of l must be bounded above by some constant zeta for all
possible input values in the domain. Third, for all possible inputs to
l, the Hessian of l must be of rank at most one and its
eigenvalues must be bounded above by some constant lambda. Fourth, the
regularizer must be convex. Finally, the provided domain of l must be
a closed convex subset of the set of all real-valued vectors of dimension p,
where p is the number of columns of X.

Note that because of this, a bias term cannot be included without
appropriate scaling/preprocessing of the dataset. To ensure privacy, the
add.bias argument in the fit and predict methods should only be utilized
in subclasses within this package where appropriate preprocessing is
implemented, not in this class.
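For intuition, the gradient-norm and Hessian-eigenvalue constraints can be checked numerically for a particular loss. The following sketch (an illustration only, not part of DPpack) samples points from the linear-regression setup used in the Examples below, where rows of X lie in [-1,1]^p, |y| <= p, and the squared l2-norm of coeff is at most p, and confirms that the per-row gradient norm stays below zeta = 2*p^(3/2) and the largest Hessian eigenvalue stays below lambda = p:

# Numerical sanity check of the zeta and lambda bounds for squared-error
# loss with a linear map (assumptions as stated above)
set.seed(1)
p <- 2
zeta <- 2*p^(3/2); lambda <- p
max.grad <- 0; max.eig <- 0
for (i in 1:1000) {
  x <- runif(p, -1, 1)                  # row of X in [-1,1]^p
  y <- runif(1, -p, p)                  # |y| <= p
  coeff <- runif(p, -1, 1)              # ensures sum(coeff^2) <= p
  grad <- as.numeric(crossprod(x, coeff) - y) * x  # gradient of (x'coeff - y)^2/2
  max.grad <- max(max.grad, sqrt(sum(grad^2)))
  max.eig <- max(max.eig, sum(x^2))     # Hessian x x' is rank one; top eigenvalue ||x||^2
}
max.grad <= zeta  # TRUE
max.eig <= lambda # TRUE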
Public fields
mapXy
Map function of the form mapXy(X, coeff) mapping input data matrix X and coefficient vector or matrix coeff to output labels y.

mapXy.gr
Function representing the gradient of the map function with respect to the values in coeff and of the form mapXy.gr(X, coeff), where X is a matrix and coeff is a matrix or numeric vector.

loss
Loss function of the form loss(y.hat, y), where y.hat and y are matrices.

loss.gr
Function representing the gradient of the loss function with respect to y.hat and of the form loss.gr(y.hat, y), where y.hat and y are matrices.

regularizer
Regularization function of the form regularizer(coeff), where coeff is a vector or matrix.

regularizer.gr
Function representing the gradient of the regularization function with respect to coeff and of the form regularizer.gr(coeff).

gamma
Nonnegative real number representing the regularization constant.

eps
Positive real number defining the epsilon privacy budget. If set to Inf, runs algorithm without differential privacy.

delta
Nonnegative real number defining the delta privacy parameter. If 0, reduces to pure eps-DP.

domain
List of constraint and jacobian functions representing the constraints on the search space for the objective perturbation algorithm used in Kifer et al. (2012).

zeta
Positive real number denoting the upper bound on the l2-norm of the gradient of the loss function, as required to ensure differential privacy.

lambda
Positive real number corresponding to the upper bound on the eigenvalues of the Hessian of the loss function for all possible inputs.

coeff
Numeric vector of coefficients for the model.
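For concreteness, field functions with the shapes described above might look like the following linear-regression versions (mirroring the Examples below; illustrative sketches, not definitions exported by the package):

mapXy <- function(X, coeff) X %*% coeff              # column matrix of predicted labels
mapXy.gr <- function(X, coeff) t(X)                  # ith row: gradient w.r.t. ith coefficient
loss <- function(y.hat, y) (y.hat - y)^2 / 2         # elementwise matrix of losses
loss.gr <- function(y.hat, y) y.hat - y              # elementwise gradient
regularizer <- function(coeff) coeff %*% coeff / 2   # l2 penalty
regularizer.gr <- function(coeff) coeff              # gradient of the l2 penalty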
Methods
Public methods
Method new()
Create a new EmpiricalRiskMinimizationDP.KST
object.
Usage
EmpiricalRiskMinimizationDP.KST$new(
  mapXy,
  loss,
  regularizer,
  eps,
  delta,
  domain,
  zeta,
  lambda,
  gamma,
  mapXy.gr = NULL,
  loss.gr = NULL,
  regularizer.gr = NULL
)
Arguments
mapXy
Map function of the form mapXy(X, coeff) mapping input data matrix X and coefficient vector or matrix coeff to output labels y. Should return a column matrix of predicted labels for each row of X. See mapXy.linear for an example.

loss
Loss function of the form loss(y.hat, y), where y.hat and y are matrices. Should be defined such that it returns a matrix of loss values for each element of y.hat and y. See loss.squared.error for an example. This function must be convex, and the l2-norm of its gradient must be bounded above by some constant zeta for all possible inputs within the given domain. Additionally, for all possible inputs within the given domain, the Hessian of the loss function must be of rank at most one and its eigenvalues must be bounded above by some constant lambda.

regularizer
String or regularization function. If a string, must be 'l2', indicating to use l2 regularization. If a function, must have form regularizer(coeff), where coeff is a vector or matrix, and return the value of the regularizer at coeff. See regularizer.l2 for an example. Additionally, in order to ensure differential privacy, the function must be convex.

eps
Positive real number defining the epsilon privacy budget. If set to Inf, runs algorithm without differential privacy.

delta
Nonnegative real number defining the delta privacy parameter. If 0, reduces to pure eps-DP.

domain
List of functions representing the constraints on the search space for the objective perturbation algorithm. Must contain two functions, labeled "constraints" and "jacobian", respectively. The "constraints" function accepts a vector of coefficients from the search space and returns a value that is nonpositive if and only if the input coefficient vector is within the constrained search space. The "jacobian" function also accepts a vector of coefficients and returns the Jacobian of the constraint function. For example, in linear regression, the square of the l2-norm of the coefficient vector theta is assumed to be bounded above by p, where p is the length of theta (Kifer et al. 2012). So, domain could be defined as:
domain <- list("constraints"=function(coeff) coeff%*%coeff-length(coeff),
               "jacobian"=function(coeff) 2*coeff)

zeta
Positive real number denoting the upper bound on the l2-norm of the gradient of the loss function, as required to ensure differential privacy.

lambda
Positive real number corresponding to the upper bound on the eigenvalues of the Hessian of the loss function for all possible inputs.

gamma
Nonnegative real number representing the regularization constant.

mapXy.gr
Optional function representing the gradient of the map function with respect to the values in coeff. If given, must be of the form mapXy.gr(X, coeff), where X is a matrix and coeff is a matrix or numeric vector. Should be defined such that the ith row of the output represents the gradient with respect to the ith coefficient. See mapXy.gr.linear for an example. If not given, non-gradient based optimization methods are used to compute the coefficient values in fitting the model.

loss.gr
Optional function representing the gradient of the loss function with respect to y.hat and of the form loss.gr(y.hat, y), where y.hat and y are matrices. Should be defined such that the ith row of the output represents the gradient of the loss function at the ith set of input values. See loss.gr.squared.error for an example. If not given, non-gradient based optimization methods are used to compute the coefficient values in fitting the model.

regularizer.gr
Optional function representing the gradient of the regularization function with respect to coeff and of the form regularizer.gr(coeff). Should return a vector. See regularizer.gr.l2 for an example. If regularizer is given as a string, this value is ignored. If not given and regularizer is a function, non-gradient based optimization methods are used to compute the coefficient values in fitting the model.
Returns
A new EmpiricalRiskMinimizationDP.KST object.
Method fit()
Fit the differentially private empirical risk minimization
model. The function runs the objective perturbation algorithm
(Kifer et al. 2012) to generate an objective function. A
numerical optimization method is then run to find optimal coefficients
for fitting the model given the training data and hyperparameters. The
nloptr function is used. If mapXy.gr, loss.gr, and
regularizer.gr are all given in the construction of the object, the
gradient of the objective function and the Jacobian of the constraint
function are utilized for the algorithm, and the NLOPT_LD_MMA method is
used. If one or more of these gradient functions are not given, the
NLOPT_LN_COBYLA method is used. The resulting privacy-preserving
coefficients are stored in coeff. A simplified sketch of this
optimization call is given below.
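For illustration, a simplified sketch of the kind of nloptr call described above. Here obj and obj.gr are hypothetical stand-ins for the perturbed objective function and its gradient, which are constructed internally by the objective perturbation algorithm; domain and p are as in the Examples below:

library(nloptr)
# Hypothetical sketch: 'obj' and 'obj.gr' stand in for the internally
# built perturbed objective and its gradient
res <- nloptr(
  x0 = rep(0, p),                      # initial coefficient vector
  eval_f = function(coeff) list(objective = obj(coeff),
                                gradient = obj.gr(coeff)),
  eval_g_ineq = function(coeff) list(constraints = domain$constraints(coeff),
                                     jacobian = domain$jacobian(coeff)),
  opts = list(algorithm = "NLOPT_LD_MMA", xtol_rel = 1e-4, maxeval = 1000)
)
res$solution                           # privacy-preserving coefficients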
Usage
EmpiricalRiskMinimizationDP.KST$fit(
  X,
  y,
  upper.bounds,
  lower.bounds,
  add.bias = FALSE
)
Arguments
X
Dataframe of data to be fit.

y
Vector or matrix of true values for each row of X.

upper.bounds
Numeric vector of length ncol(X)+1 giving upper bounds on the values in each column of X and the values of y. The last value in the vector is assumed to be the upper bound on y, while the first ncol(X) values are assumed to be in the same order as the corresponding columns of X. Any value in the columns of X and in y larger than the corresponding upper bound is clipped at the bound.

lower.bounds
Numeric vector of length ncol(X)+1 giving lower bounds on the values in each column of X and the values of y. The last value in the vector is assumed to be the lower bound on y, while the first ncol(X) values are assumed to be in the same order as the corresponding columns of X. Any value in the columns of X and in y smaller than the corresponding lower bound is clipped at the bound; a sketch of this clipping convention is given after this list.

add.bias
Boolean indicating whether to add a bias term to X. Defaults to FALSE.
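The clipping convention, sketched for the single-predictor setup of the Examples below (an illustration only, not DPpack's internal code):

# Bounds pair with the columns of X first, then y
upper.bounds <- c(1, 2)    # upper bound on X[,1], then on y
lower.bounds <- c(-1, -2)  # lower bound on X[,1], then on y
X.clipped <- pmin(pmax(X[[1]], lower.bounds[1]), upper.bounds[1])
y.clipped <- pmin(pmax(y, lower.bounds[2]), upper.bounds[2])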
Method predict()
Predict y values for given X using the fitted coefficients.
Usage
EmpiricalRiskMinimizationDP.KST$predict(X, add.bias = FALSE)
Arguments
X
Dataframe of data on which to make predictions. Must be of same form as X used to fit coefficients.

add.bias
Boolean indicating whether to add a bias term to X. Defaults to FALSE. If add.bias was set to TRUE when fitting the coefficients, add.bias should be set to TRUE for predictions.
Returns
Matrix of predicted y values corresponding to each row of X.
Method clone()
The objects of this class are cloneable with this method.
Usage
EmpiricalRiskMinimizationDP.KST$clone(deep = FALSE)
Arguments
deep
Whether to make a deep clone.
References
Kifer D, Smith A, Thakurta A (2012). “Private Convex Empirical Risk Minimization and High-dimensional Regression.” In Mannor S, Srebro N, Williamson RC (eds.), Proceedings of the 25th Annual Conference on Learning Theory, volume 23 of Proceedings of Machine Learning Research, 25.1–25.40. https://proceedings.mlr.press/v23/kifer12.html.
Examples
# Build example dataset
n <- 500
X <- data.frame(X=seq(-1,1,length.out = n))
true.theta <- c(-.3,.5) # First element is bias term
p <- length(true.theta)
y <- true.theta[1] + as.matrix(X)%*%true.theta[2:p] + stats::rnorm(n=n,sd=.1)
# Construct object for linear regression
mapXy <- function(X, coeff) X%*%coeff
loss <- function(y.hat, y) (y.hat-y)^2/2
regularizer <- 'l2' # Alternatively, function(coeff) coeff%*%coeff/2
eps <- 1
delta <- 1
domain <- list("constraints"=function(coeff) coeff%*%coeff-length(coeff),
"jacobian"=function(coeff) 2*coeff)
# Set p to be the number of predictors desired including intercept term (length of coeff)
zeta <- 2*p^(3/2) # Proper bound for linear regression
lambda <- p # Proper bound for linear regression
gamma <- 1
mapXy.gr <- function(X, coeff) t(X)
loss.gr <- function(y.hat, y) y.hat-y
regularizer.gr <- function(coeff) coeff
ermdp <- EmpiricalRiskMinimizationDP.KST$new(mapXy, loss, 'l2', eps, delta,
domain, zeta, lambda,
gamma, mapXy.gr, loss.gr,
regularizer.gr)
# Fit with data
# We must assume y is a matrix with values between -p and p (-2 and 2
# for this example)
upper.bounds <- c(1, 2) # Bounds for X and y
lower.bounds <- c(-1,-2) # Bounds for X and y
ermdp$fit(X, y, upper.bounds, lower.bounds, add.bias=TRUE)
ermdp$coeff # Gets private coefficients
# Predict new data points
# Build a test dataset
Xtest <- data.frame(X=c(-.5, -.25, .1, .4))
predicted.y <- ermdp$predict(Xtest, add.bias=TRUE)
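# Optional extension (not from the package docs): compare the private
# predictions with the noiseless generating function
true.y <- true.theta[1] + as.matrix(Xtest)%*%true.theta[2:p]
cbind(predicted.y, true.y)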