el2.test.wtm {emplik2}        R Documentation
Computes maximum-likelihood probability jumps for multiple mean-type hypotheses, based on two independent uncensored samples
Description
This function computes the maximum-likelihood probability jumps for multiple mean-type hypotheses, based on two samples that are independent, uncensored, and weighted. The target function for the maximization is the constrained log(empirical likelihood) which can be expressed as,
\sum_{dx_i=1} wx_i \log{\mu_i} + \sum_{dy_j=1} wy_j \log{\nu_j} - \eta \left( 1 - \sum_{dx_i=1} \mu_i \right) - \delta \left( 1 - \sum_{dy_j=1} \nu_j \right) - \lambda \left( \mu^T H_1 \nu, \ldots, \mu^T H_p \nu \right)^T
where the variables are defined as follows:
x is a vector of uncensored data for the first sample.
y is a vector of uncensored data for the second sample.
wx is a vector of estimated weights for the first sample.
wy is a vector of estimated weights for the second sample.
\mu is a vector of estimated probability jumps for the first sample.
\nu is a vector of estimated probability jumps for the second sample.
H_k = [ g_k(x_i, y_j) - mean_k ], k = 1, \ldots, p, where g_k(x,y) is a user-chosen function.
H = [H_1, \ldots, H_p] (used as an argument in the el2.cen.EMm function, which calls el2.test.wtm).
mean is a vector of length p of hypothesized means, such that mean_k is the hypothesized value of E(g_k(x,y)).
E indicates "expected value".
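For illustration only, here is a minimal sketch (not part of the package) of how an H_k matrix could be formed in R, assuming a toy g_1(x,y) = I[x > y], hypothesized mean 0.5, and made-up data:

x <- c(3, 7, 9)                          # toy first sample (hypothetical values)
y <- c(2, 5, 8)                          # toy second sample (hypothetical values)
g1 <- function(x, y) as.numeric(x > y)   # user-chosen function g_1
mean1 <- 0.5                             # hypothesized value of E(g_1(x,y))
H1 <- outer(x, y, g1) - mean1            # H_1[i,j] = g_1(x_i, y_j) - mean_1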
Usage
el2.test.wtm(xd1,yd1,wxd1new, wyd1new, muvec, nuvec, Hu, Hmu, Hnu, p, mean, maxit=15)
Arguments
xd1: a vector of uncensored data for the first sample
yd1: a vector of uncensored data for the second sample
wxd1new: a vector of estimated weights for the first sample
wyd1new: a vector of estimated weights for the second sample
muvec: a vector of estimated probability jumps for the first sample
nuvec: a vector of estimated probability jumps for the second sample
Hu: the matrix [H_1, \ldots, H_p] described above, whose calculation is shown in the example below
Hmu: a matrix, whose calculation is shown in the example below
Hnu: a matrix, whose calculation is shown in the example below
p: the number of hypotheses
mean: a vector of hypothesized values of E(g_k(x,y)), k = 1, \ldots, p
maxit: a positive integer used to control the maximum number of iterations in the Newton-Raphson algorithm; default is 15
Details
This function is called by el2.cen.EMm. It is listed here because the user may find it useful elsewhere.

The value of mean_k should be chosen between the maximum and minimum values of g_k(xd1_i, yd1_j); otherwise there may be no distributions for xd1 and yd1 that will satisfy the mean-type hypothesis. If mean_k is inside this interval, but the convergence is still not satisfactory, then the value of mean_k should be moved closer to the NPMLE for E(g(xd1,yd1)). (The NPMLE itself should always be a feasible value for mean_k.) The calculations for this function are derived in Owen (2001).
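A rough feasibility check along these lines can be sketched as follows (illustrative only, not part of the package), using the objects defined in the Examples below and an example g_k:

gk <- function(x, y) as.numeric(x > y)   # example g_k, as in the Examples section
Gk <- outer(xd1, yd1, gk)                # values g_k(xd1_i, yd1_j)
range(Gk)                                # mean_k should lie inside this range
sum(Gk * outer(muvec, nuvec))            # plug-in estimate of E(g_k(xd1,yd1)), a feasible value for mean_k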
Value
el2.test.wtm returns a list of values as follows:

constmat: a matrix of row vectors, where the …
lam: the vector of Lagrangian multipliers used in the calculations
muvec1: the vector of probability jumps for the first sample that maximize the weighted empirical likelihood
nuvec1: the vector of probability jumps for the second sample that maximize the weighted empirical likelihood
mean: the vector of hypothesized means
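For example, the returned list can be captured and its components inspected (a usage sketch mirroring the call in the Examples below):

out <- el2.test.wtm(xd1, yd1, wxd1new, wyd1new, muvec, nuvec, Hu, Hmu, Hnu, p, mean, maxit=10)
out$lam      # Lagrangian multipliers
out$muvec1   # maximizing probability jumps for the first sample
out$nuvec1   # maximizing probability jumps for the second sample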
Author(s)
William H. Barton <bbarton@lexmark.com>
References
Owen, A.B. (2001). Empirical Likelihood. Chapman and Hall/CRC, Boca Raton, pp. 223-227.
Examples
#Ho1: P(X>Y) = 0.5
#Ho2: P(X>1060) = 0.5
#g1(x,y) = I[x > y]
#g2(x,y) = I[x > 1060]
mean<-c(0.5,0.5)
p<-2
xd1<-c(10,85,209,273,279,324,391,566,852,881,895,954,1101,1393,1669,1823,1941)
nx1=length(xd1)
yd1<-c(21,38,39,51,77,185,524,610,612,677,798,899,946,1010,1074,1147,1154,1329,1484,1602,1952)
ny1=length(yd1)
wxd1new<-c(2.267983, 1.123600, 1.121683, 1.121683, 1.121683, 1.121683, 1.121683,
1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.261740, 2.912753,
2.912753, 2.912753)
muvec<-c(0.08835785, 0.04075290, 0.04012084, 0.04012084, 0.04012084, 0.04012084,
0.04012084, 0.03538020, 0.03389263, 0.03389263, 0.03389263, 0.03322693, 0.04901516,
0.05902008, 0.13065491, 0.13065491, 0.13065491)
wyd1new<-c(1.431653, 1.431653, 1.431653, 1.431653, 1.431653, 1.438453, 1.079955, 1.080832,
1.080832, 1.080832, 1.080832, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000,
1.222883, 1.227865, 1.739636, 5.809616)
nuvec<-c(0.04249966, 0.04249966, 0.04249966, 0.04249966, 0.04249966, 0.04316922, 0.03425722,
0.03463312, 0.03463312, 0.03463312, 0.03463312, 0.03300598, 0.03300598, 0.03333333,
0.03333333, 0.03382827, 0.03382827, 0.04136800, 0.04229270, 0.05992020, 0.22762676)
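# Build H1u and H2u, where Hku[i,j] = g_k(xd1[i], yd1[j]) (centered by mean_k below)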
H1u<-matrix(NA,nrow=nx1,ncol=ny1)
H2u<-matrix(NA,nrow=nx1,ncol=ny1)
for (i in 1:nx1) {
  for (j in 1:ny1) {
    H1u[i,j] <- (xd1[i] > yd1[j])
    H2u[i,j] <- (xd1[i] > 1060)
  }
}
Hu=matrix(c(H1u,H2u),nrow=nx1,ncol=p*ny1)
for (k in 1:p) {
  M1 <- matrix(mean[k], nrow=nx1, ncol=ny1)
  Hu[, ((k-1)*ny1+1):(k*ny1)] <- Hu[, ((k-1)*ny1+1):(k*ny1)] - M1
}
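# Hmu and Hnu rearrange the entries of Hu into the row layouts passed to el2.test.wtm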
Hmu <- matrix(NA,nrow=p, ncol=ny1*nx1)
Hnu <- matrix(NA,nrow=p, ncol=ny1*nx1)
for (i in 1:p) {
  for (k in 1:nx1) {
    Hmu[i, ((k-1)*ny1+1):(k*ny1)] <- Hu[k, ((i-1)*ny1+1):(i*ny1)]
  }
}
for (i in 1:p) {
  for (k in 1:ny1) {
    Hnu[i, ((k-1)*nx1+1):(k*nx1)] <- Hu[(1:nx1), (i-1)*ny1+k]
  }
}
el2.test.wtm(xd1,yd1,wxd1new, wyd1new, muvec, nuvec, Hu, Hmu,
Hnu, p, mean, maxit=10)
# $muvec1
# [1] 0.08835789 0.04075290 0.04012083 0.04012083 0.04012083 0.04012083 0.04012083
# [8] 0.03538021 0.03389264 0.03389264 0.03389264 0.03322693 0.04901513 0.05902002
# [15] 0.13065495 0.13065495 0.13065495
# $nuvec1
# [1] 0.04249967 0.04249967 0.04249967 0.04249967 0.04249967 0.04316920 0.03425722
# [8] 0.03463310 0.03463310 0.03463310 0.03463310 0.03300597 0.03300597 0.03333333
# [15] 0.03333333 0.03382827 0.03382827 0.04136801 0.04229269 0.05992018 0.22762677
# $lam
# [,1] [,2]
# [1,] 8.9549 -10.29119