Shrinkage Estimation Of P(Y<X)

DOI : 10.17577/IJERTV2IS60911

Citation: B.N.Pandey, Nidhi Dwivedi, 2013, Shrinkage Estimation Of P(Y<X), IJERT, Volume 02, Issue 06 (June 2013)
  • Open Access
  • Authors : B.N.Pandey, Nidhi Dwivedi
  • Paper ID : IJERTV2IS60911
  • Volume & Issue : Volume 02, Issue 06 (June 2013)
  • Published (First Online): 24-06-2013
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: Creative Commons License This work is licensed under a Creative Commons Attribution 4.0 International License

Shrinkage Estimation Of P(Y<X)

B. N. Pandey and Nidhi Dwivedi*

Department of Statistics, Banaras Hindu University, India

    Abstract

We consider the problem of estimating R = P(Y<X), where X and Y have independent Weibull distributions with a common shape parameter θ but with different scale parameters λ1 and λ2 respectively. Assuming that there is a prior guess or estimate R0, we develop various shrinkage estimators of R that incorporate this prior information. The performance of the new estimators is investigated and compared with the maximum likelihood estimator using Monte Carlo methods. It is found that some of these estimators are very successful in taking advantage of the prior estimate available. Recommendations concerning the use of these estimators are presented.

1. The problem of making inference about R = P(Y<X) has received considerable attention in the literature. This problem arises naturally in the context of the mechanical reliability of a system with strength X and stress Y: the system fails any time its strength is exceeded by the stress applied to it. Another interpretation of R is that it measures the effect of the treatment when X is the response for a control group and Y is the response for the treatment group. Various versions of this problem have been discussed in the literature: Enis and Geisser (1971) discussed Bayesian estimation of R when X and Y are exponential. Awad et al. (1981) proposed three estimators of R when X and Y have a bivariate exponential distribution. Tong (1974) derived the MVUE of R when X and Y are exponential. Johnson (1975) gave a correction to the results in Tong (1974). Some other aspects of inference about R are given in AL-Hussaini et al. (1997). In some applications, an experimenter often possesses some knowledge of the experimental conditions based on the behaviour of the system under consideration, or from past experience or some extraneous source, and is thus in a position to give an educated guess or an initial estimate of the parameter of interest. Given a prior estimate R0 of R, we are looking for an estimator that incorporates this information. Such estimators are called shrinkage estimators, as introduced by Thompson (1968). Baklizi and Abu-Dayyeh (2003) discussed different shrinkage estimators of R when X and Y are exponential.

  In this article, we propose some shrinkage estimators of R when X and Y follow the Weibull distribution, in Sec. 2. A Monte Carlo study to investigate the behaviour of these estimators is described in Sec. 3. Results and conclusions are given in the final section.

2. In this study, X and Y have independent Weibull distributions with a common known shape parameter θ but with different scale parameters λ1 and λ2 respectively, that is,

  f(x; λ1, θ) = (θ/λ1) x^(θ−1) exp(−x^θ/λ1), x > 0;

  f(y; λ2, θ) = (θ/λ2) y^(θ−1) exp(−y^θ/λ2), y > 0.

  Let X1, . . ., Xn1 be a random sample for X and Y1, . . ., Yn2 be a random sample for Y. The parameter we want to estimate is

  R = P[Y < X] = λ1/(λ1 + λ2).

  The maximum likelihood estimator of R can be shown to be R̂ = λ̂1/(λ̂1 + λ̂2), where λ̂1 = Σ_{i=1}^{n1} Xi^θ / n1 and λ̂2 = Σ_{j=1}^{n2} Yj^θ / n2. We now develop several shrinkage estimators of R that incorporate the experimenter's guess R0. The suggested estimators are of the form R̃ = cR̂ + (1 − c)R0, 0 ≤ c ≤ 1. We determine the value of c in the following ways.

  1. Shrinkage towards a Pre-specified R

    Here we are looking for the value c1 in the estimator R̃1 = c1R̂ + (1 − c1)R0 that minimizes its mean square error

    MSE1 = E(R̃1 − R)^2 = E[(c1R̂ + (1 − c1)R0) − R]^2.

    The value of c1 that minimizes this MSE can be shown to be

    c1 = [E(R̂) − R0][R − R0] / [E(R̂^2) − 2R0 E(R̂) + R0^2],

    subject to 0 ≤ c1 ≤ 1. However, this value of c1 depends on the unknown parameter R. Substituting R̂ instead of R we get

    ĉ1 = [E(R̂) − R0][R̂ − R0] / [E(R̂^2) − 2R0 E(R̂) + R0^2].

    Hence, our shrinkage estimator is R̃1 = ĉ1R̂ + (1 − ĉ1)R0.

    We now obtain approximate values of E(R̂) and Var(R̂). Notice that R̂ = λ̂1/(λ̂1 + λ̂2) = 1/(1 + λ̂2/λ̂1), and hence λ̂2/λ̂1 = 1/R̂ − 1. Thus (λ1/λ2)(λ̂2/λ̂1) = (λ1/λ2)[1/R̂ − 1]. It is shown in Sec. 2.2 that (λ1 λ̂2)/(λ2 λ̂1) ~ F(2n2, 2n1). Following Lindley (1969) and Baklizi and Abu-Dayyeh (2003), we get approximately

    E(R̂) ≈ (1 + λ2/λ1)^(−1) + (λ2/λ1)[λ2/(n2 λ1) − 1/n1](1 + λ2/λ1)^(−3),

    Var(R̂) ≈ δ (λ2/λ1)^2 (1 + λ2/λ1)^(−4), where δ = 1/n1 + 1/n2;

    in these formulas λ1 and λ2 are further replaced by λ̂1 and λ̂2 respectively for numerical computation.

  2. Shrinkage Using the p-value of the LRT

    For testing H0 : R = R0 vs. H1 : R ≠ R0, the likelihood ratio test is of the form: reject H0 when λ̂2/λ̂1 < k1 or λ̂2/λ̂1 > k2. This follows by noticing that H0 : R = R0 vs. H1 : R ≠ R0 is equivalent to H0 : λ1 = R0 λ2/(1 − R0) vs. H1 : λ1 ≠ R0 λ2/(1 − R0). The MLEs of λ1 and λ2 are λ̂1 and λ̂2 respectively, while the restricted MLEs of λ1 and λ2 under H0 are given by

    λ̃1 = [n1 λ̂1 + (R0/(1 − R0)) n2 λ̂2]/(n1 + n2) and λ̃2 = [((1 − R0)/R0) n1 λ̂1 + n2 λ̂2]/(n1 + n2),

    respectively. Application of the likelihood ratio criterion leads directly to the result. Notice that 2n1 λ̂1/λ1 ~ χ^2(2n1) and 2n2 λ̂2/λ2 ~ χ^2(2n2); therefore

    [(2n2 λ̂2/λ2)/(2n2)] / [(2n1 λ̂1/λ1)/(2n1)] = (λ1 λ̂2)/(λ2 λ̂1) ~ F(2n2, 2n1).

    Under H0, W = (R0/(1 − R0))(λ̂2/λ̂1) ~ F(2n2, 2n1). The p-value for this test is

    p = 2 min[P(W > w | H0), P(W < w | H0)] = 2 min[1 − F(w), F(w)],

    where w is the observed value of the test statistic W and F is the distribution function of W under H0. The p-value of this test indicates how strongly H0 is supported by the data: a large p-value indicates that R is close to the prior estimate R0 (Tse and Tso, 1996). Thus we use this p-value to form the shrinkage estimator R̃2 = c2R̂ + (1 − c2)R0, where (1 − c2) is the p-value of the test.
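As a minimal sketch, the estimator R̃1 above can be computed in a few lines of Python. The plug-in of λ̂1, λ̂2 for λ1, λ2 follows the paper's prescription, but the function name shrinkage_R1 and the code itself are our illustrative choices, not the authors' implementation:

```python
def shrinkage_R1(xs, ys, theta, R0):
    """Sketch of R~1 = c1*R^ + (1 - c1)*R0 for Weibull samples with known shape.

    xs, ys: samples of X and Y; R0: prior guess of R = P(Y < X).
    Returns (R~1, R^) where R^ is the MLE of R.
    """
    n1, n2 = len(xs), len(ys)
    lam1 = sum(x**theta for x in xs) / n1          # MLE of scale lambda1
    lam2 = sum(y**theta for y in ys) / n2          # MLE of scale lambda2
    R_hat = lam1 / (lam1 + lam2)                   # MLE of R
    rho = lam2 / lam1                              # plug-in for lambda2/lambda1
    # Approximate moments of R_hat, with the lambdas replaced by their MLEs:
    ER = (1 + rho)**-1 + rho * (rho / n2 - 1 / n1) * (1 + rho)**-3
    VR = (1 / n1 + 1 / n2) * rho**2 * (1 + rho)**-4
    ER2 = VR + ER**2                               # E(R_hat^2) = Var + mean^2
    c1 = (ER - R0) * (R_hat - R0) / (ER2 - 2 * R0 * ER + R0**2)
    c1 = min(max(c1, 0.0), 1.0)                    # enforce 0 <= c1 <= 1
    return c1 * R_hat + (1 - c1) * R0, R_hat
```

Because c1 is clipped to [0, 1], the returned estimate always lies between the prior guess R0 and the MLE R̂.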

3. A simulation study is conducted to investigate the performance of the estimators R̃1 and R̃2. The setup of our simulations is as follows.

n1: number of X observations; taken to be 10 and 30

n2: number of Y observations; taken to be 10 and 30

R: the true value of R = P[Y<X]; taken to be 0.5, 0.6, and 0.8

R0: the initial estimate of R; taken to be

0.3, 0.4, 0.5, 0.6, 0.7 when R = 0.5

0.4, 0.5, 0.6, 0.7, 0.8 when R = 0.6

0.6, 0.7, 0.8, 0.85, 0.9 when R = 0.8

Fixing θ = 2, for each combination of n1, n2, R, and R0, 1000 samples were generated for X taking λ1 = 2 and for Y taking λ2 = λ1(1/R − 1), so that λ1/(λ1 + λ2) equals the desired R. The estimators are calculated and the efficiencies of the shrinkage estimators relative to the maximum likelihood estimator are obtained. The relative efficiency is calculated as the ratio of the mean square error of the MLE to the mean square error of the shrinkage estimator.
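The simulation design above can be sketched as follows using only the standard library. The sketch covers the estimator R̃1 only; the function names, the seed, and the inlined estimator are our illustrative assumptions. Note that random.weibullvariate(alpha, beta) satisfies E[X^beta] = alpha^beta, so alpha = λ^(1/θ) reproduces a scale of λ on the x^θ scale:

```python
import random

def R1_tilde(xs, ys, theta, R0):
    # Shrinkage estimator R~1 of Sec. 2.1; returns (R~1, MLE R^).
    n1, n2 = len(xs), len(ys)
    lam1 = sum(x**theta for x in xs) / n1
    lam2 = sum(y**theta for y in ys) / n2
    R_hat = lam1 / (lam1 + lam2)
    rho = lam2 / lam1
    ER = (1 + rho)**-1 + rho * (rho / n2 - 1 / n1) * (1 + rho)**-3
    VR = (1 / n1 + 1 / n2) * rho**2 * (1 + rho)**-4
    ER2 = VR + ER**2
    c1 = (ER - R0) * (R_hat - R0) / (ER2 - 2 * R0 * ER + R0**2)
    c1 = min(max(c1, 0.0), 1.0)
    return c1 * R_hat + (1 - c1) * R0, R_hat

def simulate_RE1(n1, n2, R, R0, theta=2.0, lam1=2.0, reps=1000, seed=1):
    """Relative efficiency MSE(MLE)/MSE(R~1) under the paper's design."""
    rng = random.Random(seed)
    lam2 = lam1 * (1.0 / R - 1.0)                  # so lam1/(lam1+lam2) = R
    a1, a2 = lam1**(1 / theta), lam2**(1 / theta)  # Weibull scale parameters
    mse_mle = mse_shr = 0.0
    for _ in range(reps):
        xs = [rng.weibullvariate(a1, theta) for _ in range(n1)]
        ys = [rng.weibullvariate(a2, theta) for _ in range(n2)]
        Rt, Rh = R1_tilde(xs, ys, theta, R0)
        mse_mle += (Rh - R)**2
        mse_shr += (Rt - R)**2
    return mse_mle / mse_shr
```

For n1 = n2 = 10 and R0 = R = 0.5, this sketch yields a relative efficiency well above 1, in qualitative agreement with Table 1.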

From the following tables it is observed that the shrinkage estimators are generally more efficient than the maximum likelihood estimator, and the estimator R̃1 performs better than the estimator R̃2. In terms of sample sizes, the shrinkage estimators seem to perform better for small sample sizes than for large ones. This is expected: as the sample size increases, the precision of the ML estimator increases, whereas the shrinkage estimators are still affected by the prior guess R0, which may be poorly made. Our simulations show that the shrinkage estimators are successful in taking advantage of the prior guess. The use of a shrinkage estimator is worth considering if the available sample size is small.
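For completeness, the p-value based estimator R̃2 of Sec. 2.2 can be sketched as below, assuming SciPy is available to evaluate the F(2n2, 2n1) distribution function; the helper name shrinkage_R2 is our own:

```python
from scipy.stats import f as f_dist

def shrinkage_R2(xs, ys, theta, R0):
    """R~2 = c2*R^ + (1 - c2)*R0, where (1 - c2) is the two-sided
    p-value of the LRT of H0: R = R0 (Sec. 2.2)."""
    n1, n2 = len(xs), len(ys)
    lam1 = sum(x**theta for x in xs) / n1
    lam2 = sum(y**theta for y in ys) / n2
    R_hat = lam1 / (lam1 + lam2)
    w = (R0 / (1.0 - R0)) * (lam2 / lam1)   # W ~ F(2*n2, 2*n1) under H0
    Fw = f_dist.cdf(w, 2 * n2, 2 * n1)
    p = 2.0 * min(Fw, 1.0 - Fw)             # two-sided p-value
    c2 = 1.0 - p
    return c2 * R_hat + (1.0 - c2) * R0
```

A large p-value (data consistent with R0) pushes c2 toward 0 and the estimate toward R0; a small p-value leaves the MLE almost unchanged.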

Table 1. Relative efficiencies of the estimators where R=0.5

n1   n2   R0     RE1       RE2
10   10   0.3     2.8879   1.0017
10   10   0.4     8.0272   1.0296
10   10   0.5    16.0253   1.1737
10   10   0.6     2.7767   1.1835
10   10   0.7     0.8433   1.0002
10   30   0.3     4.8745   0.9996
10   30   0.4    11.968    1.0055
10   30   0.5    16.2556   1.1617
10   30   0.6     2.5643   1.1229
10   30   0.7     0.7598   0.9604
30   10   0.3     2.4962   1.0013
30   10   0.4     5.3945   1.0211
30   10   0.5     8.0272   1.1209
30   10   0.6     2.5432   1.0967
30   10   0.7     0.8718   0.9980
30   30   0.3     3.0269   0.9986
30   30   0.4     6.2861   1.0009
30   30   0.5     9.1161   1.0259
30   30   0.6     2.3367   1.0043
30   30   0.7     0.7413   0.9595

Table 2. Relative efficiencies of the estimators where R=0.6

n1   n2   R0     RE1       RE2
10   10   0.4     2.8502   1.0022
10   10   0.5     6.7174   1.0243
10   10   0.6    10.5170   1.1651
10   10   0.7     2.2350   1.1770
10   10   0.8     0.6983   0.9446
10   30   0.4     4.4554   1.0000
10   30   0.5     9.8900   1.0012
10   30   0.6    11.4739   1.0734
10   30   0.7     2.0730   1.1108
10   30   0.8     0.6103   0.9271
30   10   0.4     2.2647   0.9980
30   10   0.5     4.1654   1.0328
30   10   0.6     5.6120   1.1319
30   10   0.7     2.0312   1.1103
30   10   0.8     0.6928   0.9445
30   30   0.4     2.6305   0.9988
30   30   0.5     4.8015   1.0080
30   30   0.6     6.5371   1.0057
30   30   0.7     1.9830   1.0249
30   30   0.8     0.6145   0.8944

Table 3. Relative efficiencies of the estimators where R=0.8

n1   n2   R0     RE1       RE2
10   10   0.6     1.7065   1.0008
10   10   0.7     3.2715   1.0159
10   10   0.8     7.2788   1.1690
10   10   0.85    2.5709   1.1778
10   10   0.9     0.8517   1.0037
10   30   0.6     3.1512   0.9978
10   30   0.7     4.7595   1.0002
10   30   0.8     7.2358   1.0996
10   30   0.85    2.4804   1.1184
10   30   0.9     0.8188   0.9725
30   10   0.6     1.3380   0.9931
30   10   0.7     2.1688   1.0134
30   10   0.8     4.1213   1.1670
30   10   0.85    2.2215   1.1193
30   10   0.9     0.8462   1.0011
30   30   0.6     1.6695   0.9909
30   30   0.7     2.4007   0.9998
30   30   0.8     4.2925   1.0712
30   30   0.85    2.2277   1.0370
30   30   0.9     0.8453   0.9551

1. A. Baklizi, W. Abu-Dayyeh, "Shrinkage estimation of P(Y<X) in the exponential case", Comm. Stat. Simul. Comp., 2003, 32, 31-42.

2. E. AL-Hussaini, M. Mousa, K. Sultan, "Parametric and non-parametric estimation of P(Y<X) for finite mixtures of lognormal components", Comm. Stat. Theor. and Meth., 1997, 26, 1269-1289.

3. A. Awad, M. Azzam, M. Hamdan, "Some inference results on P(Y<X) in the bivariate exponential model", Comm. Stat. Theor. and Meth., 1981, 10, 1215-1225.

4. P. Enis, S. Geisser, "Estimation of the probability that Y<X", J. Amer. Stat. Assoc., 1971, 66, 162-168.

5. D. V. Lindley, "Introduction to Probability and Statistics from a Bayesian Viewpoint", Vol. 1, Cambridge University Press.

6. J. Thompson, "Some shrinkage techniques for estimating the mean", J. Amer. Stat. Assoc., 1968, 63, 113-122.

7. S. Tse, G. Tso, "Shrinkage estimation of reliability for exponentially distributed lifetimes", Comm. Stat. Theor. and Meth., 1996, 25, 415-430.
