Empirical Bayes Estimation for the Exponential Model Using a Non-parametric Polynomial Density Estimator

In this study, we consider empirical Bayes estimation of the parameter of the exponential distribution. In the empirical Bayes procedure, we employ a non-parametric polynomial density estimator to estimate the unknown marginal probability density function, instead of estimating the unknown prior probability density function of the parameter. Empirical Bayes estimators are derived for the parameter of the exponential distribution under the squared error and LINEX loss functions. Numerical examples are used to compare the two empirical Bayes estimators; the results show that the mean square error of the empirical Bayes estimator under the LINEX loss function is usually smaller than that of the estimator under the squared error loss function, so the former is preferable.


INTRODUCTION
The exponential distribution is one of the most important distributions in life-testing and reliability studies. Inference procedures for the exponential model have been discussed by many authors, among them Cohen and Helm (1973), Sinha and Kim (1985), Shalaby (1992), Balasubramanian and Balakrishnan (1992), Balakrishnan et al. (2005), Jaheen (2004), Ng et al. (2009) and Schenk et al. (2011), and the references therein.
Suppose that X is a random variable drawn from the exponential distribution (denoted by Exp(θ)), with probability density function (pdf):

f(x; θ) = θ e^{−θx}, x > 0, θ > 0, (1)

where θ is the failure rate parameter.
In this study, we employ the empirical Bayes method to estimate the parameter of the exponential distribution. The empirical Bayes method, first introduced by Robbins (1964), has been widely explored and applied. He constructed empirical Bayes estimators without any parametric assumption on the prior density function; that is, in the empirical Bayes procedure the prior distribution of the unknown parameter is assumed to be unknown. The empirical Bayes procedure has many applications: it is widely used in biological, ecological and medical problems, in reliability theory, in insurance statistics, etc. Various papers use the empirical Bayes method in estimation problems, for example Chen and Liu (2008) and Ren (2012). A number of papers are concerned with empirical Bayes estimation for specific families of conditional distributions (Judge et al., 1990; Grabski and Sarhan, 1996; Sarhan, 1999).

BAYES ESTIMATION
Generally, the posterior pdf of θ given X = x is related to the prior pdf g(θ) of the unknown parameter θ and the conditional pdf f(x; θ) of the observed data X given θ by:

g(θ|x) = f(x; θ) g(θ) / ∫ f(x; θ) g(θ) dθ.

Let Z = Z(X) be a sufficient statistic for θ. Then the posterior pdf of θ depends on X only through the value z of the sufficient statistic Z; that is, g(θ|x) = g(θ|z).
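As a quick numerical check of this sufficiency property, the following sketch (not from the paper; the Exp(1) prior and the grid are illustrative choices) shows that for an exponential likelihood, two samples with the same sum z yield identical posteriors on a grid:

```python
import math

def posterior_grid(x, prior, thetas):
    """Unnormalized posterior prod_i theta*exp(-theta*x_i) * prior(theta),
    normalized over the grid. Depends on x only through c = len(x), z = sum(x)."""
    c, z = len(x), sum(x)
    w = [th ** c * math.exp(-th * z) * prior(th) for th in thetas]
    s = sum(w)
    return [v / s for v in w]

thetas = [0.01 * i for i in range(1, 501)]
prior = lambda th: math.exp(-th)  # illustrative Exp(1) prior
p1 = posterior_grid([1.0, 2.0, 3.0], prior, thetas)
p2 = posterior_grid([2.0, 2.0, 2.0], prior, thetas)  # same sum z = 6
print(max(abs(a - b) for a, b in zip(p1, p2)) < 1e-12)  # True: same posterior
```

Both samples have z = 6 and c = 3, so the likelihood factors identically and the posteriors coincide, as g(θ|x) = g(θ|z) requires.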
In Bayesian estimation, the loss function plays an important role, and the squared error loss, as the most common symmetric loss function, is widely used due to its nice analytical properties. The Squared Error Loss Function (SELF) is L(θ̂, θ) = (θ̂ − θ)^2, a symmetric loss function that assigns equal losses to overestimation and underestimation. However, in many practical problems overestimation and underestimation have different consequences. For example, in the estimation of reliability and failure rate functions, an overestimate is usually much more serious than an underestimate; in such cases the use of a symmetric loss function may be inappropriate, as recognized by Basu and Ebrahimi (1991). This suggests that an asymmetric loss function may be more appropriate. Varian (1975) and Zellner (1986) proposed an asymmetric loss function known as the LINEX loss function, which has drawn great attention from many researchers, such as Al-Aboud (2009), Pandey and Rao (2009) and Calabria and Pulcini (1994). The LINEX loss is expressed as:

L(Δ) = e^{aΔ} − aΔ − 1,

where Δ = θ̂ − θ, θ̂ is an estimator of θ and a is a constant.
The sign and magnitude of the shape parameter a represent the direction and degree of asymmetry, respectively (if a > 0, overestimation is more serious than underestimation, and vice versa). For a close to zero, the LINEX loss is approximately the squared error loss and therefore almost symmetric.
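The asymmetry is easy to verify numerically. In this minimal sketch (the values a = 2 and Δ = ±0.5 are illustrative), overestimation is penalized more heavily than underestimation of the same magnitude when a > 0:

```python
import math

def linex_loss(theta_hat, theta, a):
    """LINEX loss L(delta) = exp(a*delta) - a*delta - 1, delta = theta_hat - theta."""
    d = theta_hat - theta
    return math.exp(a * d) - a * d - 1.0

# With a > 0, overestimation (delta > 0) costs more than
# underestimation (delta < 0) of the same magnitude.
over = linex_loss(1.5, 1.0, a=2.0)   # delta = +0.5 -> e^1 - 2 ≈ 0.718
under = linex_loss(0.5, 1.0, a=2.0)  # delta = -0.5 -> e^{-1} ≈ 0.368
print(over > under)  # True
```

Note also that L(0) = 0, and for |aΔ| small, L(Δ) ≈ (a^2/2)Δ^2, recovering the (scaled) squared error behavior near zero.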
The posterior expectation of the LINEX loss function is:

E_θ[L(θ̂ − θ)] = e^{aθ̂} E_θ(e^{−aθ}) − a(θ̂ − E_θ(θ)) − 1,

where E_θ(·) denotes the posterior expectation with respect to the posterior density of θ. The Bayes estimator of θ under the LINEX loss function, denoted by θ̂_BL, is the value of θ̂ which minimizes this posterior expectation; it is:

θ̂_BL = −(1/a) ln[E_θ(e^{−aθ})],

provided that the expectation E_θ(e^{−aθ}) exists and is finite.
If the squared error loss function is used, then the Bayes estimator of θ is defined as the posterior mean of θ given z = Z(X), that is:

θ̂_BS = E_θ(θ | z).

Under the LINEX loss function, the Bayes estimator of θ is given by:

θ̂_BL = −(1/a) ln[E_θ(e^{−aθ} | z)].
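As a concrete illustration (not part of the paper, which leaves the prior unspecified): if one assumes a conjugate Gamma(a0, b0) prior on θ, the posterior given c exponential observations with sum z is Gamma(a0 + c, b0 + z), and both Bayes estimators have closed forms:

```python
import math

# Illustrative conjugate-prior sketch. Assumptions: Gamma(shape=a0, rate=b0)
# prior on theta; z is the sum of c exponential observations, so the
# posterior is Gamma(a0 + c, b0 + z).

def bayes_sel(a0, b0, c, z):
    """Posterior mean: Bayes estimator under squared error loss."""
    return (a0 + c) / (b0 + z)

def bayes_linex(a0, b0, c, z, a):
    """-(1/a) * ln E(exp(-a*theta) | z), using the Gamma posterior's
    moment generating function E(e^{-a*theta}|z) = (1 + a/(b0+z))^{-(a0+c)}."""
    return (a0 + c) / a * math.log(1.0 + a / (b0 + z))

theta_sel = bayes_sel(2.0, 1.0, 10, 8.0)          # = 12/9 ≈ 1.333
theta_linex = bayes_linex(2.0, 1.0, 10, 8.0, a=1.0)  # = 12*ln(10/9) ≈ 1.264
# For a > 0 the LINEX estimator is pulled below the posterior mean,
# guarding against overestimation; as a -> 0 it converges to the posterior mean.
print(theta_sel, theta_linex)
```

This matches the asymmetry discussion above: with a > 0 the LINEX estimator deliberately shades downward relative to the SELF estimator.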

EMPIRICAL BAYES ESTIMATION
In what follows we give a brief description of the technique which will be adopted to construct the procedure. It is assumed that there are (n + 1) repeated, independent experiments. In each experiment, we observe a random variable X that has a pdf f(x; θ), indexed by an unknown parameter θ. The parameter θ is assumed to be an unobservable random variable with the same (but unknown) prior pdf in each experiment. In the first experiment, a random sample of size k of independent failure times X_1 = (X_11, X_12, …, X_1k) is observed from f(x; θ) with unknown parameter θ, and a value of the sufficient statistic Z(X_1), denoted by z_1, is calculated. In the second experiment, a random sample of the same size k of independent failure times X_2 = (X_21, X_22, …, X_2k) is observed from f(x; θ) with unknown parameter θ, and a value of the sufficient statistic Z(X_2), denoted by z_2, is calculated. The value of the unknown parameter θ in the second experiment may differ from its value in the first experiment. This procedure is repeated until the nth experiment. In the (n + 1)th experiment, a random sample of size c, which may differ from k, of independent failure times X = (X_1, X_2, …, X_c) is observed from f(x; θ) with the same unknown parameter θ, and a value of the sufficient statistic Z(X), denoted by z, is calculated. For convenience, we call the values 'n, k: z_1, z_2, …, z_n' obtained from the first n experiments the past information, denoted Z_p, while the information X obtained from the (n + 1)th experiment is called the current information, or current sample.
In the procedure discussed here, we shall not estimate g(θ). Instead, we estimate the marginal pdf f_g(z) of the sufficient statistic Z. We use a non-parametric polynomial density method to estimate this function based on the sample of sufficient statistics z_1, z_2, …, z_n obtained from the past experiments.
Definition (Martz and Lwin, 1989): Let X = (X_1, X_2, …, X_c) be a random sample obtained from the pdf f(x; θ). A statistic Z(X) is called a sufficient statistic for θ if the joint pdf of X given θ can be factored as:

f(x; θ) = f_z(z|θ) H(x),

where f_z(z|θ) is the conditional pdf of Z given θ and H(x) is a real-valued function of x.

Theorem: In the following discussion, suppose that X = (X_1, X_2, …, X_c) is a random sample of size c drawn from the exponential distribution (1), and let z be a value of the statistic Z(X) = Σ_{i=1}^{c} X_i. Then f(x; θ) = f_z(z|θ) H(x), where f_z(z|θ) is the conditional pdf of Z given θ and H(x) is a real-valued function of x. Under the squared error loss function, the Bayes estimator of θ is given by:

θ̂_BS = (c − 1)/z − f′_g(z)/f_g(z).

Under the LINEX loss function, the Bayes estimator of θ is given by:

θ̂_BL = ((c − 1)/a) ln((z + a)/z) − (1/a) ln(f_g(z + a)/f_g(z)).

Proof: Given the current sample X_1, X_2, …, X_c from the exponential distribution with pdf (1), the joint pdf of X = (X_1, X_2, …, X_c) is:

f(x; θ) = θ^c exp(−θ Σ_{i=1}^{c} x_i).

Setting z = Σ_{i=1}^{c} x_i, the function f(x; θ) can be written in the following form:

f(x; θ) = f_z(z|θ) H(x), with f_z(z|θ) = θ^c z^{c−1} e^{−θz}/Γ(c) and H(x) = Γ(c)/z^{c−1}. (11)

Thus Z(X) = Σ_{i=1}^{c} X_i is a sufficient statistic and, given θ, follows a gamma distribution with shape c and rate θ. Using (11), the marginal pdf of Z becomes:

f_g(z) = ∫_0^∞ f_z(z|θ) g(θ) dθ = (z^{c−1}/Γ(c)) ∫_0^∞ θ^c e^{−θz} g(θ) dθ. (12)

The first derivative of Eq. (12) with respect to z gives:

f′_g(z) = ((c − 1)/z) f_g(z) − ∫_0^∞ θ f_z(z|θ) g(θ) dθ.

Then we have:

∫_0^∞ θ f_z(z|θ) g(θ) dθ = ((c − 1)/z) f_g(z) − f′_g(z).

Then, under the squared error loss function, the Bayes estimator of θ is given as:

θ̂_BS = E_θ(θ | z) = (1/f_g(z)) ∫_0^∞ θ f_z(z|θ) g(θ) dθ = (c − 1)/z − f′_g(z)/f_g(z).

Similarly, since

E_θ(e^{−aθ} | z) = (1/f_g(z)) ∫_0^∞ e^{−aθ} f_z(z|θ) g(θ) dθ = (z/(z + a))^{c−1} f_g(z + a)/f_g(z),

the LINEX estimator follows from θ̂_BL = −(1/a) ln E_θ(e^{−aθ} | z).

Remark: When the prior density function g(θ) is unknown, the past information Z_p can be used to estimate the marginal pdf f_g(z) of the sufficient statistic Z; the estimated pdf will be denoted by f̂_g(z). The empirical Bayes estimator of θ is obtained by substituting the estimated function f̂_g(z) and its derivative, evaluated at the value of the sufficient statistic Z obtained from the current experiment, into the previous formulas. That is, the empirical Bayes estimators of θ under the squared error and LINEX loss functions are given, respectively, by:

θ̂_EBS = (c − 1)/z − f̂′_g(z)/f̂_g(z),

θ̂_EBL = ((c − 1)/a) ln((z + a)/z) − (1/a) ln(f̂_g(z + a)/f̂_g(z)).

Without loss of generality, we present in what follows the non-parametric polynomial density estimator for the marginal pdf f_g(z) when the observable random variable X has an exponential distribution with unknown failure rate θ. The non-parametric polynomial density estimator of order m (m ≥ 0) for f_g(z) is given in Sarhan (2003) as Eq. (18), where x_0 is a specified value of the observable random variable X and the coefficients α_i (i = 0, 1, …, m) have the form of Eq. (19).

The simulation study is carried out according to the following scheme:

• For given values of θ = θ_0 and c, a random sample of size c is generated from the exponential distribution (1), a value x_0 of the generated sample is selected and the value of the corresponding sufficient statistic z is calculated.
• The values of (n, m, k) are selected.
• n experiments are simulated. In each of them, a random sample of size k from the exponential distribution with θ = θ_0 is generated.
• The values of the sufficient statistics z_1, z_2, …, z_n are computed using the generated data obtained in Step (3). The MLEs r_1, r_2, …, r_n at the specified time x_0 are calculated according to r_j = exp(−k x_0/z_j).
• The degree m of the m-NPDE f̂_g(z) is specified.
• The coefficients α_i (i = 0, 1, …, m) are computed using r_1, r_2, …, r_n according to Eq. (19).
• The m-NPDE f̂_g(z) is formed according to Eq. (18), and its derivative f̂′_g(z) is obtained from (18).
• Steps (1)–(7) are repeated N = 2000 times. The risks of the estimates under the squared error loss are computed using:

risk(θ̂) = (1/N) Σ_{i=1}^{N} (θ̂^(i) − θ_0)^2.

NUMERICAL EXAMPLE AND CONCLUSION
To illustrate the previous results, a Monte Carlo simulation study is presented next. The criterion of comparison is the risk (mean square error) of the estimators.
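The scheme above can be sketched end to end. Since the polynomial density estimator of Eqs. (18)–(19) is not reproduced in this section, the sketch below substitutes a plain Gaussian kernel density estimator for f̂_g (a stand-in, not the paper's method), and all parameter values (θ_0 = 1, c = k = 10, n = 50, a = 1, N = 500) are illustrative:

```python
import math
import random

# Monte Carlo sketch of the simulation scheme. Stand-in assumption: a Gaussian
# KDE (with rule-of-thumb bandwidth) replaces the paper's m-NPDE for f_g-hat.

SQRT2PI = math.sqrt(2.0 * math.pi)

def kde(z, zs, h):
    """Gaussian kernel estimate of the marginal pdf f_g at z."""
    return sum(math.exp(-0.5 * ((z - zj) / h) ** 2) for zj in zs) / (len(zs) * h * SQRT2PI)

def kde_deriv(z, zs, h):
    """Derivative of the Gaussian KDE with respect to z."""
    return sum(-((z - zj) / h) * math.exp(-0.5 * ((z - zj) / h) ** 2)
               for zj in zs) / (len(zs) * h ** 2 * SQRT2PI)

def eb_sel(z, c, zs, h):
    """Empirical Bayes estimator under squared error loss:
    (c-1)/z - f_g'(z)/f_g(z) with the estimated marginal."""
    return (c - 1) / z - kde_deriv(z, zs, h) / kde(z, zs, h)

def eb_linex(z, c, zs, h, a):
    """Empirical Bayes estimator under LINEX loss:
    ((c-1)/a)*ln((z+a)/z) - (1/a)*ln(f_g(z+a)/f_g(z))."""
    return ((c - 1) / a) * math.log((z + a) / z) \
        - (1.0 / a) * math.log(kde(z + a, zs, h) / kde(z, zs, h))

random.seed(0)
theta0, c, k, n, a, N = 1.0, 10, 10, 50, 1.0, 500
se_sel = se_linex = 0.0
for _ in range(N):
    # past information: n sufficient statistics, each the sum of k failure times
    zs = [sum(random.expovariate(theta0) for _ in range(k)) for _ in range(n)]
    m_z = sum(zs) / n
    h = 1.06 * (sum((zj - m_z) ** 2 for zj in zs) / (n - 1)) ** 0.5 * n ** (-0.2)
    # current sample: sufficient statistic from c failure times
    z = sum(random.expovariate(theta0) for _ in range(c))
    se_sel += (eb_sel(z, c, zs, h) - theta0) ** 2
    se_linex += (eb_linex(z, c, zs, h, a) - theta0) ** 2

print(se_sel / N, se_linex / N)  # estimated risks (MSE) of the two estimators
```

This reproduces the structure of the comparison, not the paper's exact numbers: with the paper's m-NPDE in place of the KDE, the reported finding is that the LINEX-based estimator usually attains the smaller mean square error.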