**3.1 Bayesian estimation of scale parameter for Weibull distribution**

Suppose we have a sample that follows the Weibull distribution shown in Eq. (1), and that the appropriate prior distribution for the scale parameter θ is the inverse gamma distribution given by the following formula [5]:

*Bayesian Inference - Recent Advantages*

$$f(\theta \mid a, b) = \frac{b^{a}}{\Gamma(a)}\, \theta^{-a-1}\, e^{-\frac{b}{\theta}} \tag{5}$$
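As a quick numerical sanity check (a sketch only; the hyperparameter values `a = 3`, `b = 2` are arbitrary), the density in Eq. (5) coincides with SciPy's `invgamma` distribution with shape `a` and scale `b`:

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import invgamma

def inv_gamma_pdf(theta, a, b):
    """Density of Eq. (5): b^a / Gamma(a) * theta^(-a-1) * exp(-b/theta)."""
    return b**a / gamma(a) * theta**(-a - 1) * np.exp(-b / theta)

a, b = 3.0, 2.0                      # arbitrary hyperparameters for the check
theta = np.linspace(0.1, 5.0, 50)    # grid of scale-parameter values
# SciPy's invgamma with shape a and scale b implements the same density.
match = np.allclose(inv_gamma_pdf(theta, a, b), invgamma.pdf(theta, a, scale=b))
```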

By using Bayes' rule, we obtain the posterior distribution, as shown below [6]:

$$f(\theta \mid t) = \frac{f(t_1, t_2, \dots, t_n \mid \theta, \beta)\, f(\theta \mid a, b)}{\int f(t_1, t_2, \dots, t_n \mid \theta, \beta)\, f(\theta \mid a, b)\, d\theta} \tag{6}$$

$$= \frac{\left(\sum_{i=1}^{n} t_i^{\beta} + b\right)^{(a+n)}}{\Gamma(a+n)}\, \theta^{-(a+n)-1}\, e^{-\frac{\left(\sum_{i=1}^{n} t_i^{\beta} + b\right)}{\theta}}$$

Since:

$$\sum_{i=1}^{n} t_i^{\beta} = T(t)$$

$$\theta \mid t \;\sim\; \text{Inverse Gamma}\left(a + n,\; T(t) + b\right)$$
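This conjugacy claim can be verified numerically. The sketch below assumes the Weibull parameterization of Eq. (1) has density $f(t \mid \theta, \beta) = \frac{\beta}{\theta}\, t^{\beta-1}\, e^{-t^{\beta}/\theta}$ with known shape β (Eq. (1) is not reproduced in this section, so this is an assumption); the data and hyperparameter values are arbitrary:

```python
import numpy as np
from scipy.stats import invgamma, weibull_min

rng = np.random.default_rng(0)
a, b, beta = 3.0, 2.0, 1.5                      # arbitrary hyperparameters; beta treated as known
t = weibull_min.rvs(beta, size=20, random_state=rng)
n, T = len(t), np.sum(t**beta)                  # T(t) = sum_i t_i^beta

def weibull_pdf(x, theta, beta):
    """Assumed Weibull density of Eq. (1) with scale theta and known shape beta."""
    return (beta / theta) * x**(beta - 1) * np.exp(-x**beta / theta)

grid = np.linspace(0.3, 3.0, 100)
# Unnormalized posterior: likelihood times the inverse gamma prior of Eq. (5).
unnorm = np.array([np.prod(weibull_pdf(t, th, beta)) for th in grid]) \
         * invgamma.pdf(grid, a, scale=b)
# Claimed posterior: Inverse Gamma(a + n, T(t) + b).
post = invgamma.pdf(grid, a + n, scale=T + b)
# If the claim holds, unnorm/post is a constant (the normalizing constant) over the grid.
ratio = unnorm / post
```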

Eq. (6) represents the posterior distribution of the parameter θ, and under the squared error loss function the Bayesian estimator of θ is the mean of the posterior distribution, obtained in the following steps [7]:

$$E(\theta \mid t) = \hat{\theta}$$

$$\hat{\theta} = \int_0^\infty \theta\, f(\theta \mid t)\, d\theta$$

$$= \int_0^\infty \theta\, \frac{(T(t) + b)^{(a+n)}}{\Gamma(a+n)}\, \theta^{-(a+n)-1}\, e^{-\frac{(T(t)+b)}{\theta}}\, d\theta$$

$$= \int_0^\infty \frac{(T(t) + b)^{(a+n)}}{\Gamma(a+n)}\, \theta^{-(a+n)}\, e^{-\frac{(T(t)+b)}{\theta}}\, d\theta$$

By using the transformation:

$$\text{Let } y = \frac{T(t) + b}{\theta} \;\Rightarrow\; \theta = \frac{T(t) + b}{y}, \qquad |J| = \frac{T(t) + b}{y^2}$$

$$\hat{\theta} = \int_0^\infty \frac{(T(t) + b)^{(a+n)}}{\Gamma(a+n)} \left(\frac{T(t) + b}{y}\right)^{-(a+n)} e^{-y}\, \frac{(T(t) + b)}{y^2}\, dy$$

$$\hat{\theta} = \frac{(T(t) + b)^{(a+n-a-n+1)}}{\Gamma(a+n)} \int_0^\infty \left(\frac{1}{y}\right)^{-(a+n)+2} e^{-y}\, dy$$

*Robust Bayesian Estimation DOI: http://dx.doi.org/10.5772/intechopen.104090*

$$\hat{\theta} = \frac{(T(t) + b)}{\Gamma(a+n)} \int_0^\infty y^{a+n-2}\, e^{-y}\, dy$$

$$\hat{\theta} = \frac{(T(t) + b)}{\Gamma(a+n)}\, \Gamma(a+n-1)$$

and, since $\Gamma(a+n) = (a+n-1)\,\Gamma(a+n-1)$,

$$\hat{\theta} = \frac{T(t) + b}{a+n-1} \tag{7}$$
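Eq. (7) can be checked against a direct numerical evaluation of the posterior mean. The sketch below uses arbitrary simulated data and hyperparameters, and relies only on the posterior being Inverse Gamma(a + n, T(t) + b):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import invgamma, weibull_min

a, b, beta = 3.0, 2.0, 1.5                      # arbitrary hyperparameters; beta treated as known
t = weibull_min.rvs(beta, size=20, random_state=np.random.default_rng(1))
n, T = len(t), np.sum(t**beta)                  # T(t) = sum_i t_i^beta

theta_hat = (T + b) / (a + n - 1)               # the closed form of Eq. (7)

# Posterior mean of Inverse Gamma(a + n, T(t) + b), two ways:
mean_closed = invgamma.mean(a + n, scale=T + b)                                   # closed form
mean_quad, _ = quad(lambda th: th * invgamma.pdf(th, a + n, scale=T + b), 0, np.inf)  # quadrature
```

Both evaluations of the posterior mean agree with the estimator of Eq. (7), confirming the change-of-variables steps above.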
