*Natural Hazards - Impacts, Adjustments and Resilience*

The exceedances (severity) are, on the other hand, assumed to follow a Gumbel distribution, which is a special case of the generalised extreme value distribution. Hence, the model is usually referred to as Poisson-Gumbel. Tebfu and Fengshi [7] also used CEVD to model hurricane characteristics along the Atlantic coast and the Gulf of Mexico. They assume that the number of exceedances follows a Poisson distribution and the exceedance severity follows a Weibull distribution.

Initially, CEVD was mostly used in hydrology to model wave heights and the resulting extreme events. The model has been used successfully to predict design wave heights; for instance, Hurricane Katrina of 2005 corresponded to a 60-year return period as predicted by the Poisson-Weibull model [17]. As a result, there have been several extensions to this class of models, including the Bivariate Compound Extreme Value Distribution (BCEVD) model [18] and the Multivariate Compound Extreme Value Distribution (MCEVD) model [17]. In addition, the model has been adopted in a wider range of areas, including finance, insurance, disasters and catastrophic modelling.

[19] investigate the global historical occurrences of tsunamis. They compare the distribution of the number of annual tsunami events using a Poisson distribution and a negative binomial distribution. The latter provides a consistent fit to tsunami events whose height is greater than one. They also investigate the interval distribution using gamma and exponential distributions. The former is found to be the better fit, suggesting that the number of tsunami events is not a Poisson process. [20] study tsunami events in the USA. They assume that the occurrence frequency of tsunamis in each year follows a Poisson distribution. They then identify the distribution of tsunami heights by fitting six distributions: Gumbel, log-normal, Weibull, maximum entropy and GPD. They select the GPD, which has the best fit. They use MLE for parameter estimation and investigate the fit of the Poisson Compound Extreme Value Distribution using goodness-of-fit statistics. The result is consistent with [19]: the Poisson-Generalised Pareto Distribution is appropriate for disaster modelling.

**2. Classical extreme value theory**

The core of classical extreme value theory is the study of the stochastic behaviour of the maximum (or minimum) of a sequence of random variables. Define

$$M\_n = \max\left\{Y\_1, \dots, Y\_n\right\} \tag{1}$$

where {*Y*1, … , *Yn*} is a sequence of independent random variables with a common distribution function *F*. *Mn* represents the maximum (minimum) of the observed process over *n* blocks or time units. If *F* is known, the distribution of *Mn* can be derived as follows:

$$P\{M\_n \le y\} = P\{Y\_1 \le y, \dots, Y\_n \le y\}$$

$$= P\{Y\_1 \le y\} \times \dots \times P\{Y\_n \le y\}$$

$$= \{F(y)\}^n \tag{2}$$
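This product formula can be checked numerically by comparing the exact value $\{F(y)\}^n$ with a Monte Carlo estimate. A minimal sketch, where the uniform distribution, the block size *n* and the threshold *y* are arbitrary illustrative choices:

```python
import random

# For Y_i ~ Uniform(0, 1), F(y) = y, so P{M_n <= y} = {F(y)}^n = y^n.
n, y = 10, 0.9
exact = y ** n

# Monte Carlo check: simulate many blocks of n observations and record
# how often the block maximum falls at or below y.
random.seed(42)
trials = 100_000
hits = sum(max(random.random() for _ in range(n)) <= y for _ in range(trials))
estimate = hits / trials

print(f"exact P(M_n <= y) = {exact:.4f}, simulated = {estimate:.4f}")
```

With 100,000 trials the simulated proportion agrees with the exact value to roughly two decimal places.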

However, *F* is usually unknown in practice and has to be estimated from the data. This poses a problem, since a small error in the estimate of *F* can lead to large discrepancies in $F^n(y)$. An alternative approach is to model $F^n(y)$ through the asymptotic theory of *Mn*, where we study the behaviour of $F^n(y)$ as *n* tends towards infinity. Since $F(y) < 1$ for $y < y_{\sup}$, where $y_{\sup}$ is the upper end-point of *F*, we have $F^n(y) \to 0$ as $n \to \infty$, so the limit is degenerate. We can remove the degeneracy by allowing a linear re-normalisation of *Mn*. Consider the linear re-normalisation:

$$
\hat{M}\_n = \frac{M\_n - d\_n}{c\_n} \tag{3}
$$

where {*cn*} and {*dn*} are sequences of constants with *cn* > 0. Under a suitable choice of *cn* and *dn*, the distribution of $\hat{M}_n$ can be stabilised, which leads to the extremal types theorem:
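For instance, for standard exponential observations $F(y) = 1 - e^{-y}$, the choices *cn* = 1 and *dn* = log *n* are known to stabilise the maximum, with limit $\exp(-e^{-y})$ (the Gumbel distribution below). A small numerical sketch of this convergence, where the evaluation point and sample sizes are arbitrary:

```python
import math

def renormalised_cdf(y: float, n: int) -> float:
    """P{(M_n - d_n)/c_n <= y} for n i.i.d. Exp(1) variables,
    with c_n = 1 and d_n = log(n): equals F(y + log n)^n."""
    return (1.0 - math.exp(-(y + math.log(n)))) ** n

y = 1.0
gumbel_limit = math.exp(-math.exp(-y))  # exp(-e^{-y})

for n in (10, 100, 10_000):
    print(n, renormalised_cdf(y, n))
print("limit", gumbel_limit)
```

The re-normalised probabilities approach the Gumbel limit as *n* grows, whereas the raw probability $F^n(y)$ would shrink to zero.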

**Theorem 1.1** (Extremal Types Theorem). If, for a non-degenerate distribution function *G*, there exist sequences of constants {*cn*} > 0 and {*dn*} such that, as *n* → ∞,

$$P\left(\frac{M\_n - d\_n}{c\_n} \le y\right) \to G(y) \tag{4}$$

then *G* belongs to one of the following families:

Gumbel:

$$G(y) = \exp\left\{-\exp\left[-\left(\frac{y - d}{c}\right)\right]\right\}, \quad -\infty < y < \infty \tag{5}$$

Frechet:


$$G(y) = \begin{cases} 0 & y \le d \\ \exp\left\{-\left(\frac{y - d}{c}\right)^{-\alpha}\right\} & y > d \end{cases} \tag{6}$$

Weibull:

$$G(y) = \begin{cases} \exp\left\{-\left[-\left(\frac{y - d}{c}\right)\right]^{\alpha}\right\} & y < d \\ 1 & y \ge d \end{cases} \tag{7}$$

for *c* > 0 and *d* ∈ ℝ, with *α* > 0 in the Frechet and Weibull families.
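Eqs. (5)-(7) translate directly into code; a minimal sketch of the three CDFs, where the default parameter values are arbitrary illustrative choices:

```python
import math

def gumbel_cdf(y, c=1.0, d=0.0):
    """Type I, Eq. (5): exp{-exp[-(y - d)/c]}, supported on the whole line."""
    return math.exp(-math.exp(-(y - d) / c))

def frechet_cdf(y, c=1.0, d=0.0, alpha=2.0):
    """Type II, Eq. (6): 0 for y <= d, exp{-((y - d)/c)^(-alpha)} for y > d."""
    if y <= d:
        return 0.0
    return math.exp(-((y - d) / c) ** (-alpha))

def weibull_cdf(y, c=1.0, d=0.0, alpha=2.0):
    """Type III, Eq. (7): exp{-[-(y - d)/c]^alpha} for y < d, 1 for y >= d."""
    if y >= d:
        return 1.0
    return math.exp(-((-(y - d) / c) ** alpha))

print(gumbel_cdf(0.0))    # exp(-1)
print(frechet_cdf(1.0))   # exp(-1)
print(weibull_cdf(-1.0))  # exp(-1)
```

Note the difference in support: the Gumbel CDF is positive everywhere, the Frechet CDF is zero at and below the lower end-point *d*, and the Weibull CDF reaches one at its upper end-point *d*.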

The proof of this theorem is presented in [21]. The three classes of distributions are called the extreme value distributions: type I (Gumbel), type II (Frechet) and type III (Weibull). The extremal types theorem says that regardless of the population distribution of the *Yi*, if a non-degenerate limit can be obtained by linear re-normalisation of *Mn*, then the limit distribution will be one of the three extreme value distributions.

In modelling an unknown population distribution, we choose one of the three types of limiting distribution and then estimate the model parameters. This approach is, however, deemed inefficient, as the uncertainty associated with the choice is not accounted for in the subsequent inference [22]. A better approach is to combine the three models into a single family, with the three distributions arising as special cases of the universal distribution.
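The single family referred to here is the generalised extreme value (GEV) distribution, with CDF $\exp\{-[1 + \xi(y - \mu)/\sigma]^{-1/\xi}\}$, where Gumbel arises as the ξ → 0 limit, Frechet as ξ > 0 and Weibull as ξ < 0. A minimal sketch, with illustrative parameter values:

```python
import math

def gev_cdf(y, mu=0.0, sigma=1.0, xi=0.0):
    """Generalised extreme value CDF.

    xi = 0 -> Gumbel (type I), xi > 0 -> Frechet (type II),
    xi < 0 -> Weibull (type III); mu is location, sigma > 0 is scale.
    """
    z = (y - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-z))
    t = 1.0 + xi * z
    if t <= 0.0:
        # Outside the support: the CDF is 0 (xi > 0) or 1 (xi < 0).
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# As xi -> 0, the GEV CDF approaches the Gumbel CDF at every y.
y = 0.5
print(gev_cdf(y, xi=0.0))
print(gev_cdf(y, xi=1e-6))   # nearly identical
print(gev_cdf(y, xi=-1e-6))  # nearly identical
```

Working with the single three-parameter family lets the data determine the shape ξ, so the type-selection uncertainty is carried through to the inference rather than fixed in advance.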
