
iv. The loss data may be fitted by a wide class of severity distributions. We used SAS PROC SEVERITY to identify the five best-fitting distributions.

v. Calculate the ratios *R*(7), *R*(7, 20), *R*(20, 100) and *R*(100) for the best-fitting distributions obtained above and then select the best distribution based on these ratios. Although this is a subjective selection, it will lead to more realistic choices.

vi. For the best-fitting distribution, present the ratios that deviate significantly from one to the experts for possible re-assessment. If new assessments are provided, repeat guidelines iii to v once or twice.

vii. Different data sources should be considered. The approaches discussed above assume one unified dataset for the historical data source. In practice, different datasets are included, for example internal, external and mixed, where the latter is scaled. Estimates of *q*<sup>1</sup> and *q*<sup>7</sup> based on these different datasets should inform the scenario process.

viii. Guideline vi may also be repeated on appropriate mixed (scaled) datasets to select the best distribution type.
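As a simplified sketch of guidelines iv and v, the snippet below fits a handful of candidate severity distributions with `scipy.stats` in place of SAS PROC SEVERITY and ranks them by AIC. The synthetic data, the candidate list, the expert quantile assessments and the single-point ratio *R*(*a*) = fitted quantile / expert quantile are all illustrative assumptions, not the chapter's exact specification; the interval ratios *R*(7, 20) and *R*(20, 100) are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic severities standing in for the historical loss data.
losses = stats.lognorm.rvs(s=1.2, scale=5e4, size=2000, random_state=rng)

# Candidate severity families (PROC SEVERITY fits a similar predefined list).
candidates = {
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "pareto": stats.pareto,
    "burr": stats.burr12,
}

fits = {}
for name, dist in candidates.items():
    params = dist.fit(losses, floc=0)  # losses are bounded below at zero
    loglik = np.sum(dist.logpdf(losses, *params))
    aic = 2 * len(params) - 2 * loglik
    fits[name] = (aic, params)

# Rank the five candidates by AIC (smaller is better).
ranked = sorted(fits.items(), key=lambda kv: kv[1][0])
best_name, (best_aic, best_params) = ranked[0]

# Hypothetical expert assessments of the 1-in-7, 1-in-20 and 1-in-100
# year severities (illustrative numbers only).
expert_q = {7: 2.0e5, 20: 5.0e5, 100: 1.5e6}

# Illustrative single-point ratio: fitted quantile over expert quantile;
# values far from one flag disagreement between data and scenarios.
best_dist = candidates[best_name]
ratios = {a: best_dist.ppf(1 - 1 / a, *best_params) / q for a, q in expert_q.items()}
print(best_name, {a: round(r, 2) for a, r in ratios.items()})
```

In practice the ranking would be supplemented by goodness-of-fit statistics, and the ratio diagnostics of guideline vi would be fed back to the experts.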

**6. Some further practical considerations**

*Data Scaling.* It is common practice in operational risk management to use different data sources for modelling future losses. Banks have been collecting their own data, but realistically, most banks only have between five and ten years of reliable loss data. To address this shortcoming, loss data from external sources can be used in addition to a bank's own internal loss data and controls. External loss data comprise operational risk losses experienced by third parties, including publicly available data, insurance data and consortium data. [16] investigate whether the size of operational risk losses is correlated with geographical region and firm size. They use a quantile-matching algorithm to address statistical issues that arise when estimating loss scaling models subject to a loss reporting threshold. [13] uses regression analysis based on the GAMLSS (generalised additive models for location, scale and shape) framework to model the scaling properties, with extreme value theory applied to the severity of operational losses to account for the reporting bias of the external data losses.
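A toy version of quantile-based scaling can illustrate the idea. This is not the quantile-matching algorithm of [16]; the synthetic data and the simple multiplicative model are assumptions for the sketch only.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic internal losses, and external losses from larger institutions
# whose severities run roughly twice the internal scale.
internal = rng.lognormal(mean=10.0, sigma=1.0, size=500)
external = rng.lognormal(mean=10.7, sigma=1.0, size=2000)

# Compare a grid of empirical quantiles and choose a multiplicative
# factor c so that the quantiles of c * external line up with the
# internal quantiles (least squares on the log scale).
probs = np.linspace(0.05, 0.95, 19)
q_int = np.quantile(internal, probs)
q_ext = np.quantile(external, probs)

# The least-squares solution for log c is the mean log-quantile gap.
c = np.exp(np.mean(np.log(q_int) - np.log(q_ext)))
print(f"estimated scaling factor: {c:.3f}")
```

Working on the log scale keeps the fit from being dominated by the largest quantiles; real scaling models condition on covariates such as firm size and region and must also handle reporting thresholds.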

*No historical data available*. In the event of having insufficient historical data available, the GPD approach as discussed above may be used. *T*<sub>e</sub>(*x*) in (2) can be estimated by a right-truncated distribution, e.g. a scaled beta or Pareto type II, fitted to an expected loss scenario and *q*<sup>7</sup>. In this case the expert should also provide a scenario for the expected loss *EL* = *E*(*T* | *X* ≤ *q*<sup>7</sup>). *T*<sub>u</sub>(*x*) can be estimated by a GPD distribution as discussed in the GPD approach.
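The body calibration described above can be sketched as follows, assuming a Pareto type II (Lomax) body with a fixed shape parameter and hypothetical scenario numbers; the scale is solved so that the mean truncated at *q*<sup>7</sup> reproduces the expert's expected-loss scenario.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical expert scenario: 1-in-7-year loss q7 and the expected
# loss below it, EL = E(T | X <= q7) (illustrative numbers only).
q7 = 250_000.0
EL = 40_000.0

# Body distribution T_e on [0, q7]: a Pareto type II (Lomax) right-
# truncated at q7. With the shape fixed, solve for the scale so that
# the truncated mean matches the expert's expected loss.
alpha = 2.5  # assumed shape; in practice sensitivity-tested

def truncated_mean(scale):
    integrand = lambda x: x * stats.lomax.pdf(x, alpha, scale=scale)
    num, _ = quad(integrand, 0.0, q7)
    return num / stats.lomax.cdf(q7, alpha, scale=scale)

# The truncated mean increases with the scale, so bracket and root-find.
scale = brentq(lambda s: truncated_mean(s) - EL, 1e2, 1e7)
print(f"calibrated Lomax scale: {scale:,.0f}")
```

The tail piece *T*<sub>u</sub>(*x*) above *q*<sup>7</sup> would then be fitted as a GPD using the *q*<sup>20</sup> and *q*<sup>100</sup> assessments, as in the GPD approach.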

*Aggregation*. To capture dependencies of potential operational risk losses across business lines or event types, the notion of copulas may be used (see [15]). Such dependencies may result from business cycles, bank-specific factors, or cross-dependence of large events. Banks employing more granular modelling approaches may incorporate a dependence structure, using copulas to aggregate operational risk losses across business lines and/or event types for which separate operational risk models are used.
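A minimal Monte Carlo sketch of copula-based aggregation, assuming a Gaussian copula and illustrative lognormal marginals for two business lines (the marginals, correlation and quantile level are assumptions, not values from the chapter):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 100_000

# Illustrative marginal annual-loss distributions for two business lines
# (in practice these come from the separate operational risk models).
marginals = [stats.lognorm(s=1.0, scale=1e6), stats.lognorm(s=1.4, scale=5e5)]

# Gaussian copula with moderate positive dependence between the lines.
rho = 0.4
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_sims)
u = stats.norm.cdf(z)  # copula sample on the unit square
line_losses = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])
total = line_losses.sum(axis=1)

# Aggregate 99.9% quantile vs. the simple sum of stand-alone quantiles.
var_999 = np.quantile(total, 0.999)
simple_sum = sum(m.ppf(0.999) for m in marginals)
print(f"aggregate 99.9% VaR: {var_999:,.0f}")
print(f"sum of stand-alone 99.9% VaRs: {simple_sum:,.0f}")
```

With imperfect dependence the aggregate quantile typically falls below the simple sum of stand-alone quantiles, which is what makes the choice of copula and its parameters material for the capital figure.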


*Linear and Non-Linear Financial Econometrics - Theory and Practice*




**7. Conclusion**

In this chapter, we motivated the use of Venter's approach, whereby the severity distribution may be estimated using historical data and experts' scenario assessments jointly. The way in which historical data and scenario assessments are integrated incorporates measures of agreement between these data sources, which can be used to evaluate the quality of both. This method has been implemented by major international banks, and we included guidelines for its practical implementation. As far as future research is concerned, we are investigating the effectiveness of using the ratios in assisting the experts with their assessments. We are also testing the effect of replacing *q*<sup>100</sup> with *q*<sup>50</sup> in the assessment process.
