Funded Pension Schemes in Aging Societies: A Pure Economic Argument?

*Ishay Wolf and Smadar Levi*

### **Abstract**

This study offers a different angle from which to explore central planners' considerations regarding pension systems in a modern Western market subject to aging. In particular, considerable weight is given to the effect of the crisis caused by the pandemic and frequent market turmoil. This study expands the number of players analyzed in the field and takes into consideration the different interests of current and future generations. In addition, we allow differentiation among earning cohorts. Using an overlapping generations model and Monte Carlo simulations, we find that over a wide macroeconomic range, the pension equilibrium surprisingly rests on unfunded pension schemes despite heavy aging pressure. Contrary to the classic economic arguments of the World Bank and the IMF that were widespread during the 1980s and 1990s, the choice of a pension system is much more complex. We find that the central planner must take into account not only the aging rhythm and market yield but also other parameters, such as the current and future utility perspective, the government's debt price, the GDP per capita growth rate, risk aversion, and the possibility of market turmoil.

**Keywords:** pension system, risk sharing, social security, minimum pension guarantee, externalities, funded pension scheme

### **1. Introduction**

The Western world faces continuous aging, low fertility, and debt crises that push governments toward funded-capitalized pension schemes [1–3]. A common trend indicates a decline in public pension benefits [4]. Moreover, systemic reforms have changed the nature of pension provision, shifting more risks onto pension earners. The privatization of pension plans worldwide and the global movement toward more funded plans raise important questions regarding the adequacy and sustainability of pension schemes, their benefits, and their pitfalls [5, 6].

According to Milev, "The sharp downturn in the value of financial assets between 2007 and 2009 and the current financial crisis due to the COVID-19 pandemic serve as sharp examples of how risky assets quickly lose a significant part of their value" ([7], p. 2). The financial crises and continuing concerns about retirement security have generated new interest in the role of the country in providing adequate old-age benefits to its citizens. We are witnessing a great wave of pension withdrawals from funded-capitalized schemes, moving toward more governmental intervention. Indeed, according to Altiparmakov, "most of the countries experiencing similar crises were the first to implement new liberal pension schemes during the 1990s" ([8], p. 4).

Recent research has demonstrated the importance of balancing funded schemes with unfunded components to increase adequacy and sustainability [6, 9, 10]. These studies strengthen the expanding policies and efforts of governments worldwide to restart their economies after the pandemic shocks and, in parallel, to insure older participants against market turmoil [11, 12].

In conjunction with the current fiscal expenditures lies the classic economic argument that countries should shift to funded pension schemes due to low fertility [13]. "The shrinking tax base and negative influence of governments on markets are the flags of the Washington Consensus, the World Bank during the 1990s, and other economic institutes" ([14], p. 2).

This chapter argues that, from a wide perspective, the rush of governments toward funded pension schemes due to low fertility and fiscal constraints may not be optimal. The current complex environment shaped by the pandemic strengthens this argument. We base this on simple equilibria in the pension market under different macroeconomic assumptions. The novelty of this research lies in the wide array of interests taken into consideration. We avoid treating participants as a single player and allow intergenerational and intragenerational risk sharing. The adjacent generations allow us to examine the cyclical tax burden, the influence of fertility on future generations, and the statistical returns in the long term. The split into earning cohorts reveals the different interests, hedging capabilities, and optimal contribution rates of each cohort, considering the tax burden and insurance components in old-age benefits [15].

We suggest balancing funded pension schemes with "unfunded boxes," which may increase the sustainability of the pension system and improve the utility of all players. We find that in some cases, which are common in Western economies, the optimal pension scheme is surprisingly the pay-as-you-go (PAYG) system, even in aging societies.

The next section details the interests of the different players in the pension field as well as the assumptions of the economic model. In Section 3, we set out the stochastic model of the pension system, which maximizes the participants' utility, and analyze how best to finance the guarantee. Section 4 provides the main results of the simulations and the sensitivity analysis. In Section 5, we discuss the results and their implications, and the last section provides the conclusion.

#### **2. The government and the participants' interests**

It is commonly asserted that the government wishes to decrease its fiscal risks and obligations and hence pushes for a shift toward a funded pension scheme. The fiscal exposure of the government is obviously levied on its citizens [1]. Consequently, it should also be in the citizens' interest to shift from the comfortable PAYG DB pension scheme to a funded pension scheme. Indeed, some scholars, mainly during the 1990s, supported the transition to a funded scheme, trying to convince people that the alternative is a heavy tax burden [2, 3].

Disassembling this question across society and its different players turns out to be much more complex and far from unambiguous. Since information is not a free asset but a risk in pension systems, framing the argument in second-best terms starts from the multiple objectives of pension systems: "Policy has to seek the best balance between consumption smoothing, poverty relief, and insurance, and this balance will depend in each society on the weights given to those and other objectives and to the different constraints that societies face" [5].

This chapter focuses on the central planner, who has the responsibility to balance the interests of all players, recognizing a variety of earning cohorts and adjacent generations. That variety of actors, across its length and breadth, may represent the government's overall perspective. With that, we follow Altiparmakov [8] and Wolf and Ocerin [9], who suggest that stable pension systems must seek an equilibrium between earning cohorts; otherwise, the chances of pension reforms and reversals are high [16].

We expand previous overlapping generations (OLG) models [17] by including debt. The consideration of cyclical government debt obliges the central planner to ensure that future generations are not used as a heavy tax source. In the current research, we take future generations' utility as part of society's total preferences simply by discounting it. One may claim that the weight of future generations in the preference equations does not necessarily derive from the participant's discount factor and could be greater. We agree with that argument and note that, in such a case, the equilibrium should still be calculated specifically for every market separately.

The second dimension is the differentiation between high- and low-earning cohorts. Wolf and Del Rio [10, 11, 18] have shown that shifting to the funded pension scheme creates a socio-economic anomaly because of the high exposure of low-earning cohorts to market and credit risk without the ability to hedge it. They also claim that the optimal contribution rates are generally close to high earners' preferences (see also [9]). In that case, the funded pension market should include "externalities," where high earners compensate low earners through risk sharing. That may include, for example, a minimum pension guarantee or intergenerational/intragenerational risk sharing of social security benefits. These processes clearly justify differentiating the considerations and interests of earning cohorts.

#### **3. Model setup**

We employ a simple OLG model to characterize optimal pension pillars' sizes. In each period, a new generation of unit mass is born. We employ this model for four generations. For simplicity, each generation lives through three equal periods, as in Knell [19]: "Individuals work during the first two parts of their life, while they are retired in the third part. The first pillar is unfunded social security, and the second is in the form of individual accounts" ([19], p. 6).

#### **3.1 Consumers**

The consumer works over two periods of 23 years each and retires at the age of 67 ($s = T_R$). They live for another 23 years, represented by the third period, and are assumed to die at the age of 90 ($s = T_D$).

During the first 46 years, consumers work and earn a real labor income of $W_t$.<sup>1</sup> We allow for differentiation in wage levels across earning deciles. From this wage, the individual contributes to the social security tax and to the funded pension fund. The participant consumes the residual after contributions.

During the retirement period ($T_R \le s < T_D$), the individual's consumption, $C_{t,T_R}$, is given by the benefits from both the public and the funded pension pillars. These benefits are collectively denoted by $P_t$. The consumption of generation *t* in time *s* can be described as follows:


$$c_{t,s} = \begin{cases} W_{t,s}(1-\tau), & \text{during the working period} \\ p_{T_R}^U + p_{T_R}^F, & \text{during retirement} \end{cases} \tag{1}$$

Individuals have a constant relative risk aversion (CRRA<sup>2</sup> ) utility function defined over a single nondurable consumption good. Let us define *δ* as the discount factor; *α* measures the curvature of the utility function or risk aversion level, so the individual's preferences are then defined by

$$U_t = \sum_{s=1}^{2} \delta^{s-1} \frac{1}{1-\alpha} \left( c_{t,t+s-1} \right)^{1-\alpha} + \delta^{2} \frac{1}{1-\alpha} \left( c_{t,T_R} - mpg_{t,T_R} \right)^{1-\alpha} \tag{2}$$

Here, $c_{t,s}$ is the consumption level of generation *t* in period *s*, and $mpg_{t,s}$ is the level of the government guarantee for generation *t* in period *s*.

Consumption is a function of the participant's wage and of deductions due to pension contributions (funded and unfunded) and taxes financing government debt. Government debt can arise from financing pension guarantees or from financing intergenerational gaps in the PAYG DB scheme due to aging. These payments are detailed in **Table 1**. In fact, the aging effect operates in two ways: first, it increases the government's real debt cost, as fewer people share a given burden; second, it reduces PAYG benefits for a given contribution rate.

Consistent with the life cycle model of Ando and Modigliani [20], the participant is aware of future interest rate risks and adapts his or her consumption during the working phase accordingly. If the government is expected to collect extra tax payments to finance the interest on its debt, the individual adapts consumption accordingly.
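As a concrete illustration of Eqs. (1) and (2), the following minimal Python sketch evaluates the discounted lifetime utility for given consumption levels. The parameter values and consumption figures are illustrative assumptions, not the chapter's calibration.

```python
def crra(c, alpha):
    """CRRA felicity u(c) = c^(1-alpha) / (1-alpha), with alpha the risk aversion level."""
    return c ** (1.0 - alpha) / (1.0 - alpha)

def lifetime_utility(c_work1, c_work2, c_retire, mpg, delta=0.96, alpha=3.0):
    """Discounted utility over two working periods and one retirement period,
    mirroring the structure of Eq. (2); the guarantee mpg enters the retirement term."""
    return (crra(c_work1, alpha)
            + delta * crra(c_work2, alpha)
            + delta ** 2 * crra(c_retire - mpg, alpha))

# Illustrative values only (not the chapter's calibration)
print(lifetime_utility(c_work1=1.0, c_work2=1.1, c_retire=0.7, mpg=0.0))
```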

#### **3.2 Mixed pension scheme with a dominant funded pillar**

Rates of return are uncertain (*ex ante* expected utility). The GDP per capita growth rate approximates aggregate wage income, following the same method as Matsen and Thøgersen [21] and Wolf and Ocerin [9]. We also assume that the real PAYG rate of return, $g_{s+1}$, is equal to the growth rate of wages, or the change in GDP per capita.


#### **Table 1.**

*Consumption in each pension scheme.*

<sup>2</sup> In the literature, it is common to use the coefficient of relative risk aversion, $RRA \equiv -\frac{U''(c)}{U'(c)}\, c$, for a utility function of this form.

<sup>1</sup> All variables used throughout this paper are expressed in real terms. It is assumed that wage inflation is identical to price inflation.


The parameter $g_t$ describes the evolution of the wage, *W*, which follows a Brownian motion of the following form:

$$\frac{dW(t)}{W(t)} = d\mathbf{g}\_t = \mu\_\mathbf{g} dt + \sigma\_\mathbf{g} dB^W(t),\tag{3}$$

where $\mu_g$ stands for the constant expectation of the instantaneous variation rate of the wage, $\sigma_g$ denotes its constant standard deviation, and $B^W$ represents a standard Brownian motion. The first term is the constant drift, and the second is the volatility term. The term $g_{t+1}$ is the growth of labor income, or the return on human capital.
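To illustrate Eq. (3), the sketch below simulates one wage path with annual steps under an assumed drift and volatility; the numbers are placeholders rather than the chapter's calibration.

```python
import numpy as np

def simulate_wage_path(w0=1.0, mu_g=0.016, sigma_g=0.02, years=46, seed=0):
    """Simulate a wage path dW/W = mu_g dt + sigma_g dB (Eq. 3) with annual steps.
    mu_g and sigma_g are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    dt = 1.0
    shocks = rng.standard_normal(years)
    # Exact discretization of a geometric Brownian motion
    log_growth = (mu_g - 0.5 * sigma_g ** 2) * dt + sigma_g * np.sqrt(dt) * shocks
    return w0 * np.exp(np.cumsum(log_growth))

print(simulate_wage_path()[-1])  # wage level at retirement
```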

The individual pays a fixed contribution rate $\tau$. From that contribution, a share $\gamma$ is invested in a private funded pillar and a share $(1-\gamma)$ finances the unfunded pillar, or public social security. The pension benefit for generation *t* in the retirement period is denoted by

$$p\_{T\_R} = p\_{T\_R}^F + p\_{T\_R}^U. \tag{4}$$

Here, $p_{T_R}^F$ and $p_{T_R}^U$ represent the funded pension and social security (PAYG) benefits, respectively.

We allow a correlation between the GDP per capita and the fund asset return rate, thus

$$dB^W(t)dB^A(t) = \rho\_{w,A}dt,\tag{5}$$

with the condition $-1 \le \rho_{w,A} \le 1$.

We assume a constant social security benefit based on the time of contributions. In each period, the working population's contributions are equal to the total benefit payments to retirees. Consequently, the public unfunded pension benefit is determined using the balanced budget condition

$$\varphi\, \tau^{U} \left\{ \overline{W}_{t+1,T_R} N_{t+1} A + \overline{W}_{t+2,T_R} N_{t+2} A^{2} \right\} = \sum_{n=1}^{N_{T_R}} p_t^{U} \tag{6}$$

Here, $\tau^U$ is the contribution rate to social security, $N_t$ is the size of the generation born in period *t*, and $p_t^U$ is the unfunded pension benefit paid to generation *t* in period $T_R$. The term $\varphi$ is the constant ratio of social security old-age benefits to contributions. The residual share $(1-\varphi)$ of contributions finances other social expenses, such as Medicare, means-tested benefits, the minimum pension guarantee, disability benefits, and unemployment benefits. The tax base in each generation shrinks due to the aging of societies. Consequently, $A$ represents the aging factor of each contributor generation to social security.

Under the assumption of constant population growth, $n_t$, the contribution $\tau^U W_{t,s}$ is paid by generation *t* in time *s*; thus, there is a return of $g_{s+1} = (W_{t,s+1}/W_{t,s}) - 1$. In addition, we adopt the economic principle of Aaron [22] that the notional interest rate, or the population growth rate, is set equal to the growth rate of wages: $n_t = g_t$. Hence, the unfunded benefit at retirement can be described in the following reduced form:

$$p_{t,T_R}^{U} = \varphi(1-\gamma)\tau \sum_{t=1}^{T} \partial_d\, \overline{W}_{t+1,T_R}\left(A + A^{2}\right) \tag{7}$$

where $\partial_d$ is a constant parameter per earning decile that adjusts the benefit to the contribution level. This mechanism is similar to the Notional Defined Contribution (NDC) pension scheme and ensures higher benefits for high earners in relation to their contributions.
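A minimal numerical reading of Eq. (7) is sketched below; the values chosen for φ, τ, γ, the decile adjustment, and the aging factor are illustrative assumptions only.

```python
def unfunded_benefit(avg_wage_next_gen, phi=0.7, tau=0.20, gamma=0.75,
                     decile_adj=1.0, aging=0.96):
    """Reduced-form PAYG benefit in the spirit of Eq. (7):
    phi        - share of contributions paid out as old-age benefits
    tau        - total contribution rate, of which (1 - gamma) funds the unfunded pillar
    decile_adj - NDC-like adjustment tying benefits to own contributions
    aging      - aging factor A shrinking each contributor generation's base."""
    return phi * (1.0 - gamma) * tau * decile_adj * avg_wage_next_gen * (aging + aging ** 2)

print(unfunded_benefit(avg_wage_next_gen=1.5))
```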

The funded-capitalized pillar is a private collective defined-contribution (DC) system with a fixed contribution rate. Individuals start with zero initial asset holdings. Subsequently, the individual adds the fraction $\gamma\tau w_t$ to his or her accumulation during the working phase, which is invested in a constant portfolio mix of financial assets (equities, bonds, etc.). This accumulation earns an average annual rate of return of $r_t$, which also follows a Brownian motion of the following form:

$$dr_t = \mu_r\, dt + \sigma_r\, dB^{A}(t) \tag{8}$$

Here, $\mu_r$ denotes the constant expectation of the instantaneous asset return rate, $\sigma_r$ stands for its constant standard deviation, and $B^A$ indicates a standard Brownian motion. The first term is the constant drift, and the second is the volatility term.

The funded pillar is equal to the accumulated capital from the contributions to the private collective defined-contribution fund during every working period until retirement (*TR*). The real capital is given by

$$p_{T_R}^{F} = \left(1 - T^{f}\right)\left(1 - I^{f}\right)\gamma\tau \sum_{s=t}^{T_R} W_{t,s}\, r^{\,T_R - s} \tag{9}$$

Here, $T^f$ is the effective tax rate on the funded pillar's old-age benefits, and $I^f$ is the fraction of contributions that finances insurance components within the pension fund, such as disability coverage. The funded fund's liabilities are based on current and future retirees' benefit payments. The funded benefit can be described more specifically as

$$p_{T_R}^{F} = \gamma\tau W_{t}\, r_{t+1} r_{t+2} + \gamma\tau W_{t+1}\, r_{t+2} \tag{10}$$

Due to the assumption that there is only one period of retirement, it is not necessary to specify how the pension capital of the funded pillar is annuitized or amortized, that is, transformed into annual pension installments.
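The following sketch evaluates the funded-pillar benefit in the spirit of Eqs. (9) and (10); the contribution split, tax rate, insurance fraction, and per-period gross returns are assumed values.

```python
def funded_benefit(w1, w2, r2, r3, gamma=0.75, tau=0.20, tax_f=0.15, ins_f=0.05):
    """Funded-pillar capital at retirement: contributions gamma*tau*w from each
    working period, compounded by the gross returns of the remaining periods,
    net of the benefit tax and the insurance fraction (illustrative parameters)."""
    gross = gamma * tau * (w1 * r2 * r3 + w2 * r3)
    return (1.0 - tax_f) * (1.0 - ins_f) * gross

# r2, r3 are gross returns over the second and third life periods (assumed)
print(funded_benefit(w1=1.0, w2=1.1, r2=1.9, r3=1.9))
```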

#### **3.3 Pension guarantee**

The government considers implementing a minimum pension guarantee when imposing the funded pension scheme. The periodic guarantee is set at the poverty level, meaning 0.6 of the median earnings decile. We calculate the cost of the guarantee as

$$\text{Guarantee cost at time } t = \text{poverty line at time } t - \left(p_t^{F} + p_t^{U}\right) \tag{11}$$

The poverty line itself grows every period with the GDP per capita growth rate. The guarantee cost, however, depends on income inequality in the market and stays constant as a percentage of GDP. The guarantee cost is financed by the government in the form of a tax levied on future generations.
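Eq. (11) can be expressed in code as below; a floor at zero is added here on the assumption that no guarantee cost arises when the combined benefits already exceed the poverty line.

```python
def guarantee_cost(poverty_line, p_funded, p_unfunded):
    """Periodic guarantee cost per Eq. (11): the top-up needed to lift combined
    old-age benefits to the poverty line (0.6 of the median earnings decile)."""
    return max(poverty_line - (p_funded + p_unfunded), 0.0)

print(guarantee_cost(poverty_line=0.6, p_funded=0.35, p_unfunded=0.15))
```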

#### **3.4 PAYG DB pension scheme**

Pension benefits are calculated using the same method as for the unfunded pillar described above. The difference is that all contributions go to the unfunded pillar ($\gamma = 0$). In addition, retirees benefit from the constant contribution level. The government finances, through debt, the exposure to aging, which reduces the intragenerational financing base.

$$P\_{t,T\_R}^{DB} = \rho \tau \sum\_{t=1}^{T} \partial\_d \overline{W}\_{t+1} \* \mathbf{2} \tag{12}$$

As the government keeps retirees' benefits at the same level as before the transition, the shrinking tax base translates into a fiscal expenditure. That expenditure is financed by future generations through tax payments in the amount of

$$p_{t,T_R}^{DB}\ \text{government share} = \varphi\tau \sum_{t=1}^{T} \partial_d\, \overline{W}_{t+1}\left(2 - \left(A + A^{2}\right)\right) \tag{13}$$

#### **3.5 Government debt**

The government finances two different obligations through debt and future taxes. The first is the guarantee cost in the mixed funded pension scheme. The second is the influence of aging on the intragenerational tax base from generation to generation.

For each of these expenses, we assume a debt cycle of four periods. In the first period, the fiscal expense is realized. Over the next two periods, the working population pays the interest component as tax, while in the fourth period it also repays the principal in addition to the periodic interest payment. In total, in each period, the working generation pays three interest components of past debts and a single principal repayment of past debt.
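The four-period debt cycle can be read as a payment schedule per fiscal expense, as in the sketch below; the debt price is only an illustrative assumption.

```python
def debt_service_schedule(expense, r_gov=0.005):
    """Tax payments implied by one fiscal expense under the four-period debt cycle:
    nothing when the expense is realized, interest only in the next two periods,
    and interest plus principal in the fourth (r_gov is an assumed debt price)."""
    interest = r_gov * expense
    return [0.0, interest, interest, interest + expense]

# With a new expense every period, a working generation simultaneously services
# three interest components and one principal repayment of past debts.
print(debt_service_schedule(expense=0.03))
```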

#### **3.6 Different earning cohorts**

We allow different preferences among earning cohorts. In fact, this diversity is one of the most important novelties of this research. We assume that high-earning cohorts benefit from a higher share of GDP growth than low earners, in increasing order. In parallel, high earners bear a progressively higher share of tax payments. For example, the tax burden on decile 4 is only 5% of the payment, while it is 30% on the highest-earning decile. **Figure 1** summarizes the differentiation across earning deciles.

We value the preferences of earning cohorts over the different pension schemes by the change in average utility computed under each of the three pension schemes analyzed. For simplicity, we group these preferences into deciles 1–4 for low-earning cohorts and 7–10 for high-earning cohorts.

**Figure 1.** *Earning deciles.*

#### **4. Simulation and calibration**

The stochastic GDP per capita growth makes the periodic wage, the poverty line, the defined benefit pension, and social security stochastic variables, while the market yield affects the funded pension pillar stochastically. We use Monte Carlo simulations to simulate the level of the guarantee cost in each generation and the level of governmental debt due to imposing defined benefits in each generation. Another set of Monte Carlo simulations is conducted to compute the preferences of each earning cohort in each generation among funded pension schemes, funded schemes with guarantees, and defined benefit pension schemes.

For each generation, the preferred pension scheme depends on the utility of each earning cohort in that generation. For comparability, we compute the relative preference of the mixed pension scheme over the DB scheme and, respectively, of the mixed pension scheme with a guarantee over the DB scheme. Monte Carlo simulations generate these pairs of ratios.

In analyzing the results, we differentiate between low- and high-earning cohorts. For each set of results, we discount the preferences of the four generations into a single number.
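A skeleton of this Monte Carlo exercise is sketched below. The return distributions and the toy utility functions passed in are illustrative stand-ins for the model equations above, not the chapter's implementation.

```python
import numpy as np

def mean_relative_preference(u_funded, u_db, n_sims=2100, seed=1):
    """Draw GDP-per-capita growth and the market yield, evaluate lifetime utility
    under each scheme, and average the relative preference (positive values lean
    toward the funded scheme). Distributions are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    g = rng.normal(0.016, 0.01, n_sims)    # GDP per capita growth draws
    r = rng.normal(0.0374, 0.08, n_sims)   # net market yield draws
    pref = (u_funded(g, r) - u_db(g)) / np.abs(u_db(g))
    return pref.mean()

# Toy stand-ins for the model's utility equations, only to make the sketch run
print(mean_relative_preference(
    u_funded=lambda g, r: -np.exp(-(1.0 + r)),
    u_db=lambda g: -np.exp(-(1.0 + g))))
```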

We calibrate the model to the average Western OECD country, using its updated database [4]. In the base scenario, the government's capital cost is 0.5%, GDP per capita grows at 1.6% per year, and the average net pension market yield is 3.74%. The contribution rates to the pension pillars are derived from countries such as Denmark and Israel, which run dominant funded pension schemes [4]. We take into consideration the aging trend in Western countries and assume a strong aging influence, which is conservative when analyzing the rush toward the funded scheme; in that case, similar to Germany and Spain, the dependency ratio increases by 0.4% every year. A sensitivity analysis is conducted to map the trends of preferences as a function of risk aversion and the interest rate gap. We summarize the calibration variables in **Table 2**.

**Table 2.**
*Calibration.*

## **5. Results and insights**

While no debt finances the funded pension scheme, a small debt finances the DB pension scheme (the aging effect), at a constant percentage of GDP. We map a higher debt level financing the guarantee, which declines over time if the GDP per capita growth rate exceeds the government's risk-free rate ($g > r_f$).

In the Western market, the government interest rate is generally lower than the GDP per capita growth rate, while the market yield ($r$) is higher than both. When the difference between the market yield and the GDP per capita growth increases, markets will prefer to shift to a funded pension scheme, and vice versa. Here, we point out that the government's capital price is also an important factor, as it affects the preferences of future generations. A coherent pension system, which considers multiple players' preferences, cannot avoid the tax/PAYG burden levied on the working population or on future generations in the form of cyclical tax payments.

#### **5.1 PAYG DB scheme vs. funded pension scheme**

For each generation, we check the preferences between the PAYG DB and the funded pension scheme via 2100 Monte Carlo simulations. Each simulation solves the OLG model under the aforementioned assumptions. **Figure 2** describes these preferences by earning cohort and as a function of the rate-of-return gap (GDP per capita growth minus the government interest rate). The more positive the preference value, the more the preference tends toward the funded scheme; by the same logic, the more negative the value, the stronger the preference for the DB pension scheme.

As expected, high earners prefer the funded pension scheme, while low earners tend to prefer the DB scheme. For high earners, the reasons for this are the potential for higher benefits and the avoidance of financing pension gaps of unfunded transfers due to aging and the shrinking labor force.

Low-earning cohorts prefer the DB pension system because it provides insurance, although the benefits in the funded scheme are higher on average. As time goes by, in both earning cohorts, the attractiveness of the funded scheme increases, since the average return of the funded scheme is higher than the GDP per capita growth and, naturally, long-term insurance carries less weight in the utility measure.

When the risk aversion coefficient increases from 3 to 5, low earners become almost indifferent between funded and unfunded pension schemes. This is because, at high risk aversion levels, participants put considerably more weight on their current consumption than on their old-age benefits. Since working-phase consumption does not change, the change in total utility is almost constant.

According to **Figure 3**, for high earners, the preference for unfunded pension schemes is dramatic. That tendency moderates across generations and when the government's debt cost increases. In other words, even when the tax burden due to aging falls on high earners' consumption and their old-age benefits are lower than in the funded pension scheme, they still strongly prefer unfunded pension schemes along most of the returns-gap range. Additionally, when risk aversion increases, high earners' preference for the PAYG DB pension system increases relative to the mixed pension with a pension guarantee. We explain this by the high insurance embedded in the first option and its lower tax burden. That conclusion is highly important, mainly in times of market turmoil.



**Figure 2.**
*Generations' preferences of PAYG DB vs. funded scheme in the base scenario (α = 3).*

**Figure 3.** *High earners' preference when risk aversion increases.*

#### **5.2 PAYG DB scheme vs. mixed pension scheme with pension guarantee**

**Figure 4** compares, under the base scenario, the preferences for the PAYG DB scheme and the mixed pension scheme with a pension guarantee. According to the results, for low earners (the blue line) there is not much difference between the two options. The benefit level is quite similar; in both cases there is an insurance component, and in both cases the tax burden does not fall on this earning cohort's shoulders. As the gap between the GDP per capita growth and the government's interest rate decreases, the discounting factor diminishes and the attractiveness of the PAYG DB decreases. It is most interesting to understand the results for high earners, who finance the insurance components in both of these pension systems. It is significant that high earners would prefer the PAYG DB pension scheme over the alternative. The reason for this is mostly the high financing cost of the guarantee. **Figure 4** shows that when the government's interest rate increases (a small gap), the preference for the PAYG DB increases accordingly, avoiding a higher tax burden.

**Figure 4.**
*The preference between PAYG DB and the mixed pension system.*

As we allow differentiation in deciles' wealth growth, income inequality increases with time. The poverty line is indexed to the GDP per capita, while high-earning deciles' wealth grows faster. This makes the guarantee's price relatively lower across generations, realized as a decreasing percentage of total GDP.

In general, if the GDP per capita growth is higher than the government interest rate, the central planner clearly prefers to accumulate debt and roll it over the years, as the principal and interest payments decrease as a percentage of GDP. Across generations, the preference for the DB pension scheme tends to decrease as the average return effect increases. However, for high earners, the choice between the two pension schemes is unambiguous: they prefer the DB pension scheme because of the lower tax burden during the working phase. In other words, they prefer to receive lower old-age benefits than to pay the relatively high tax burden due to the guarantee.

#### **5.3 Finding an equilibrium point**

Equilibrium in pension systems is not only an economic question but also involves social targets [23]. Even when poverty alleviation is highly weighted among the central planner's considerations, it is not straightforward to implement a mixed pension system with a minimum guarantee. Although low earners only slightly prefer the unfunded scheme over the mixed scheme with the guarantee, high earners significantly prefer the unfunded scheme and avoid financing the high costs of the guarantee. Consequently, among these two options, from the wider perspective of all players, the system should be set at the PAYG DB pension system.

That conclusion is certainly relevant in an aging society with the typical economic characteristics of Western countries. In other words, the potential for higher old-age benefits in the funded pension scheme is offset by the tax burden needed to fund social targets.

In this research, we show that the PAYG DB is a common equilibrium even when relaxing the assumption of social targets. One can see in **Figure 2** that when the gap between the GDP per capita growth and the government debt cost is large (1.3–1.5%), the players prefer the PAYG DB. That simple equilibrium is also relevant when risk aversion increases or the yield's standard deviation increases. Naturally, in these situations, participants prefer safer benefits even at the cost of lower consumption during the working phase. That conclusion is most relevant when markets are unstable, for example, during the COVID-19 pandemic crisis.

More complex scenarios arise when the gap between the GDP per capita growth and the government debt price narrows. For example, in **Figure 2**, when the gap is 1.1%, low earners prefer the unfunded pension scheme (a measure of 2.8%). Similarly, high earners prefer the funded scheme (a measure of 7.1%) while resisting the mixed scheme (a measure of 44.6% in **Figure 4**). Since neither of the players prefers the mixed pension system, it is unlikely to be among the realistic equilibria.

Between the unfunded and the funded pension scheme, we seek a point satisfying the players' interests, which in turn increases the chances of system sustainability. Given the macroeconomic parameters, we seek a new mixed pension system, which includes an "unfunded box" shifted from high earners to low earners at retirement. That shift compensates low earners for their excess market risk and their limited ability to hedge it. From another economic angle, high earners finance this compensation because contribution rates are set close to the optimum for high earners and are sub-optimal for low earners [10, 11, 18]. In fact, this shift creates equilibrium in the spirit of the "externalities" theory and alleviates the inherent socio-economic anomaly in funded pension schemes, which favors high earners.

**Figure 5.**
*Finding an equilibrium point in the funded pension scheme.*

To find the size of the unfunded "box," we analyze the preferences when low earners benefit from it and high earners finance it. **Figure 5** plots the convergence process to equilibria based on the funded scheme combined with the unfunded box. We learn that even with a small amount of shifting (a low box size), high earners would prefer to stay in the PAYG DB scheme. That holds even for debt levels far lower than in the PAYG DB base scenario. For example, in panel A, when the returns gap is 1.1%, high earners prefer the PAYG DB even with a minimal box (1% of GDP). In panel B, when the gap is narrower, the equilibrium is set at the funded scheme with an unfunded box of 2% of GDP. In these two cases, one can conclude that the equilibrium is extremely fragile, meaning it is in practice the PAYG DB scheme. In panel C, when the returns gap is 0.7%, the suggested equilibrium is the funded scheme with an unfunded box of 3% of GDP. From that gap level and lower, the model suggests an equilibrium involving the funded pension scheme.

### **6. Discussion**

The influence of aging is perceived as an intergenerational burden [24], which increases over the years. That was one of the World Bank's core arguments in convincing economies to shift to funded pension systems during the 1990s [25]. The motivation to converge to an equilibrium belongs first to the government itself, which seeks to avoid the fiscal expenses of reversals and to ensure political support from all players [26].

The fiscal concerns due to the aging process are indeed intuitive; however, they might push governments to endorse funded pension schemes too quickly. According to the findings, the insurance effect of the unfunded pension scheme is beneficial even at the cost of a shrinking tax base. A low-interest-rate environment and a sufficient gap between the GDP per capita growth and the government's interest rate mostly favor keeping unfunded pension schemes. In markets with a narrower gap, equilibria can be established with a funded pension scheme plus some unfunded box strengthening low earners' pensions at retirement. One has to mention that the equilibrium with the funded scheme is mostly fragile: a slight change in the macroeconomic variables will cause even high earners to prefer the unfunded pension scheme. In addition, preferences for unfunded schemes strengthen in times of unstable markets.

In addition to the results supporting a mixed pension design with a risk-sharing mechanism, we note another fiscal motive for the government to avoid an extensive funded scheme, surprising as it may sound. Altiparmakov [8] shows that CEE countries reverted to unfunded pension schemes to control all sorts of their citizens' contributions and taxes. In other words, in times of financial crisis, governments wish to raise cheap money, and unfunded contributions are a fast way to do so.

### **7. Conclusion**

The key feature of this research is the consideration of multiple players in the field, as the pension system affects different generations and earning cohorts. By treating society as one single entity managing financial risks, we may lose the opportunity to uncover other interests and miss potential equilibria in the markets. Seeking stable pension markets is one of the top priorities of central planners, especially during a period of uncertainty in other markets due to the pandemic and the global debt crisis.

While low earners' preference for the unfunded pension scheme is clear, high earners' preferences are the most interesting to examine. Here, we consider assumptions of mutual risk-sharing among earning cohorts, solving the inherent socio-economic anomaly in the funded scheme, which favors high earners at the expense of low earners [10, 11, 18].

The findings indicate that central planners must not rush toward funded pension schemes even though societies are aging. The rush toward funded pension schemes in aging markets must not become a way for governments to sidestep the multiplayer game and ignore other macroeconomic parameters, such as the debt level, the debt price, and GDP per capita growth. Here, we mention the global trend of shifting to funded schemes even in non-aging markets, such as Israel [27]. We find in this chapter that the unfunded pension scheme should be considered the most efficient for all actors under a wide variety of macroeconomic conditions, especially when interest rates are very low, as they are in this period.

In times of pandemic, central planners have to minimize the possibility of unstable pension markets and reversals. The period itself strengthens the motive to find a sustainable equilibrium in the market. In addition, governments have to take into account the fear in the markets at such times. In our model, this is captured by a higher standard deviation of the market yield and higher risk aversion. Both imply higher chances of an equilibrium in the unfunded pension scheme. These results hold despite the aging of societies.

#### **Classification**

JEL: D14, E21, E61, G11, G18, G22, G32, H23

### **Author details**

Ishay Wolf\* and Smadar Levi The Ono Academic College, Tel Aviv, Israel

\*Address all correspondence to: ishay.wolf1@gmail.com

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


## **References**

[1] Clements B, Dybczak K, Gaspar V, Gupta S, Soto M. The fiscal consequences of shrinking and ageing populations. Ageing International. 2018;**43**:391-414. DOI: 10.1007/s12126-017-9306-6

[2] Feldstein M. Structural reform of social security. Journal of Economic Perspectives. 2005;**19**(2):33-55. DOI: 10.1257/0895330054048731

[3] Holzmann R, Hinz R. Old-Age Income Support in the 21st Century: An International Perspective on Pension Systems and Reform. Washington, DC: The World Bank; 2005. DOI: 10.1596/ 0-8213-6040-X

[4] OECD. Pensions at a Glance 2019: OECD and G20 Indicators. Paris: OECD Publishing; 2019. DOI: 10.1787/b6d3dcfc-en

[5] Barr N, Diamond P. Reforming pensions: Principles, analytical errors and policy directions. International Social Security Review. 2009;**62**(2):5-29. DOI: 10.1111/j.1468-246X.2009.01327.x

[6] Ebbinghaus B. Multi-pillarisation remodelled: The role of interest organizations in British and German pension reforms. Journal of European Public Policy. 2019;**26**(4):521-539. DOI: 10.1080/13501763.2019.1574875

[7] Milev J. The pandemic crisis and the resulted risks for the fully funded pension funds in Central and Eastern Europe. Research Papers of UNWE. 2021;**2**(1):203-216

[8] Altiparmakov N. Another look at causes and consequences of pension privatization reform reversals in Eastern Europe. Journal of European Social Policy. 2018;**28**(3):224-241. DOI: 10.1177/0958928717735053

[9] Wolf I, Ocerin JMC. The transition to a multi-pillar pension system: The

inherent socio-economic anomaly. Journal of Financial Economic Policy. 2021. DOI: 10.1108/JFEP-07-2020-0162

[10] Wolf I, Del Rio LCL. Fundedcapitalized pension designs and the demand for minimum pension guarantee. Public and Municipal Finance. 2021;**10**(1):12-24. DOI: 10.21511/pmf.10(1).2021.02

[11] Wolf I, Del Rio LCL. Benefit adequacy in funded pension systems: Micro-simulation of the Israeli pension scheme. International Journal of Economics & Business Administration. 2021;**9**(2):143-164. DOI: 10.35808/ijeba/694

[12] Feher C, Bidegain I. IMF Report. July 2020

[13] Gruber J, Wise DA. Social Security and Retirement Around the World. Chicago: University of Chicago Press; 2008. DOI: 10.7208/9780226309996

[14] Heneghan M, Orenstein MA. Organizing for impact: International organizations and global pension policy. Global Social Policy. 2019;**19**(1-2):65-86. DOI: 10.1177/1468018119834730

[15] Wolf I. Political stress and the sustainability of funded pension schemes: Introduction of a financial theory. Journal of Risk and Financial Management. 2021;**14**(11):1-12. DOI: 10.3390/jrfm14110525

[16] Ortiz I, Durán-Valverde F, Urban S, Wodsak V. Reversing Pension Privatizations: Rebuilding Public Pension Systems in Eastern Europe and Latin America. Geneva: International Labour Organization; 2018. DOI: 10.2139/ssrn.3275228

[17] Cipriani G, Fioroni T. Social security and endogenous demographic change: Child support and retirement policies\*.

Journal of Pension Economics and Finance. 2021:1-19. DOI: 10.1017/ S1474747220000402

[18] Wolf I, Del Rio LCL. Pension reforms and risk sharing cycle: A theory and global experience. International Journal of Economics & Business Administration (IJEBA). 2021;**9**(1): 225-242

[19] Knell M. The optimal mix between funded and unfunded pension systems when people care about relative consumption. Economica. 2010; **77**(308):710-733. DOI: 10.1111/ j.1468-0335.2009.00797.x

[20] Ando A, Modigliani F. The "life cycle" hypothesis of saving: Aggregate implications and tests. The American Economic Review. 1963; 53(1): 55-84. Available from http://www.jstor.org/ stable/1817129 [Retrieved: February 23, 2021]

[21] Matsen E, Thøgersen Ø. Designing social security—A portfolio choice approach. European Economic Review. 2004;**48**(4):883-904. DOI: 10.1016/j.euroecorev.2003.09.006

[22] Aaron H. The social insurance paradox. Canadian Journal of Economics and Political Science. 1966; **32**(3):371-374. DOI: 10.2307/139995

[23] Barrientos A. Social protection in Latin America: One region two systems. In: Cruz-Martínez G, editor. Welfare and Social Protection in Contemporary Latin America. London: Routledge; 2019

[24] Bohn H. Should public retirement plans be fully funded? Journal of Pension Economics and Finance. 2011; **10**(2):195-219. DOI: 10.1017/ S1474747211000096

[25] Noy S. Healthy targets? World Bank projects and targeted health programmes and policies in Costa Rica, Argentina, and Peru, 1980–2005.

Oxford Development Studies. 2018; **46**(2):164-183. DOI: 10.1080/ 13600818.2017.1346068

[26] Guardiancich I, Guidi M. The political economy of pension reforms in Europe under financial stress. Socio-Economic Review. 2020:mwaa012. DOI: 10.1093/ser/mwaa012

[27] Lurie L. Pension privatization in Israel. In: Paz-Fuchs A, Mandelkern R, Galnoor I, editors. The Privatization of Israel. New York: Palgrave Macmillan; 2018. DOI: 10.1057/978-1-137-58261-4\_5

#### **Chapter 9**

## Investigating the Viability of Applying a Lower Bound Risk Metric for Altman's z-Score

*Hardo Holpus, Ahmad Alqatan and Muhammad Arslan*

#### **Abstract**

The study aimed to build a risk metric for finding the lower boundary limits of Altman's z-score bankruptcy model. The new metric incorporates the volatility of Altman's variables and predicts the risk of a firm going bankrupt in adverse situations. The research examined whether the new risk metric is feasible and whether it provides satisfying outcomes compared to Altman's z-score values for the same period. The methods used to conduct the analysis were based on the Value at Risk methodology. The main tools used in constructing the model were Monte Carlo simulation, the Lehmer random number generator, normal and t-distributions, matrices, and Cholesky decomposition. The sample firms were selected from the FTSE 250 index. The variables used in the analysis were all of Altman's z-score variables, and the period under observation was 2001–2007. The selected risk horizon was the first quarter of 2008. The first results were promising and showed that the model works to the specified extent. The research demonstrated that Altman's z-score does not provide a full and accurate overview. Therefore, the lower bound risk metric developed in this research produces valuable supplementary information for well-informed decision making. To verify the model, it must be back- and forward tested, neither of which was carried out in this research. Furthermore, the research elaborated on limitations and suggested further improvement options for the model.

**Keywords:** risk metric, Altman's z-score, Value at Risk, bankruptcy prediction, Monte Carlo simulation, Lehmer RNG, Cholesky decomposition

#### **1. Introduction**

This study investigates a perspective of using a lower bound risk metric on Altman's z-score variables to determine the lower limits for Altman's z-score.

The lower bound risk metric is intended to help in assessing default risk during economic distress or in situations of extreme volatility and business calamity. Altman's model uses values for its variables from the past year's financial statements. Based on these values, the result of Altman's formula shows whether a company is in the distress zone and whether bankruptcy is expected in the next two years. In situations where a company's financial results are very volatile, Altman's formula may show a good result, but the result and the health of the company may degrade very rapidly because of the volatility inherent to the company. The volatility may result from several factors, including bad management or the cyclicality of the business. In such instances, Altman's z-score alone is not the best option for depicting the

riskiness of the business. Therefore, another metric is needed that considers the volatility of the variables and can estimate the risk of a potentially sharp drop in Altman's z-score value. The lack of proper risk estimation in Altman's z-score model prompted this study to examine methods for constructing the lower bound risk metric.

The risk metric is extensively based on the Value at Risk (VaR) methodology. VaR is used to estimate the maximum loss of value over a certain period of time at a given probability level. The VaR methodology was developed by J. P. Morgan during the 1980s and 1990s to measure the riskiness of its assets as required by the Basel I set of international banking regulations [1]. Since then, it has become the central theory in risk management in banking, insurance, and asset management. Stress testing complements VaR by looking at the riskiest yet still plausible events. No prior use of a model to predict Altman's z-score variables in distress situations was identified in the literature. Previous studies mainly suggest new models or other improvements to Altman's formula. Altman's z-score gives a default risk figure based on historical information. It does not try to predict extreme situations in which a firm's position may deteriorate much faster than the equation can predict from the historical information.

The aim of this study is to use the VaR methodology on the variables in Altman's z-score and to analyse whether the result is a helpful risk metric that describes the depth of variance and adds a factor of predictability. To do that, Altman's z-score results for the FTSE 250 companies during the crisis of 2008 are compared with the results produced by the new risk metric using pre-crisis data in Altman's formula. The assumption is that Altman's variables in a modelled distress situation help to better understand the depth of insolvency risk a company carries. The study uses London Stock Exchange FTSE 250 index companies to carry out the empirical research. Panel data from Capital IQ is used in the Monte Carlo simulation to model the base distribution of Altman's variables. The simulated values of the variables are used to form the new risk metric. It gives the lower bound values for Altman's z-scores, which reflect the underlying volatility and the worst-case scenarios at defined confidence levels. To confirm its validity, the results from the risk metric and stress tests should be back- and forward tested. The analysis reveals the limitations of the study, but it also points out options for further improvement.

#### **2. Literature review and research hypotheses**

#### **2.1 Bankruptcy prediction models**

Bankruptcy prediction models can be classified into three general groups [2]: the statistical models [3–9], the Artificial Neural Network (ANN) models [10–13] and the kernel based learning models [14–16].

The statistical methods have the longest application history and are still the most common today. There are many methods, models, and variations. Some of the prevalent methods are simple ratio analysis [5], univariate analysis [4], multivariate analysis [3], and the logit method [7, 9]. Soon after Beaver's research, Altman [3] used multivariate discriminant analysis (MDA) and was able to obtain more accurate results and predict failure over a longer time horizon. His model received wide acceptance, and since then the multivariate approach has been broadly used to develop bankruptcy models for different sectors, indexes, countries, and so on. Even though artificial intelligence and machine learning methods achieve better results, the statistical models are fairly simple to understand and have continued to prove highly reliable.


#### **2.2 Altman's Z-score bankruptcy model**

The aim of this research is not to use the best possible model for the lower bound risk metric calculations, but rather the most common and widely used bankruptcy model that is appropriate to adapt. On that basis, the chosen model is Altman's z-score bankruptcy model [17].

He introduced five financial ratios with the highest default prediction power: *working capital to total assets* ($x_1$), *retained earnings to total assets* ($x_2$), *earnings before interest and taxes (EBIT) to total assets* ($x_3$), *market value of equity to book value of total liabilities* ($x_4$), and *sales to total assets* ($x_5$):

$$\text{Z-score} = 1.2x_1 + 1.4x_2 + 3.3x_3 + 0.6x_4 + 0.999x_5 \tag{1}$$

Altman [3] divided the formula's results into three categories. Firms that achieve a score above 2.99 are considered to be in the safe zone, far from default; results in the range of 1.81–2.99 are in the grey zone and need attention; results below 1.81 are in the distress zone, and such firms risk becoming insolvent.
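For reference, a short sketch of the score and its zone classification, using the original coefficients (with sales to total assets weighted at roughly 1.0), could look as follows; the input ratios in the example are hypothetical.

```python
def altman_z(x1, x2, x3, x4, x5):
    """Original Altman (1968) z-score for publicly traded manufacturing firms."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 0.999 * x5

def zone(z):
    """Map a z-score to Altman's distress / grey / safe zones."""
    if z < 1.81:
        return "distress"
    if z <= 2.99:
        return "grey"
    return "safe"

z = altman_z(x1=0.15, x2=0.20, x3=0.10, x4=1.2, x5=1.5)
print(round(z, 2), zone(z))
```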

Altman found his model to be over 70% accurate in predicting bankruptcy two years before a default and 95% accurate one year before a default [3]. In 6% of cases, the model predicted a default for a company that survived. The accuracy rate diminished markedly after two years. Heine [18] investigated the accuracy of Altman's model over a period of 31 years, from 1968 to 1999, and found that the model was still working fairly well. His findings showed over 80% accuracy in predicting defaults one year before the actual event. Regardless of these results, Hillegeist et al. [19] argue that book-value-based models are by design limited in predicting defaults, as annual reports are prepared on a going-concern basis. There have been numerous other comparisons that find contradictory as well as supportive evidence for using book and financial ratios, for instance, the works of Balcaen and Ooghe [20], Appiah et al. [21], Mousavi et al. [22], Charalambakis [23], and Agarwal and Taffler [24].

Commonly, insolvency models use annual data, but according to [25] the accuracy of book-value models could be increased by using quarterly data. They did not find a significant difference between the quality of quarterly and annual reports. The multivariate discriminant analysis conducted in their research provided more accurate and timely results using unaudited quarterly reports instead of annual reports alone.

#### **2.3 Value at risk**

VaR has three main methods of calculation: the historical, the variance–covariance, and the Monte Carlo (MC) method [26]. Each of the methods has its own advantages and drawbacks. The historical method is easy to apply to collected data and does not need a distributional assumption. The historical simulation assumes that future events can be described by past events and that the recent past represents the near future fairly accurately [1]. Such an assumption is better than no assumption, but the historical data may contain events that are not relevant for the future and should thus be treated with care. The variance–covariance method is easy to compute and to use for managing portfolio risk. On the other hand, its biggest deficiency is the failure to capture fat tails in the distribution [27]. A normal distribution creates a bias to underestimate the true VaR. The Monte Carlo (MC) method overcomes most of the mentioned deficiencies but has some of its own. The MC method is flexible and can be used for shorter and longer time periods, and it is considered the most accurate method of the three for longer time periods ([1], p. 270). Another advantage of MC is that it can be applied to non-normal distributions, which more accurately describe many practical applications [28]. The two main setbacks are the model risk and the time needed to compute MC simulations ([1], p. 270).
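A minimal Monte Carlo VaR sketch under an assumed normal return process is given below; the parameters are illustrative, and a t-distribution could be substituted for the normal draws to capture fatter tails.

```python
import numpy as np

def mc_var(mu_daily, sigma_daily, horizon_days=63, level=0.95,
           n_sims=100_000, value=1.0, seed=42):
    """Monte Carlo VaR: simulate end-of-horizon values from assumed daily log
    returns and read VaR off the loss quantile at the chosen confidence level."""
    rng = np.random.default_rng(seed)
    daily = rng.normal(mu_daily, sigma_daily, size=(n_sims, horizon_days))
    end_values = value * np.exp(daily.sum(axis=1))
    losses = value - end_values
    return np.quantile(losses, level)

# One-quarter horizon (about 63 trading days), illustrative return parameters
print(mc_var(mu_daily=0.0002, sigma_daily=0.01))
```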

Breuer [29] stresses the setbacks that arise from the assumptions used in calculating VaR. The first issue he raises is the assumption that market conditions are static throughout the future. Such assumptions are correct only if future market characteristics repeat themselves and are similar to present or historical values. It should also be noted that every risk prediction measure has to cope with this dilemma to some degree; it is not specific to VaR. The second major problem Breuer mentions is the assumption in many VaR models that the data follow a normal multivariate distribution. This holds true only in some cases and in the majority of situations produces an imprecise outcome. He demonstrates such a case with a very illustrative example of the 1987 market crash, in which stock prices fell by 10 to 20 standard deviations. Considering that a seven-standard-deviation fall under a normal multivariate distribution would happen on average one day in three billion years, the assumption of normality in this particular case seems exceptionally poor. Nonetheless, VaR is flexible enough to allow the use of distributions other than the normal, although the prevalent practice is to use a normal distribution for its simplicity and ease of use. Apart from the previously mentioned setbacks, Krause [30] discusses the limitations in choosing a confidence level and a horizon. The longer the horizon, the bigger the variance and the less reliable the outcome. The selected confidence level also sets the quantile value beyond which VaR does not describe the losses. To understand the most extreme scale of losses, there are supplementary methods such as the expected shortfall (ES), maximum loss, or other stress tests [1]. However, the applicability of stress tests depends on the characteristics of the data, which may limit their suitability in certain analyses.

#### **2.4 Stress testing**

Nearly all models try to predict bankruptcy based on trends in business operations, finances, and other accessible information. Much less has been done to investigate hypothetical stress or worst-case scenarios that happen to every business during its lifecycle.

Stress testing is essentially a complementary measure to VaR that captures the extreme losses in the tails. Nobody can say in advance what the probability and depth of such extreme scenarios will be. An important element is the plausibility of the scenario: although such scenarios are rare, they do happen, and the case should be neither too shallow nor almost impossible in terms of severity. The main difference between VaR and historical stress testing methods is the time period. VaR uses relatively short time periods, usually from a day up to one year, and some VaR models weigh recent times more heavily. Historical stress testing, by contrast, uses periods of the distant past and includes market crashes and periods of extreme volatility. Breuer [29] examines four common types of stress testing methods: historical, expected shortfall, maximum loss, and the Monte Carlo method. He finds that the choice of method depends on the aim and the data, but acknowledges that the Monte Carlo method performed relatively better than other methods in many observed cases.

#### **2.5 Predictive power and model verification**

A model risk always remains, but it can be minimized by using back- and forward testing for model validation. Back-testing compares the actual outcomes with the predicted VaR estimate before the sample period [26]. Forward testing compares actual outcomes with an estimate after the sample period. If the losses exceed the VaR estimate more often than the set confidence level allows, the model is not accurate and needs modification [31]. Halilbegovic and Vehabovic [26] highlight that the values which exceed VaR should be equally distributed along the horizon and be independent. The first and most referenced back-test is Kupiec's [32] "proportion of failures" coverage test. The banking industry also uses the "traffic lights" test published by the Basel Committee [33].
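For illustration, a minimal sketch of Kupiec's proportion-of-failures test follows; the likelihood-ratio form and the chi-square reference distribution follow the standard formulation, while the observation count and the number of exceedances are invented inputs.

```python
# A minimal sketch of Kupiec's proportion-of-failures (POF) back-test [32].
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs: int, n_exceed: int, p: float):
    """Return the POF likelihood-ratio statistic and its chi-square p-value."""
    pi_hat = n_exceed / n_obs
    # Log-likelihood under the stated coverage p and under the observed rate.
    ll_null = (n_obs - n_exceed) * np.log(1 - p) + n_exceed * np.log(p)
    ll_alt = (n_obs - n_exceed) * np.log(1 - pi_hat) + n_exceed * np.log(pi_hat)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, 1.0 - chi2.cdf(lr, df=1)

lr, p_value = kupiec_pof(n_obs=250, n_exceed=18, p=0.05)   # hypothetical figures
print(f"LR = {lr:.2f}, p-value = {p_value:.3f}")           # small p-value -> reject the model
```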

#### **2.6 Research hypotheses**

The lower bound risk metric is expected to provide a lower limit for Altman's z-score within the selected confidence level. It provides a precautionary gauge and includes a measure of downside volatility. The supplementary information from the risk metric gives a more informative decision-making tool and indicates the weaknesses of the subject firms on a more extensive scale. To test the hypotheses, Altman's z-score values of FTSE 250 firms during the selected 2008 recession period are compared with the calculated pre-crisis risk metric values. If the risk metric is reliable, there should not be more outlier firms than the confidence limit allows. It is then determined whether the simulated pre-crisis risk metric values provide relevant and sufficient information beyond the standard Altman's z-score values to be a practical risk measure. The research hypotheses are as follows:

*H*0: The lower bound risk metric does not provide the lower limit for Altman's z-score within the selected confidence level<sup>1</sup>.

*H*1: The lower bound risk metric provides the lower limit for Altman's z-score within the selected confidence level.

#### **3. Methodology**

This study investigated the prospect of using a lower bound risk metric on Altman's z-score variables to determine a lower limit for the score. The core methodology used in this research is based on Value at Risk (VaR) and Altman's z-score bankruptcy model. VaR methodology has been widely adopted for measuring financial and market data and for reporting business or market risk. Using a verified and tested solution on the same type of data and in a similar way, although in a different setting, assured the validity of the approach. The different setting is the use of the VaR approach on Altman's formula to calculate the lower bound risk metric.

The research was carried out on a sample of firms from the FTSE 250 index. The index consists of broadly similar mid-size companies, whose values and operations tend to follow the fluctuations in the economy closely. The firms were therefore good subjects for estimating the performance of a lower bound risk metric.

The data was retrieved from the S&P Capital IQ [34] database, primarily from financial statements. Annual financial statements did not give enough data points for a correct volatility evaluation. To compensate for the lack of data points, quarterly financial reports were used instead of the annual reports. Consequently, this also set the risk horizon at one quarter ([28], p. 216; [1], p. 311). The risk horizon gave the results to which the risk metric was compared. Quarterly reports provided more data to analyse, which became important considering the limited information available.

<sup>1</sup> The typically selected confidence levels are 95% or 99%. Both levels are used in this study for comparative reasons.

The study by Nallareddy et al. [35] notes that the Financial Conduct Authority (FCA) in the UK did not require quarterly reporting before 2007. Quarterly reporting was mandatory between 2007 and 2014, until the EU Transparency Directive of 2013 required the FCA to stop mandating it from 2014. Relatively few FTSE 250 companies reported quarterly results voluntarily and on a consistent basis before the millennium; it became more common thereafter, regardless of legal requirements. From 2001 onwards, there was enough data for the study, which made it a suitable starting point for data collection. The data collection period had to be relatively long to provide a sufficient amount of data for the time series analysis. It was also preferable to have the risk horizon during a period of high volatility, to examine whether the risk metric model actually worked or needed to be modified before further testing. The tipping point of the financial crisis came in 2008, which made it a suitable period for the risk horizon. Hence, the period for collecting data to calculate the risk metric was 2001–2007 and the risk horizon was the first quarter of 2008. The firms listed during that period are not the same as the firms listed in the FTSE 250 in July 2017, which were the firms used in this study. There have been changes in the index when comparing the companies listed during the seven-year period to July 2017: some firms left the stock market, went bankrupt or were simply excluded from the FTSE 250 list. Nonetheless, in order to allow future analysis of the same firms in periods later than 2008, the data was collected from firms that were listed in the FTSE 250 from 1 January 2001 until 1 July 2017. This decision drew some limitations, as it excluded firms that could have potentially offered valuable information. On the other hand, the selected time period and firms provided an opportunity for forward testing and some limited-scale back-testing in further studies.

Seven years of quarterly data provided 28 data points per firm. Altman's z-score was not used for finance firms, as this sector is known to be particularly leveraged and regulated and to carry many disguised risks. Therefore, all finance firms were excluded from the FTSE 250 list for data analysis purposes, which left 176 firms. The z-score was calculated for all 28 quarters for the 176 firms. Many firms did not report quarterly results, or did so only for short periods and inconsistently, which was not enough for calculating an acceptable standard deviation of the variables. Also, many of the observed firms were not listed during the examined period and thus did not have the data. Therefore, only firms that had enough data to calculate Altman's z-score for a minimum of 20 quarters were included and the rest were excluded. Twenty quarters was an arbitrary figure that provided just enough data points for a meaningful analysis. The restriction to a minimum of 20 quarters of data left 78 firms with enough quarterly data to conduct the research. Outliers were identified and revealed in a box plot diagram at the final phase of the data analysis. It illustrated well the serious deviations in the data, which could be further investigated to find the root causes of such irregularities.

The examined variables of Altman's z-score were total assets, total liabilities, working capital and retained earnings from the statement of financial position; sales and EBIT from the income statement; and the market value of equity from market data. To calculate the bankruptcy z-score, the quarterly data derived from the income statement had to be annualized. To do that, the results from the previous three quarters for sales and EBIT were combined. Calculating the average value and standard deviation of each of Altman's variables for each firm gave the basic data for deriving the risk metric using Monte Carlo simulation.
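A minimal sketch of this construction follows. It assumes the standard 1968 Altman coefficients (with the sales term rounded to 1.0) and reads the combination of quarters as a four-quarter sum; all input figures are invented placeholders rather than values from the S&P Capital IQ data.

```python
# A minimal sketch of computing Altman's z-score from quarterly figures.
def altman_z(total_assets, total_liabilities, working_capital,
             retained_earnings, sales_ttm, ebit_ttm, market_value_equity):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit_ttm / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales_ttm / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Annualize the income-statement items by summing four consecutive quarters.
quarterly_sales = [120.0, 135.0, 128.0, 142.0]   # hypothetical, in millions
quarterly_ebit = [14.0, 16.0, 15.0, 18.0]
z = altman_z(total_assets=900.0, total_liabilities=480.0, working_capital=110.0,
             retained_earnings=260.0, sales_ttm=sum(quarterly_sales),
             ebit_ttm=sum(quarterly_ebit), market_value_equity=650.0)
print(f"Altman z-score: {z:.2f}")   # scores below 1.81 fall in the distressed zone
```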

The first assumption about Monte Carlo simulation was that the base distribution is a suitable proxy for the nature of the data [36]. Monte Carlo simulation was the most flexible of the previously reviewed methods and could take any distribution as its base distribution [27]. A typical simplifying assumption is that the variable is independent and identically distributed (i.i.d.) and follows a normal distribution. Such assumptions are often valid, especially for big sample sizes [36]. However, testing the data for normality is a prerequisite for a more accurate model. Samples of all seven variables were examined using histograms and the Anderson-Darling normality test. Illustrations of both, depicting the distribution of standardized total assets, can be found in Appendix A. Although graphical interpretation is subjective, it allows fast and simple conclusions to be drawn. The sample of histograms and Anderson-Darling tests suggested that the variables did not fulfil the requirements for normality and did not follow a normal distribution. The normality requirement was addressed by the Monte Carlo simulation itself, where the generated iterations created a sample size big enough to fulfil it. The histograms suggested that the distributions were not normal but more likely followed fat-tailed, leptokurtic t-distributions. When using a multivariate Student t-distribution for the model, all the marginal distributions had to share the same degrees-of-freedom parameter. Both distributions were used for the analysis in this research: first, the process of obtaining the risk metric is described using a normal distribution, and second using the Student t-distribution, as there are only slight differences in implementing a t-distribution compared to a normal distribution.
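As a brief illustration of this normality screening, the sketch below runs SciPy's Anderson-Darling test on a simulated stand-in for one standardized variable; the data and sample size are assumptions, not the study's figures.

```python
# A minimal sketch of an Anderson-Darling normality check on one variable.
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(1)
total_assets = rng.standard_t(df=5, size=28)    # fat-tailed stand-in, 28 quarters

result = anderson(total_assets, dist='norm')
print(f"A-D statistic: {result.statistic:.3f}")
for crit, sig in zip(result.critical_values, result.significance_level):
    flag = "reject normality" if result.statistic > crit else "cannot reject"
    print(f"  at {sig}% significance: critical value {crit:.3f} -> {flag}")
```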

It was assumed that the variables follow a stochastic process referred to as Geometric Brownian Motion ([1], p. 309). It is stochastic in the sense that changes in the variance are random and do not depend on past information [37]. The equation for the Monte Carlo simulation with Geometric Brownian Motion is

$$
x = \mu + \sigma z \tag{2}
$$

where *x* is the value of the variable, μ is the expected value of the variable, σ is the standard deviation, and *z* is the standard score expressed as

$$z = \frac{x - \mu}{\sigma} \tag{3}$$

To make the variance change randomly, a Monte Carlo simulation with a designed random number generator was used. Random numbers were generated a large number of times for each of the seven variables. This created a typical Monte Carlo simulation, with a number of variables that could easily be tested in a simulation but for which the data or resources to test experimentally may be lacking. Simulations are simple to create using random numbers, and there are two random number generating techniques. With the non-deterministic technique, each time a random number is generated it produces a different output. The deterministic technique creates pseudo random numbers whose output stays fixed, which is achieved by using a seed number. The latter makes analysis easier by enabling the simulations to be run repeatedly and allowing iterations to be modified and recreated.

For the simulation, a Lehmer random number generator (RNG) with a different seed for each variable was used to generate the random integers. The Lehmer RNG belongs to the group of linear congruential generators. After rescaling, it yields uniformly distributed random numbers k between 0 and 1. The equation for the Lehmer RNG is

$$k_i = (a\,k_{i-1}) \bmod m \tag{4}$$

where *a* ≥ 0 is the constant multiplier, *m* is the modulus, and *k<sub>i</sub>* is the generated random integer.

The Lehmer RNG generated pseudo random integers in the range [0, *a* − 1], where *a* is the Mersenne prime. Dividing by the Mersenne prime gave uniformly distributed probability values ε*<sup>i</sup>* in the range [0, 1]:

$$\varepsilon_i = \frac{k_i}{a} \tag{5}$$

where ε*<sup>i</sup>* is the pseudo random number in the range [0, 1], *k<sub>i</sub>* is the generated pseudo random integer in the range [0, *a* − 1], and *a* is the Mersenne prime.

The generated probability values were fed into the inverse of the base distribution selected for the Monte Carlo simulation [27]. The inverse function generated the statistical standard score values. Hence, formula (2) can also be written in the form

$$x = \mu + \sigma \Phi^{-1}(\alpha) \tag{6}$$

Monte Carlo simulation was used to create 15,000 iterations of the described standard score values for each of the seven Altman's z-score variables. The independent iterations of standard scores had to be adjusted for the correlation between Altman's variables. The complexity of the multivariate correlation adjustment required the use of matrices [37]. Several matrix calculations were performed for each Altman's variable and for each firm under observation: the correlation matrix, the variance matrix, the variance-correlation matrix, the covariance matrix, the Cholesky decomposition matrix and the result matrix [37]. The result matrix provided the product of the correlated standard scores and the standard deviation of each variable. The results showed how much the expected mean of each variable can deviate. Therefore, when the expected mean was added to each of the calculated results, it provided variable values that followed the chosen base distribution of the Monte Carlo simulation. Finally, the produced values were inserted into Altman's formula to produce 15,000 Altman's z-scores for each of the observed firms. The selected confidence level determined the lower quantile value, which became the lower bound risk metric for Altman's z-score:

$$P\left(Z < Z_{h\alpha}\right) = \alpha \tag{7}$$

where Z is the simulated iterations and Z<sub>h</sub> is Altman's z-score for risk horizon h. The lower bound risk metric for Altman's z-score is noted as Z<sub>hα</sub>; it marks the α quantile of the simulated iterations for risk horizon h.

#### **4. Data and analysis**

This section shows how the methods and formulas set out in the Methodology are applied to the retrieved data. Because a simulation generates a large amount of data, it cannot be presented in full. Instead, specific examples are given to provide comprehension of the subject matter.

#### **4.1 Random number generator**

The Lehmer random number generator is used to generate pseudo random numbers k*<sup>i</sup>* with chosen seed numbers


$$k_i = (a\,k_{i-1}) \bmod m \tag{8}$$

For instance, the Lehmer RNG with parameters *a* = 2<sup>31</sup> − 1, *m* = 7<sup>5</sup>, *k*<sub>0</sub> = 231 and 15,000 iterations generates randomly distributed numbers, plotted in **Figure 1**.

Park and Miller, in 1988, suggested the specific constants 2<sup>31</sup> − 1 (a Mersenne prime) and 7<sup>5</sup> for the generator. The random integer sequence requires an initial value *k*<sub>0</sub>, typically called a seed value, which is used to produce the first random number. To randomize the variance, each of the seven Altman's variables needs a seed number to run its own 15,000 iterations, so each variable is assigned a seed number. The seed number does not have to be random, but for the Lehmer RNG it needs to be coprime to the modulus *m*; a coprime is a number whose only common divisor with the modulus is 1. The seven variables are assigned the coprime seeds as follows: total assets 231, total liabilities 331, working capital 431, retained earnings 531, sales 631, EBIT 731, and market value of equity 831. When all the parameters are applied to the above-mentioned random number generator, it generates pseudo random numbers in the range [0, *a* − 1], where *a* is the Mersenne prime.
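A minimal sketch of such a generator is given below. Note that it follows the standard Park-Miller convention, with the Mersenne prime 2<sup>31</sup> − 1 as the modulus and 7<sup>5</sup> as the multiplier, and divides by the modulus so that the output is uniform on (0, 1); the per-variable seeds mirror the ones listed above.

```python
# A minimal sketch of a Lehmer (Park-Miller) linear congruential generator.
def lehmer(seed: int, n: int, a: int = 7**5, m: int = 2**31 - 1):
    """Yield n pseudo random numbers in (0, 1) from k_i = (a * k_{i-1}) mod m."""
    k = seed
    out = []
    for _ in range(n):
        k = (a * k) % m
        out.append(k / m)   # rescale the integer to a probability value
    return out

seeds = {"total_assets": 231, "total_liabilities": 331, "working_capital": 431,
         "retained_earnings": 531, "sales": 631, "ebit": 731,
         "market_value_equity": 831}
uniforms = {name: lehmer(seed, 15_000) for name, seed in seeds.items()}
print(uniforms["total_assets"][:3])
```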

#### **4.2 Probability distributions**

In order to derive probability values, the generated random numbers within the range [0, *a* − 1] need to be divided by the Mersenne prime. The division gives pseudo random numbers ε*<sup>i</sup>* in the range [0, 1]:

$$\varepsilon_i = \frac{k_i}{a} \tag{9}$$

The formula creates numbers that follow a uniform distribution. For instance, the uniform distribution for total assets with seed 231 is depicted in **Figure 2**.

The generated pseudo random numbers are uniformly distributed and can be fed into the inverse standard normal cumulative distribution Φ<sup>−1</sup>(*α*) to provide the standard score statistic z. Each outcome of the cumulative distribution gives a standard score, which is the corresponding quantile for any given value of *α*; Φ<sup>−1</sup>(*α*), which produces the standard score, shows how common samples are that are less than or equal to this value. As a reminder, the standard score statistic and Altman's z-score are not the same and should not be confused. **Figures 3** and **4** depict the cumulative standard normal distribution function and the standard normal probability density function respectively.

**Figure 1.** *Random numbers generated by Lehmer RNG.*

**Figure 2.** *Uniform probability distribution.*

Setting *z* = Φ<sup>−1</sup>(*α*), then

$$x = \mu + \sigma \Phi^{-1}(\alpha) \tag{10}$$

When *α* is replicated 15,000 times, *x* takes values along a normal distribution. Depending on the chosen significance level *α*, the *α* quantile is the value of *x* below which *x* is not expected to go with a confidence level of 1 − *α*. For instance, the value of assets *x* of Workspace Group PLC can be described by the function *x* = 619 + 229 ∗ Φ<sup>−1</sup>(*α*). Replicating *α* randomly 15,000 times and taking the lower 5% quantile of the distribution gives an asset value of 259 M. Therefore, in theory, it means that with 95% confidence, the value of assets is not expected to go below 259 M.

**Figure 3.** *Cumulative standard normal distribution function (CDF).*

**Figure 4.** *Standard normal probability density function (PDF).*

The function can be written in another way

$$x = \sigma \Phi^{-1}(\alpha) - \mu \tag{11}$$

in which case *x* now represents a risk metric similar to Value at Risk and is already presented in absolute terms.
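To illustrate formula (10) for a single variable, the sketch below pushes uniform draws through the inverse normal CDF and reads off the lower 5% quantile; the mean and standard deviation are placeholder values, not figures from the chapter's data.

```python
# A minimal sketch of formula (10) and the lower-quantile read-off for one variable.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(231)
alpha = 0.05
mu, sigma = 500.0, 150.0                 # hypothetical mean and std of one variable

u = rng.uniform(size=15_000)             # stands in for the Lehmer uniforms
x = mu + sigma * norm.ppf(u)             # x = mu + sigma * Phi^-1(alpha)

lower_bound = np.quantile(x, alpha)      # with 95% confidence x should stay above this
print(f"5% quantile of the simulated variable: {lower_bound:.1f}")
```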

It is possible to plug the generated probability values in directly to derive the values for each variable, but in doing so the correlation between variables is ignored entirely. The correlation has an impact on the deviation: it characterizes how each deviation is related to the deviations in the other variables. Correlation is simple to calculate when using only two variables. Here there are seven variables, which requires a matrix-based methodology. The process of producing the correlation between the variables can be divided into six separate calculation steps, which make it easier to understand.

#### **5. Correlation matrices for simulated variables**

The first step is to create a correlation matrix. Each individual correlation between variables is entered into the matrix table. The diagonal is always 1, because the correlation of a variable with itself is 1. Normally the area above the diagonal is left empty, but in this case it is filled for computational reasons. The covariance between each pair of variables and the correlation coefficient are calculated using Eqs. (12) and (13). An example of the resulting matrix for Workspace Group PLC is illustrated in **Table 1**.

$$\sigma_{x,y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})\left(y_i - \bar{y}\right)}{n-1} \tag{12}$$

$$
\rho\_{\mathbf{x},\mathbf{y}} = \frac{\sigma\_{\mathbf{x},\mathbf{y}}}{\sigma\_{\mathbf{x}}\sigma\_{\mathbf{y}}} \tag{13}
$$


**Table 1.** *Correlation matrix.*

The second step is to create a variance matrix, as shown in **Table 2**. It is constructed simply by placing the standard deviations of each variable on the diagonal.

The third step is to produce a variance-correlation matrix. It is the product of the correlation and variance matrices and it can be calculated only when the matrix is positive definite. The equation to multiply the above matrices is illustrated and the calculated matrix is presented in **Table 3**.


**Table 2.** *Variance matrix.*


**Table 3.** *Variance-correlation matrix.*


$$
\Sigma = \begin{pmatrix}
\rho_{1,1} & \rho_{1,2} & \cdots & \rho_{1,d} \\
\rho_{2,1} & \rho_{2,2} & \cdots & \rho_{2,d} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{d,1} & \rho_{d,2} & \cdots & \rho_{d,d}
\end{pmatrix} \begin{pmatrix}
\sigma_{1,1} & & & \\
& \sigma_{2,2} & & \\
& & \ddots & \\
& & & \sigma_{d,d}
\end{pmatrix} \tag{14}
$$

The fourth step is to produce the variance–covariance matrix. It is the product of the variance-correlation matrix and the variance matrix; the results are presented in **Table 4**.

The fifth step is to produce a Cholesky decomposition [38]. It is obtained by taking the square root of the variance–covariance matrix, as illustrated in **Table 5** [39].

The methodology of steps one to five is similar to producing a correlated bivariate distribution from two samples of uncorrelated normal variables [40]. The bivariate case is more straightforward and makes the calculations in the above matrices easier to understand. For instance, the first sample of an uncorrelated variable is produced as described above, by feeding a uniformly distributed random number into the inverse standard normal cumulative distribution Φ<sup>−1</sup>(*α*) to arrive at the standard score statistic *z*<sub>1</sub>. Applying the standard score to the equation for the first variable produces

$$\mathbf{x}\_1 = \mu\_1 + \sigma\_1 \mathbf{z}\_1 \tag{15}$$


**Table 4.** *Variance–covariance matrix.*


**Table 5.** *Cholesky decomposition matrix.*

To make the second sample of an uncorrelated variable correlate with the first one, the variables need to be combined. The combining factor is the correlation between the two standard scores, *z*<sub>1</sub> and *z*<sub>2</sub> [40]. The resulting equation for the second variable is

$$\mathbf{x}\_2 = \mu\_2 + \sigma\_2 \left(\mathbf{z}\_1 \rho + \mathbf{z}\_2 \sqrt{\mathbf{1} - \rho^2} \right) \tag{16}$$

Instead of two variables, the five-step calculation ending with the Cholesky decomposition produces the correlation between seven variables.

Finally, the sixth step produces the results for each variable by adding the product of the Cholesky matrix and the matrix of standard score iterations to the mean of each variable. The results for Workspace Group PLC are displayed in **Table 6**.
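A compact sketch of the six steps is given below, using NumPy in place of the spreadsheet matrices; the 28-quarter data matrix is randomly generated as a stand-in for a firm's seven variables.

```python
# A minimal sketch of the six-step correlated Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
quarters = rng.normal(size=(28, 7))            # placeholder: 28 quarters x 7 variables

means = quarters.mean(axis=0)
stds = quarters.std(axis=0, ddof=1)
corr = np.corrcoef(quarters, rowvar=False)     # step 1: correlation matrix
cov = corr * np.outer(stds, stds)              # steps 2-4: variance-covariance matrix
chol = np.linalg.cholesky(cov)                 # step 5: Cholesky decomposition

z = rng.standard_normal(size=(15_000, 7))      # independent standard scores
simulated = means + z @ chol.T                 # step 6: correlated variable values

# Each simulated row can now be fed into Altman's formula to obtain one z-score
# iteration; the lower 5% or 1% quantile of those scores is the risk metric.
print(simulated.shape)                         # (15000, 7)
```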

Altman's z-score value in the table is calculated simply by applying Altman's bankruptcy formula to the calculated variables in the row. Given the 15,000 iterations, the 5% and 1% quantiles are the smallest 749th and 149th values respectively. In the example of Workspace Group PLC, Altman's z-score 5% and 1% quantile values are −0.23 and −6.86 respectively.

**Table 6.** *Result matrix.*

**Figure 5.** *Comparison of leptokurtic and standard normal distribution.*

As stated earlier, the variables follow a fat-tailed distribution that is similar to a leptokurtic t-distribution. **Figure 5** illustrates the comparison of a leptokurtic and a standard normal distribution, both created using the Lehmer random number generator and seed 231. The entire process for a t-distribution is very similar to that of the normal distribution described above. The only difference is that the standard z-score is replaced with t, the statistic for the t-distribution. To arrive at standardized Student t values, the independent standard Student t simulations are multiplied by √(*ν*<sup>−1</sup>(*ν* − 2)), where *ν* is the degrees of freedom ([28], p. 228).

There are several ways to estimate the degrees of freedom. Suggested methods for the multivariate Student t-distribution are *maximum likelihood estimation* and the *method of moments* [41]; both estimate the parameters of the statistical model. For this analysis, the degrees of freedom are estimated approximately: with seven observed variables and two known parameters, the mean and the variance, the difference leaves five unknown parameters, which are taken as the five degrees of freedom.
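For illustration, the sketch below draws independent Student t variates with the assumed five degrees of freedom and rescales them to unit variance as described above.

```python
# A minimal sketch of standardizing Student t draws for the t-distribution variant.
import numpy as np

rng = np.random.default_rng(0)
nu = 5                                              # assumed degrees of freedom
t_draws = rng.standard_t(df=nu, size=(15_000, 7))
standardized_t = t_draws * np.sqrt((nu - 2) / nu)   # variance rescaled to roughly 1

print(standardized_t.var(axis=0))                   # each column close to 1.0
```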

#### **6. Results**

Before presenting the calculated risk metric values, it is useful to understand how many of the 78 firms were already in the distressed zone in terms of Altman's z-score limits. The distressed zone for manufacturing firms was defined by Altman as a score lower than 1.81. This study was based on the standard formula of Altman's z-score, which was used for all the companies regardless of their primary industry classification. The industry classifications of the 78 firms are displayed in **Table 7**.

All energy, industrials and materials companies were considered to be manufacturing companies and the rest in the list were non-manufacturing. A similar approach has been applied by many researchers and organizations, including the research by Miller [17] and the reports by market intelligence provider S&P Capital IQ. **Table 8** provides an overview of distressed firms in each four-quarter period.


**Table 7.** *Industry classifications.*


**Table 8.** *Distressed firms.*

Around 33% of the 78 companies experienced a distressed period longer than one year according to the Altman bankruptcy model. It is also known that the type two error, which classifies a firm as bankrupt when it does not go bankrupt, is around 15–20%. In that respect, the 33% figure is too high. The reason could be that the standard Altman's z-score is not that accurate for non-manufacturing firms, of which 42% were in the distressed zone, whereas the share of distressed manufacturing firms was 19%. This could be investigated further.

Returning to the calculated Altman's z-score limits: in **Table 9**, both the 95% and 99% confidence level limits from the normal and the t-distribution are compared with the actual first-quarter results of 2008. In addition, the two calculated Monte Carlo (MC) limit values are compared with the 95% and 99% confidence level limits calculated from the actual quarterly Altman's z-scores over 2001–2007. **Table 9** provides a good comparison of the effectiveness of the calculated MC limit values. Of the 78 firms, only 64 reported first-quarter results in 2008. The outliers section on the left-hand side of **Table 9** shows how many of the 64 firms crossed the applied confidence level limits. The failure rate section on the right-hand side of **Table 9** presents the confidence level limit differences between the MC limit values and the limit values from the seven years of quarterly results. Therefore, the right-hand section considers all 78 firms, not only the 64.

Three companies are identified as outliers, which stays within the 95% confidence level, as 3 out of 64 is less than 5%. Therefore, the results show that both distributions give valid results for the first quarter of 2008. Moreover, the calculated MC limits perform at least as well as the 95% limit figures derived from the actual quarterly results. Nevertheless, this does not mean that the model is valid; to test validity, the model needs to be back- and forward tested, which is discussed afterwards.

**Table 9.** *Outliers and failure rate.*

Analyzing the number of firms ending up in the distressed zone using the confidence limits and the above-mentioned distributions gives a good estimate of how badly firms may do in terms of Altman's z-score.

**Table 10** shows the firms during the period 2001–2007 that have an Altman's z-score of less than 1.81. For the calculated 95% and 99% confidence limits, the number of firms with a score below 1.81 ranges from 17 to 28. There is a great difference between the 95% and 99% limits for manufacturing firms: almost 60% more firms fall into the distressed zone at the 99% limit than at the 95% confidence level. For non-manufacturing firms, the difference is smaller. Again, the discrepancy is likely to come from the standard Altman's formula being used for non-manufacturing firms. The comparison also reveals that the results for the normal and the t-distribution are almost the same at each confidence level. It is known from the properties of the two distributions that the difference between them is not big at the 95% confidence level, but as the confidence level increases, the difference is expected to widen considerably. For the period and firms investigated, the results do not confirm this, which may imply that the variables were nevertheless following a distribution fairly similar to a normal distribution. However, definite conclusions can only be drawn once the model has been back- and forward tested.

All the results that the previous tables are based on can be found in Appendix B. To make these results easily accessible in one set, the outcomes are represented in a box plot diagram, as seen in **Figure 6**. The *x* on the diagram represents the mean and the line within the box represents the median value, which is also the second quartile or 50th percentile. The lower end of the box is the first quartile and the higher end of the box is the third quartile. The T-shaped projections are the whisker lines that represent the highest *local maximum* and the lowest *local minimum*. All values outside the local maximum or minimum are outliers and are marked by dots. The outliers are calculated using John W. Tukey's convention, which defines outliers as data points further than 1.5 times the interquartile range from either end of the box [42]. The interquartile range is the length of the box, from quartile 1 to quartile 3. Tukey determined the length of the whiskers so that it would be neither too exclusive nor too inclusive and established that 1.5 times the interquartile range is a good compromise [42].
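As a small illustration of the Tukey convention, the sketch below computes the whisker fences and flags outliers for an invented vector of z-scores.

```python
# A minimal sketch of Tukey's 1.5 * IQR rule for flagging outliers.
import numpy as np

scores = np.array([2.1, 1.8, 3.0, 2.5, 0.4, 2.2, 2.9, -6.9, 2.6, 1.9])  # placeholder
q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = scores[(scores < lower_fence) | (scores > upper_fence)]
print(f"fences: [{lower_fence:.2f}, {upper_fence:.2f}], outliers: {outliers}")
```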

The box plots reveal how the statistical 95% and 99% confidence limit results are considerably more constrained than the calculated MC results. This is especially true for the 99% confidence limit when compared to the MC 99% normal or 99% t-distribution box plots. The box plots also illustrate that the results at the 95% limit are closer to each other than at the 99% limit, where the differences in the results are wider. The two 99% Monte Carlo box plots of the normal and t-distribution show that the t-distribution confidence limits have a somewhat wider and deeper negative range.


**Table 10.** *Confidence limits.*

**Figure 6.** *Box plots of Altman's z-score results.*

The box plots also reveal that the means are close to the first quartile boundary, except for the first-quarter 2008 results, whose mean is higher than the median. This is explained by the fact that the top quartile has more variation than the lower quartile, producing values that lie further above the median than below it.

The diagram also reveals the risks posed by the first four box plots compared to the Q1 2008 box plot. If the following quarters had been extremely bad, the results of recent quarters might not have had enough impact within the trailing seven years of data to estimate the confidence limits correctly. Therefore, it is risky to rely on any confidence limit other than the 99% limit of the Monte Carlo normal or t-distribution. The range of these two confidence limits is more appropriate considering the vulnerability of Altman's z-score values to potential sharp falls. On the other hand, there are trade-offs, because some z-score values tend to go extremely negative and may not appropriately express the economic limits of actual situations. Nevertheless, the deeply negative values should be a good indication of how bad the situation can become for a specific firm.

The diagram displays how the lower bound risk metric provides a way to look beyond Altman's z-score values and determine the real bankruptcy risk a firm may carry. The range of values is much wider and the values fluctuate considerably more when comparing the 99% confidence level results to the original first-quarter results for 2008. An example of an individual firm brings more clarity, as it is difficult to translate the group result in a diagram to individual firms. The data for the following example was taken from the results in Appendix B. For instance, IMI plc had a first-quarter 2008 Altman's z-score of 3.21 and 95% and 99% t-distribution confidence limit scores of 2.33 and 2.05 respectively. HomeServe plc had a somewhat higher bankruptcy score for 2008, 3.95, but its 95% and 99% t-distribution confidence limit scores were far worse, 0.76 and 1.24 respectively. Although it may seem that HomeServe plc had the better bankruptcy score for 2008 and was better protected from insolvency, the lower bound risk metric indicated the opposite. It shows how the lower bound risk metric adds another necessary risk measure to Altman's z-score for interpreting and considering the risk inherent in the results.

The diagram also shows inconsistencies in the data of at least some of the outlier firms. Whether there were mistakes in the quarterly reports or the mistakes lay somewhere else, the data in these few cases do not appear trustworthy. In other instances, there appeared to be no apparent reason for the big deviations when inspecting the variables and data used. In such situations, it is important to look at the financial statements and other fundamentals of the outlier firms of interest to determine the source of the inconsistencies.

### **7. Discussion**

This study did not identify any previous research that has specifically tried to extend Altman's work by using confidence limits for Altman's z-score values. Considering that such limits provide very valuable information for risk evaluation and prevention, this study deemed it necessary to fill this research gap. The study produced a lower bound risk metric for Altman's z-score and identified it as a good gauge for the lower limit when using a 99% confidence level. It provided satisfactory results for the tested period and for the sample used, but the model needs more rigorous testing to modify and verify its performance.

#### **7.1 Evaluation**

In the bankruptcy literature, research has tried to establish the best model for bankruptcy prediction by using ever more complicated methods, such as highly computerized artificial neural networks or kernel-based learning models. Instead, this research examined the most popular bankruptcy model, Altman's z-score, in relation to the worst-case scenario. It examined the variability of and correlation between the variables with the aim of producing a simulation that repeatedly calculates Altman's z-scores, from which it is possible to find the worst-case scenario at a selected confidence level. Not only is this approach applicable to Altman's z-score model, but the methodology can also be used in much the same way for almost any other bankruptcy model. The Value at Risk methodology used throughout this study is very flexible and applicable in a variety of situations.

The biggest drawbacks were not related to the methodological approach, but rather to the availability of data and resources to conduct the study in the most appropriate way possible. Some of the limitations faced during this study were purely related to the lack of resources; having the necessary resources would have helped to improve and modify the system, resulting in fewer limitations, better outcomes and more credibility for the study. Some limitations, such as the lack of data, were practically impossible to overcome. Although simulations help in situations of limited data, the simulations are only as good as the quality of the data. One option to overcome some of the data-related issues would be to choose a model that uses data with a long historical record and whose future outcomes are more predictable. Even then, the model depends on the selected sample. The size and industry of the companies and the culture and regulations of different countries have an impact on variables that would be difficult to measure with one standard model. Hence, the model developed in this study is most appropriate for FTSE 250 firms or for firms with similar characteristics. As pointed out in the analysis of the data, the model is best used on manufacturing firms, as the results may not describe the riskiness of service firms as accurately. For service firms, it might have been more accurate to use the modified z-score model specifically developed for non-manufacturing firms, whose distressed region is defined as a z-score of less than 1.1. This would have changed the obtained results, although not considerably.

The positive side of this model is that it observes a range and gives the lower quantile figure based on a confidence level. It is much more difficult to overestimate this figure compared to the standard Altman's z-score. The lower bound risk metric depends on the volatility of the variables. Therefore, firms whose risk metric results are above the distressed zone limit of 1.81 can be considered relatively safe in terms of insolvency during times of economic distress. The results from the example introduced in the findings show that HomeServe plc was more exposed to negative economic and adverse business situations: it had a higher risk of insolvency had such situations materialized and continued for an extended period. As can be seen from that example, Altman's z-score does not give a full and accurate overview. Therefore, the lower bound risk metric developed in this research provides valuable supplementary information for well-informed decision making.

The outcomes in **Table 9** showed that there were no material differences between the results from the normal and the t-distribution used in the Monte Carlo simulation. This indicates that the distribution does not have the expected fat tails but is somewhat closer to a normal distribution; for instance, it could also be a t-distribution with a different degrees-of-freedom parameter. Nevertheless, this could only be determined with back- and forward testing and with more refinements to the model.

The 99% MC confidence limits have a lower and wider z-score range compared to the statistically calculated 99% normal confidence limits. The right-hand section of **Table 9** showed that more than 70% of firms had a lower z-score when the MC limits were used compared to the 99% normal confidence limits. Even though the remaining values, over 20%, were higher, the model did not perform worse. The left-hand section of **Table 9** showed that the number of outliers was the same, which indicates that the Monte Carlo method followed the distribution of the variables more closely than the statistical limit method. This gives more confidence in using the MC method. Even when the MC limits become very negative, there is a reason behind it: it indicates considerable risk and that the financials and the volatility of the variables of the observed firms need more investigation.

Whilst the outliers remained within the confidence limits as indicated in **Table 9**, the potential downside effect may not be captured when using the 95% MC limits, as can be deduced from **Table 10** and from the box plots presented in **Figure 6**. The 99% MC limits provide confidence even in more adverse situations such as a financial crisis. A similar requirement has also been applied by the Basel Accords ([28], p. 385).

Considering that the number of outliers in **Table 9** stayed within the specified confidence limits, we can reject the null hypothesis and accept the alternative hypothesis. The VaR methodological approach of creating a lower bound risk metric to determine the lower limits for Altman's z-score has been demonstrated to work within the specified boundaries.

#### **8. Conclusion**

The research analysed FTSE 250 companies with the aim of providing a lower bound risk metric for Altman's z-score. The time period examined was from 2001 to 2007 and Altman's z-score limits were estimated for the first quarter of 2008. Data was collected from quarterly reports. After all limitations and exclusions, 78 firms were analysed.

Essentially, the methods applied in this research were based on the Value at Risk methodology. The VaR methodology was used to generate a new risk metric that set a lower bound confidence limit for Altman's z-score bankruptcy model. The model used Monte Carlo simulation and correlation matrices to produce the new risk metric.

The results obtained were compared to statistical confidence limits and to Altman's z-scores from the first quarter of 2008. The number of outliers stayed within the selected confidence limits, which showed that the model does work within the specified limits. The first limitation was set by the chosen sample: the aim of the research was to focus on UK-based firms, and the FTSE 250 index therefore appeared the most appropriate in size, data availability and interest for study. To actually verify the model, it must be back- and forward tested, neither of which was carried out in this research. It was suggested to use a 99% confidence limit for the model in order to include potential adverse situations. The contribution of this paper is that the new risk metric provides a measure of risk for Altman's z-score that was not considered before. It produces essential information on the quality of the z-score and helps to make more profound decisions.

The risk horizon chosen throughout this analysis has been one quarter, because this is the minimum period of a financial statement and provides the most data points for the analysis. There are possibilities for changing the risk horizon. The obvious way is to replace the quarterly results with half-year or longer-term results and carry out the whole calculation process again. Another way is to convert the short-term risk horizon to a long-term one, which is called scaling. Assuming that the variables are normally distributed, the values can be approximately scaled by the following equation

$$
x = \mu r + \sqrt{r}\,\sigma z \tag{17}
$$

where r is the multiplier applied to obtain the required risk horizon ([28], p. 22). Therefore, in the current analysis, all variable values *x* would need to be recalculated, which also means recalculating all the subsequent Altman's z-score values and the confidence limit for each firm.
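For example, a minimal sketch of this scaling from a quarterly to a one-year horizon (r = 4) is shown below, with placeholder quarterly parameters.

```python
# A minimal sketch of equation (17): scaling a quarterly horizon to one year.
import numpy as np
from scipy.stats import norm

mu, sigma = 500.0, 150.0          # hypothetical quarterly mean and std of one variable
r = 4                             # four quarters -> one-year risk horizon
alpha = 0.01

z = norm.ppf(alpha)
x_scaled = mu * r + np.sqrt(r) * sigma * z   # x = mu*r + sqrt(r)*sigma*z
print(f"scaled 1% lower bound over one year: {x_scaled:.1f}")
```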

## **Appendix**

## **A. Examples of tests for normality**



## **B. Table of results**






## **Author details**

Hardo Holpus<sup>1</sup>, Ahmad Alqatan<sup>2</sup>\* and Muhammad Arslan<sup>3</sup>


\*Address all correspondence to: aalqatan@aou.edu.kw

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


### **References**

[1] Jorion P. *Value at risk: the new benchmark for managing financial risk*. New York: McGraw-Hill; 2001

[2] Xu X, Chen Y, Zheng H. The comparison of enterprise bankruptcy forecasting method. Journal Of Applied Statistics. 2011;**38**(2):301-308. DOI: 10.1080/02664760903406470

[3] Altman EI. Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy. Journal of Finance. 1968:189-209. DOI: 10.1111/j.1540-6261.1968.tb00843.x

[4] Beaver WH. Financial ratios predictors of failure. Journal of Accounting Research. 1966;**4**:71-111

[5] Fisher RA. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics. 1936;**7**:179. DOI: 10.1111/j.1469-1809.1936.tb02137.x

[6] Merton R. On the Pricing of Corporate Debt: The Risk Structure of Interest Rates. The Journal of Finance. 1974;**29**(2):449-470. DOI: 10.2307/ 2978814

[7] Ohlson J. Financial Ratios and the Probabilistic Prediction of Bankruptcy. Journal of Accounting Research. 1980;*18* (1):109-131. DOI: 10.2307/2490395

[8] Springate GLV. Predicting the possibility of failure in a Canadian firm (Unpublished master's thesis). Canada: Simon Fraser University; 1978

[9] Zmijewski ME. Methodological issues related to the estimation of financial distress prediction models. Journal of Accounting Research. 1984; **22**:59-86

[10] Chou C, Hsieh S, Qiu C. Hybrid genetic algorithm and fuzzy clustering for bankruptcy prediction. Applied Soft Computing. 2017. DOI: 10.1016/j. asoc.2017.03.014

[11] Coats PK, Fant LF. Recognizing Financial Distress Patterns Using a Neural Network Tool. The Journal Of The Financial Management Association. 1993;**22**(3):142-155

[12] Perez M. Artificial neural networks and bankruptcy forecasting: a state of the art. Neural Computing & Applications. 2006;**15**(2):154-163. DOI: 10.1007/s00521-005-0022-x

[13] Tam KY, Kiang MY. Managerial applications of neural networks: the case of bank failure predictions. *Management science*. 1992;*38*(7):926-947

[14] Barboza F, Kimura H, Altman E. Machine Learning Models and Bankruptcy Prediction. Expert Systems With Applications. 2017. DOI: 10.1016/j. eswa.2017.04.006

[15] Van Gestel T, Baesens B, Martens D. From linear to non-linear kernel based classifiers for bankruptcy prediction. Neurocomputing. 2010;**73**. DOI: 10.1016/j.neucom.2010.07.002

[16] Zhao D, Yu F, Huang C, Wei Y, Wang M, Chen H. An Effective Computational Model for Bankruptcy Prediction Using Kernel Extreme Learning Machine Approach. Computational Economics. 2017;**49**(2): 325-341. DOI: 10.1007/s10614-016- 9562-7

[17] Miller W. Comparing Models of Corporate Bankruptcy Prediction: Distance to Default vs. Z-Score. 2009. Retrieved 25 June 2017, from https://corporate.morningstar.com/us/documents/MethodologyDocuments/MethodologyPapers/CompareModelsCorpBankruptcyPrediction.pdf

[18] Heine ML. Predicting Financial Distress Of Companies: Revisiting The Z-Score and Zeta models. (Unpublished paper. New York: Stern School of Business; 2000

[19] Hillegeist S, Keating E, Cram D, Lundstedt K. Assessing the Probability of Bankruptcy. Review Of Accounting Studies. 2004;**9**(1):5-34. DOI: 10.1023/ B:RAST.0000013627.90884.b7

[20] Balcaen S, Ooghe H. 35 years of studies on business failure: an overview of the classic statistical methodologies and their related problems. The British Accounting Review. 2006;**93**. DOI: 10.1016/j.bar.2005.09.001

[21] Appiah KO, Chizema A, Arthur J. Predicting corporate failure: a systematic literature review of methodological issues. International Journal Of Law & Management. 2015;**57** (5):461. DOI: 10.1108/IJLMA-04-2014- 0032

[22] Mousavi MM, Ouenniche J, Xu B. Performance evaluation of bankruptcy prediction models: An orientation-free super-efficiency DEA-based framework. International Review Of Financial Analysis. 2015;**75**. DOI: 10.1016/j. irfa.2015.01.006

[23] Charalambakis EC. On the Prediction of Corporate Financial Distress in the Light of the Financial Crisis: Empirical Evidence from Greek Listed Firms. International Journal Of The Economics Of Business. 2015;**22**(3): 407-428

[24] Agarwal V, Taffler RJ. Comparing the performance of market-based and accounting-based bankruptcy prediction models. Journal Of Banking And Finance. 2008. DOI: 10.1016/j. jbankfin.2007.07.014

[25] Baldwin J, Glezen GW. Bankruptcy Prediction Using Quarterly Financial Statement Data. Journal Of Accounting, Auditing & Finance. 1992;**7**(3):269-285

[26] Halilbegovic S, Vehabovic M. Backtesting Value at Risk Forecast: the Case of Kupiec Pof-Test. European Journal Of Economic Studies. 2016;**17**(3):393-404. DOI: 10.13187/es.2016.17.393

[27] Cheung YH, Powell RJ. Anybody can do Value at Risk: A Teaching Study using Parametric Computation and Monte Carlo Simulation. Australasian Accounting, Business and Finance Journal. 2012;**6**(5):101-118

[28] Alexander C. *Value-at-risk models*. 1st ed. Chichester, England: Wiley; 2010

[29] Breuer T. Providing against the worst: Risk capital for worst case scenarios. Managerial Finance. 2006;**32** (9):716-730. DOI: 10.1108/ 03074350610681934

[30] Krause A. Exploring the Limitations of Value at Risk: How Good Is It in Practice? The Journal Of Risk Finance. 2003;(2):19

[31] Lambadiaris G, Papadopoulou L, Skiadopoulos G, Zoulis Y. VAR: history or simulation? Risk. 2003;**16**(9):122-127

[32] Kupiec P. Techniques for Verifying the Accuracy of Risk Management Models. Journal of Derivatives. 1995;**3**: 73-84

[33] Nieto MR, Ruiz E. Review: Frontiers in VaR forecasting and backtesting. International Journal Of Forecasting. 2016;**501**. DOI: 10.1016/j. ijforecast.2015.08.003

[34] S&P Capital IQ. (2017). FTSE 250 Index Financial Data. Retrieved 1 July 2017, from S&P Capital IQ database.

[35] Nallareddy, S., Pozen, R., and Rajgopal, S. (2017). Consequences of Mandatory Quarterly Reporting: The U. K. Experience. *Columbia Business School Research*, 17(33).

[36] Marathe R, Ryan SM. On The Validity of The Geometric Brownian Motion Assumption. The Engineering Economist. 2005;**50**(2):159-192. DOI: 10.1080/00137910590949904

[37] McCrary, S. (2015). Implementing a Monte Carlo simulation: Correlation, skew, and kurtosis. Retrieved 29 June 2017, from http://www.thinkbrg.com/ media/publication/687\_McCrary\_Monte Carlo\_Whitepaper\_20150923\_WEB.pdf

[38] Moore T. Generating Multivariate Normal Pseudo Random Data. Teaching Statistics. 2001;**23**(1):8-10

[39] Hunt N. Generating Multivariate Normal Data in Excel. Teaching Statistics. 2001;**23**(2):58-59

[40] Parramore K. On Simulating Realizations of Correlated Random Variables. Teaching Statistics. 2000;**22** (2):61-63

[41] Villa, C., & Rubio, F. J. (2017). Objective priors for the number of degrees of freedom of a multivariate t distribution and the t-copula. Retrieved 20 July 2017, from https://arxiv.org/abs/ 1701.05638

[42] Krzywinski M, Altman N. Points of Significance: Visualizing samples with box plots. Nature Methods. 2014;**11**(2): 119-120. DOI: 10.1038/nmeth.2813

