**2. Probability modelling and Monte Carlo simulation**

Business uncertainty arises when the outcome of a strategic business decision is unclear, often due to a lack of information or knowledge about the business environment. The conventional approach to strategic business decisions assumes that executives can predict the future of any business accurately enough to choose a clear strategic direction for it by applying standard deterministic spreadsheet models. Such deterministic models allow you to calculate a future event precisely, without any involvement of randomness. However, deterministic models do not account for the fact that business environments are complex and constantly changing. In particular, deterministic models assume that known average rates, with no random deviations, can be applied to the broader population. For example, if 10,000 businesses each have a 95% chance of surviving another year, we can be reasonably confident that 9500 businesses will indeed survive.

#### *Perspective Chapter: Application of Monte Carlo Methods in Strategic Business Decisions DOI: http://dx.doi.org/10.5772/intechopen.106201*

In contrast, probability models using Monte Carlo simulation allow for random variation due to uncertainty in the parameters, or a sample size too small for average rates to be applied reasonably. For example, consider a sample of 10,000 businesses in the U.K., each with a probability of 0.7 of surviving another year. The average number of companies that survive another year is 10,000 × 0.7 = 7000. However, because of the limited sample size, there will be random variation, and a probabilistic description of the population at the end of the year is preferable. In this case, we would use a probability distribution to describe the population: it gives the probability of having zero survivors, one survivor, two survivors, and so on, at the end of the year. In other words, the number of businesses that survive another year is described by a full probability distribution, not just by an average.
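This example can be made concrete with a short simulation. The sketch below (using only Python's standard library; the survival probability of 0.7 and the population of 10,000 are taken from the example above) repeats the one-year experiment many times and shows the random variation around the deterministic average of 7000:

```python
import random
import statistics

random.seed(42)  # for reproducible runs

def surviving_businesses(n: int = 10_000, p: float = 0.7) -> int:
    """Simulate one year: each business independently survives with probability p."""
    return sum(1 for _ in range(n) if random.random() < p)

# Repeat the one-year experiment many times to approximate the probability
# distribution of the number of survivors (a Binomial(10_000, 0.7) distribution).
trials = [surviving_businesses() for _ in range(1_000)]

print(f"average survivors: {statistics.mean(trials):,.0f}")   # close to 7000
print(f"spread (st. dev.): {statistics.stdev(trials):.1f}")   # random variation around it
```

The deterministic model stops at the single number 7000; the simulation shows that individual years can plausibly end with noticeably more or fewer survivors.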

When the future is genuinely uncertain, a deterministic approach is at best marginally helpful and at worst dangerous: underestimating uncertainty can lead to strategies that neither defend a company against threats nor take advantage of the opportunities that higher levels of uncertainty provide. Major analytical tools such as probabilistic modelling using Monte Carlo simulation and game theory, amongst others, offer enormous opportunities for business executives working in industries facing significant uncertainties. What follows is a discussion of probabilistic modelling using Monte Carlo simulation and an analysis of the probability distributions used in this chapter.

We can categorize data for modelling business decisions into **input data** (explanatory data) and **output data** (predicted/outcome data). A major aspect of dealing with business uncertainty is using quantitative methods to model that uncertainty. For example, a firm's net profit one year from today is uncertain: it is a function of many uncertain input variables, including the demand for the company's goods/services, the cost of goods sold and the tax rate, amongst others. Some of these input variables are outside the control of the decision-maker. Probability models can be used to propagate uncertainty in the input variables through to the outputs: probability modelling uses probability distributions of input assumptions to calculate the probability distribution for chosen output metrics/summary measures [4]. It is important to remember that uncertainty in business decisions is unavoidable because real-world situations cannot be perfectly measured, modelled or predicted. As a result, business decision-makers face significantly complex problems compounded by varying levels of uncertainty, and if that uncertainty is not properly acknowledged, complications arise in the decision-making process. Decision-makers often want to know the impact of certain business decisions on their bottom line, how much the trade-off between alternative actions reduces their potential profit, and the possible consequences of their decisions.

Monte Carlo simulation allows executives to see the range of possible outcomes of their decisions and to assess the impact of risk, thus allowing for better decision-making under uncertainty. Probability distributions are a realistic way of describing uncertainty in variables. Monte Carlo methods are defined as statistical approaches that provide approximate solutions to complex optimization or simulation problems by using random sequences of numbers [5–7]. The Monte Carlo method performs analysis by building models of possible results, substituting a range of values (a probability distribution) for any factor with inherent uncertainty. The algorithm then calculates results repeatedly, each time using a different set of random values drawn from the probability functions. In other words, during a Monte Carlo simulation values are sampled at random from the input probability distributions; each set of samples is called an iteration, and the resulting outcome from that sample is recorded. Each iteration produces different values for the input assumptions fed into the model. A Monte Carlo simulation can involve thousands or tens of thousands of recalculations before producing a probability distribution of possible outcomes.
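As a sketch of this procedure, the following example simulates a firm's net profit from uncertain inputs of the kind mentioned above (demand, unit cost, tax rate). All the numerical assumptions, and the distributions chosen for each input, are hypothetical and purely illustrative:

```python
import random
import statistics

random.seed(1)  # for reproducible runs

def simulate_net_profit() -> float:
    """One Monte Carlo iteration: sample each uncertain input,
    then compute the output metric (net profit) from those samples."""
    demand = random.gauss(mu=50_000, sigma=8_000)           # units sold (hypothetical)
    unit_price = 12.0                                       # assumed fixed
    unit_cost = random.triangular(low=6.0, high=9.0, mode=7.0)
    tax_rate = random.uniform(0.19, 0.25)
    gross = demand * (unit_price - unit_cost)
    return gross * (1 - tax_rate)

# Each iteration draws a fresh set of random input values; collecting many
# iterations yields a probability distribution for net profit.
profits = sorted(simulate_net_profit() for _ in range(10_000))

print(f"mean net profit: {statistics.mean(profits):,.0f}")
print(f"5th percentile:  {profits[500]:,.0f}")    # downside scenario
print(f"95th percentile: {profits[9_500]:,.0f}")  # upside scenario
```

Rather than a single deterministic profit figure, the decision-maker sees a whole distribution, from which downside and upside scenarios can be read off directly.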

Using probability distributions allows a variable to take different outcomes with different probabilities. Probabilistic modelling using the Monte Carlo method tells you what could happen and how likely it is to happen. What follows is a brief discussion of the common probability distributions used in decision analytics.

## **2.1 Common probability distributions in business decision making**

Organizations and governments use mathematical models, optimization algorithms and other tools in their decision-making to achieve strategic organizational goals. However, essential variables such as future sales and future material costs are often uncertain. In reality, a plan that promises high performance in theory can go wrong when assumptions change or prove false, or when we encounter unanticipated events such as regional wars or the COVID-19 global pandemic. Probabilistic modelling using Monte Carlo simulation methods can assist decision-makers in making strategic business decisions in the face of such uncertainties. Probability models represent events as probabilities rather than certainties, using probability distributions to increase expected returns or reduce downside risk. The main goals of this section are to present a brief review of distribution fitting and a discussion of the following probability distributions: the *Normal*, *Lognormal*, *Bernoulli* and *Triangular* distributions. We start with a review of distribution fitting and then discuss each of these distributions in turn.

#### *2.1.1 Review of distribution fitting*

Distribution fitting is used to select the statistical distribution that best fits a data set. Examples of statistical distributions include the normal, lognormal, Bernoulli and triangular distributions. A distribution characterizes a variable well when the distribution's conditions match those of the variable. The maximum likelihood estimation (MLE) method estimates the distribution's parameters from a data set. Once the estimation is complete, goodness-of-fit techniques help determine which distribution fits your data best. Distributions are defined by parameters, and four parameters are used in distribution fitting: *location*, *scale*, *shape* and *threshold*. The *location parameter* indicates where the distribution lies along the x-axis (the horizontal axis). The *scale parameter* determines how much spread there is in the distribution: the larger the scale parameter, the more spread out the distribution. The *shape parameter* allows the distribution to take different shapes; depending on its value, the distribution may be skewed to the left or to the right. Finally, the *threshold parameter* defines the minimum value of the distribution along the x-axis; the distribution cannot take any values below this threshold.
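As an illustration of MLE, the normal distribution's parameter estimates have a simple closed form: the location estimate is the sample mean, and the scale estimate is the root of the average squared deviation. The sketch below applies these formulas to simulated "historical" data (the true values of 100 and 15 are assumed purely for illustration):

```python
import math
import random

random.seed(7)  # for reproducible runs

# Hypothetical "historical data": 500 observations drawn from a
# normal distribution with location 100 and scale 15.
data = [random.gauss(100, 15) for _ in range(500)]

# Maximum likelihood estimates for the normal distribution:
# the sample mean, and the square root of the average squared
# deviation (note the divisor n, not n - 1).
n = len(data)
mu_hat = sum(data) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / n)

print(f"location (mu) estimate: {mu_hat:.1f}")     # near 100
print(f"scale (sigma) estimate: {sigma_hat:.1f}")  # near 15
```

For distributions without closed-form estimates, statistical packages maximize the likelihood numerically, and goodness-of-fit tests then compare the candidate distributions.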

#### *2.1.2 The normal distribution*

The normal distribution is the single most important distribution in statistics. It is a continuous distribution characterized by its mean and standard deviation; therefore, we say that the normal distribution is a *two-parameter* distribution. Changing the mean shifts the normal curve to the right or left; changing the standard deviation spreads the curve out more or less. The possible values of the normal distribution range over the entire number line, from minus infinity (i.e., −∞) to plus infinity (i.e., +∞). Random variables from the normal distribution form the foundation of probabilistic modelling. This distribution is also known as the Bell Curve, and it occurs naturally in many circumstances. For example, the normal distribution is seen in tests like the general certificate of secondary education (GCSE)


or National 5 examination in the U.K., where most students will score the average grade (C). In contrast, smaller numbers of students will score a B or D. A much smaller percentage of students will score an F or an A. The score grades create a distribution that looks like a bell. The bell curve is symmetrical in that half of the data will fall to the left of the mean, and half will fall to the right.

Eq. (1) is the formula for a normal density function.

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/\left(2\sigma^2\right)} \quad \text{for } -\infty < x < +\infty, \tag{1}$$

where *μ* and *σ* are the mean and standard deviation of the distribution and are fixed constants, and *e* denotes the exponential function. The mean can take both positive and negative values, including zero, while the standard deviation can only take positive values.

The standard deviation controls the spread of the distribution. If the standard deviation is small, the data cluster tightly around the mean, and the normal curve is taller. A larger standard deviation indicates more dispersion away from the mean, and the curve is flatter and wider. A *standard* normal distribution has the following properties: its mean is 0 and its standard deviation is 1; it is symmetric, with half of the values falling on either side of the mean; the total area under the curve equals 1; and approximately 68%, 95% and 99.7% of values lie within one, two and three standard deviations of the mean, respectively.
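The role of the standard deviation in controlling spread (the 68-95-99.7 rule) can be checked empirically by sampling. The sketch below assumes, purely for illustration, a normal variable with mean 50 and standard deviation 10:

```python
import random

random.seed(0)  # for reproducible runs

mu, sigma = 50.0, 10.0  # assumed mean and standard deviation
samples = [random.gauss(mu, sigma) for _ in range(100_000)]

def within(k: float) -> float:
    """Fraction of samples lying within k standard deviations of the mean."""
    return sum(1 for x in samples if abs(x - mu) <= k * sigma) / len(samples)

# Empirical check of the 68-95-99.7 rule for the normal distribution:
print(f"within 1 sd: {within(1):.3f}")  # ~0.683
print(f"within 2 sd: {within(2):.3f}")  # ~0.954
print(f"within 3 sd: {within(3):.3f}")  # ~0.997
```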


#### *2.1.3 Lognormal distribution*

The lognormal distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed [8, 9]. If the random variable *X* is log-normally distributed, then *Y* = ln(*X*) has a normal distribution. A log-normally distributed random variable takes only positive values. The probability density function for a lognormal distribution is defined by two parameters, namely the mean<sup>1</sup> *μ* and the standard deviation *σ*:

$$f(x; \mu, \sigma) = \frac{1}{x \sigma \sqrt{2\pi}} \exp \left[ -\frac{(\ln x - \mu)^2}{2\sigma^2} \right], \quad x > 0 \tag{2}$$

The shape of the lognormal distribution is defined by these two parameters. *σ* is the **shape** parameter and is known as the standard deviation of the lognormal distribution; it affects the general shape of the distribution and can be calculated from historical data. The shape parameter does not change the location or height of the graph, only its overall shape, while the **location** parameter *μ* tells you where on the x-axis the graph is located. Lognormal distributions can model growth rates that frequently occur in biology and finance, as well as time to failure in reliability studies. The lognormal distribution is widely used in situations where values are positively skewed and cannot fall below zero, for example in financial analysis for security valuation or in real estate for property valuation. Stock prices are usually positively skewed rather than normally (symmetrically) distributed: they exhibit this pattern because they cannot fall below the lower limit of zero but can rise to any price without limit. The lognormal distribution curve can therefore be used to identify the compound return that a stock can be expected to achieve over a period of time. Similarly, real estate prices exhibit positive skewness and are log-normally distributed, as property values cannot become negative.

<sup>1</sup> These two parameters should not be mistaken for the mean or standard deviation from a normal distribution. When the data is transformed using natural logarithms, the mean is the mean of the transformed data, and the standard deviation is the standard deviation of the transformed data.
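The connection between the lognormal distribution and strictly positive, positively skewed quantities such as prices can be illustrated with a short simulation. The parameter values below (*μ* = 4 and *σ* = 0.5, which describe the log of the price rather than the price itself) are purely illustrative:

```python
import random
import statistics

random.seed(3)  # for reproducible runs

# Hypothetical parameters of ln(price), not of the price itself.
mu, sigma = 4.0, 0.5
prices = [random.lognormvariate(mu, sigma) for _ in range(50_000)]

# Lognormal values are strictly positive, and the long right tail
# pulls the mean above the median (positive skew).
print(f"minimum simulated price: {min(prices):.2f}")    # strictly positive
print(f"median price: {statistics.median(prices):.1f}")
print(f"mean price:   {statistics.mean(prices):.1f}")   # above the median
```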

### *2.1.4 Bernoulli distribution*

A Bernoulli distribution is a discrete probability distribution for a Bernoulli trial, a random experiment that has only two outcomes (a 'success' or a 'failure'). The two outcomes are labelled *n* = 0 and *n* = 1, in which *n* = 1 ('success') occurs with probability *p* and *n* = 0 ('failure') occurs with probability *q* = 1 − *p*, where 0 < *p* < 1. The probability mass function for a Bernoulli distribution is given as:

$$P(n) = \begin{cases} 1 - p & \text{for} \quad n = 0\\ p & \text{for} \quad n = 1 \end{cases} \tag{3}$$

The distribution of heads and tails in tossing a fair coin is an example of a Bernoulli distribution with *p* = *q* = 1/2.
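A Bernoulli trial is straightforward to simulate: draw a uniform random number and compare it with *p*. The sketch below implements the fair-coin example:

```python
import random

random.seed(11)  # for reproducible runs

def bernoulli(p: float) -> int:
    """One Bernoulli trial: returns 1 ('success') with probability p, else 0."""
    return 1 if random.random() < p else 0

# A fair coin toss is a Bernoulli trial with p = q = 1/2.
tosses = [bernoulli(0.5) for _ in range(100_000)]
heads = sum(tosses)

print(f"proportion of heads: {heads / len(tosses):.3f}")  # close to 0.5
```

Bernoulli trials are also the building block of the earlier business-survival example: each firm's survival over one year is a single trial with *p* = 0.7.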
