*2.1.5 Triangular distribution*

The triangular distribution is a continuous probability distribution whose density function is shaped like a triangle. It is defined by three parameters: the minimum value (Min), the most likely value (Likely) and the maximum value (Max). The distribution is practical in real-world applications because we can often estimate the minimum, most likely and maximum value that a random variable will take, even when little else is known about it. In other words, we can use this distribution when we have only limited information about a variable's behaviour but can estimate its lower and upper bounds as well as its most likely value. The three conditions underlying the triangular distribution are:

1. The minimum number of items is fixed.

2. The maximum number of items is fixed.

3. The most likely number of items falls between the minimum and maximum values, forming a triangular-shaped distribution.

*Perspective Chapter: Application of Monte Carlo Methods in Strategic Business Decisions DOI: http://dx.doi.org/10.5772/intechopen.106201*

Values near the minimum and maximum are less likely to occur than those near the most likely value. The probability density function of the triangular distribution is given below:

$$f(x) = \begin{cases} \dfrac{2(x - \text{Min})}{(\text{Max} - \text{Min})(\text{Likely} - \text{Min})}, & \text{Min} \le x \le \text{Likely} \\[2ex] \dfrac{2(\text{Max} - x)}{(\text{Max} - \text{Min})(\text{Max} - \text{Likely})}, & \text{Likely} < x \le \text{Max} \end{cases} \tag{4}$$
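Eq. (4) can be turned into a small sampler by inverse-transform sampling. The sketch below is our own illustration (function and parameter names are not from the text); note that Python's standard library also provides the equivalent `random.triangular(low, high, mode)`:

```python
import math
import random

def triangular_pdf(x, min_v, likely, max_v):
    """Probability density of the triangular distribution, per Eq. (4)."""
    if min_v <= x <= likely:
        return 2 * (x - min_v) / ((max_v - min_v) * (likely - min_v))
    if likely < x <= max_v:
        return 2 * (max_v - x) / ((max_v - min_v) * (max_v - likely))
    return 0.0  # zero density outside [Min, Max]

def triangular_sample(min_v, likely, max_v, rng=random):
    """Draw one value by inverting the triangular CDF."""
    u = rng.random()
    cut = (likely - min_v) / (max_v - min_v)  # CDF value at the mode
    if u < cut:
        return min_v + math.sqrt(u * (max_v - min_v) * (likely - min_v))
    return max_v - math.sqrt((1 - u) * (max_v - min_v) * (max_v - likely))
```

For example, with Min = 0, Likely = 2 and Max = 10, the density peaks at the mode with value 2/(Max − Min) = 0.2, and every sampled value falls inside [0, 10].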

## **3. Dealing with uncertainty using probabilistic models**

Uncertainty in decision analytics emanates from a lack of knowledge about the system being modelled, and it involves random events or variables. Typical examples are questions such as 'What is the average future demand for our products?' and 'Should we invest in a capital project or not?' The most common type is uncertainty due to randomness. Uncertainty can be qualitative or quantitative. Qualitative uncertainty may be due to a lack of knowledge about the factors that affect demand. In contrast, quantitative uncertainty may come from a lack of precise knowledge of a model parameter or a lack of confidence that the mathematical model is a correct formulation of the problem. Uncertainty can affect our decisions and actions in desirable as well as undesirable ways, and it can be reduced by collecting more information or data (i.e., quantitative methods). One of the most commonly used quantitative methods for addressing uncertainty is probabilistic modelling using the Monte Carlo method.

Most business decisions are based on a forecast of future variables, such as net present value (NPV), net profit or demand for a product. The future is uncertain, so to provide a decision-maker with helpful information, you need to generate a comprehensive range of potential outcomes and their relative likelihoods. Our aim in decision analytics is to reduce uncertainty in our business decisions by envisioning possible scenarios and making forecasts based on what is considered probable within a range of probabilities. All probabilistic models have the following in common:


Selecting the correct probability distributions for the input variables is essential to maximize confidence in your results. The input probability distributions should be as realistic as possible. Remember that each distribution has a distinctive range of possible sampled values and associated probabilities/likelihoods; choosing the wrong distribution will therefore create the wrong simulation data. A natural question at this point is: how do we know the 'right' probability distributions for our variables? Unfortunately, this is a challenging question, a complete discussion of which is beyond the scope of this chapter. However, some guidelines will enable you to create reasonable models. We discuss each briefly in turn below.

**Discrete or Continuous Data:** Probability distributions describe the dispersion of the values of a random variable, so the type of variable determines the type of probability distribution. Distributions for a single random variable are divided into discrete and continuous distributions. When identifying the 'right' probability distribution for your dataset, the first question to ask is whether the variable or quantity is discrete or continuous. A *discrete* quantity has a finite or countable number of possible values, for example, the gender of a person or the country of a person's birth. A *continuous* quantity can take on any value on the real number line and has infinitely many possible values within a specified range. An example is the household incomes of Africans living in Scotland. Discrete variables are described by probability mass functions, and continuous variables by probability density functions. For a discrete distribution, each value in the support has a non-zero probability, whereas for a continuous distribution the probability of any single value is zero and only intervals carry probability.
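The distinction can be made concrete with a small sketch of our own (the binomial and normal distributions and their parameters below are illustrative choices, not from the text): a discrete variable has a probability mass function whose values sum to one over its countable support, while a continuous variable has a density whose value at a point is not itself a probability.

```python
import math

def binomial_pmf(k, n, p):
    """P(X = k) for a discrete binomial variable: each support value
    k = 0..n carries a positive probability."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Density of a continuous normal variable: f(x) is not a
    probability; only intervals of x carry probability."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# The discrete probabilities sum to exactly 1 over the countable support.
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
```

Here `total` comes out to 1 (up to floating-point error), whereas summing `normal_pdf` at isolated points is meaningless; the continuous analogue of that sum is an integral.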

**Is the Variable Bounded or Unbounded?** The second way of identifying a probability distribution that fits your dataset is to determine whether the continuous variable is bounded; that is, does it have a minimum and maximum value? Some continuous variables have exact lower bounds. For example, the price of a stock on a particular trading day cannot be less than zero. Some quantities also have exact upper bounds. For example, the percentage of a population exposed to the SARS-CoV-2 virus (COVID-19) cannot be greater than 100%. Most real-world variables have de facto bounds; that is, it is plausible to assert that there is zero probability that the quantity would be smaller than some lower bound or larger than some upper bound, even though there is no precise way to determine the bound.
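Bounds can be respected directly by choosing a distribution whose support matches them. As an illustrative sketch (the Beta distribution and the shape parameters below are our own assumptions, not taken from the text), a proportion that must lie between 0% and 100% can be modelled with a Beta distribution, whose support is exactly [0, 1]:

```python
import random

rng = random.Random(42)  # seeded for reproducibility

# A proportion (e.g., the share of a population exposed to a virus) is
# bounded on [0, 1], so a Beta distribution is one natural bounded choice;
# the shape parameters 2.0 and 5.0 here are purely illustrative.
samples = [rng.betavariate(2.0, 5.0) for _ in range(10_000)]

# Every draw respects the bounds: never below 0% or above 100%.
assert all(0.0 <= s <= 1.0 for s in samples)
```

By contrast, an unbounded distribution such as the normal would assign some probability to impossible values below 0% or above 100%.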

The discussion so far assumes that you have historical data. Historical data is often a reasonable indicator of the distribution of future outcomes for an input variable, both in terms of the general shape and parameter estimates. However, there is always an implicit assumption that the historical data is an 'accurate' representation of the future, and historical data has possible flaws that need to be considered. First, is the data genuinely representative of the potential future; that is, how similar will future conditions be to those in the past? Second, what is the sample period? The sample period is vital because if the data only goes back over a short period, certain observations could be over- or under-represented.
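When historical data is available, its summary statistics can seed the parameter estimates for a candidate distribution. A minimal sketch, assuming a hypothetical series of monthly demand figures (the numbers, the choice of mean and standard deviation as parameters, and the 24-observation rule of thumb are all illustrative, not from the text):

```python
import statistics

# Hypothetical historical monthly demand figures (illustrative only).
history = [112, 98, 105, 120, 95, 108, 117, 101, 110, 104, 99, 115]

# Point estimates for a candidate distribution's parameters, under the
# implicit assumption that the past is representative of the future.
mu = statistics.mean(history)
sigma = statistics.stdev(history)

# A short sample period weakens the estimates: flag it explicitly.
if len(history) < 24:
    print(f"warning: only {len(history)} observations; "
          "parameter estimates may be unstable")
```

The warning step reflects the sample-period concern above: with too few observations, unusual months can dominate the estimates.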

**Theory and Subject Matter Knowledge**: It is not uncommon to encounter situations in practice where there is no historical data. In such circumstances, a suitable process has to be followed to derive reasonable probability distributions and parameters. In many situations, mathematical theory or logic will point to the correct distribution. For instance, a lognormal distribution is commonly used in the literature to describe the distribution of financial assets such as share prices, because asset prices cannot be negative [10, 11]. Caution must be applied when using theory to choose a probability distribution that fits a dataset: the theory may rest on assumptions that are not valid in your situation. An example is using a binomial distribution to model the sum of several identical and independent Bernoulli trials; the binomial is appropriate only if the trials really are identical and independent.
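The lognormal example can be sketched with the standard library; the parameters `mu` and `sigma` below are illustrative assumptions, not figures from the text:

```python
import random

rng = random.Random(7)  # seeded for reproducibility

# Asset prices cannot be negative, so a lognormal model (the log of the
# price is normally distributed) is a common textbook choice.
mu, sigma = 4.6, 0.25  # log-scale parameters, not the price mean/stdev
prices = [rng.lognormvariate(mu, sigma) for _ in range(10_000)]

# The lognormal support is strictly positive: no simulated price is negative.
assert min(prices) > 0.0
```

This is precisely the theoretical check described above: the distribution is chosen because its support matches what the quantity can logically be.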
