**2. Uncertainty in LCA**

Uncertainty is a pervasive topic in LCA and can be defined in various ways [8]. In [9], the definition of uncertainty given by [10] is quoted: "Uncertainty is defined as incomplete or imprecise knowledge, which can arise from uncertainty in the data regarding the system, the choice of models used to calculate emissions and the choice of scenarios with which to define system boundaries, respectively."

LCA is a data-intensive analytical tool: methodological choices, initial assumptions, and the degree of data uncertainty have a profound effect on the validity of LCA results [11], and existing quantitative uncertainty methods in LCA also require large amounts of accurate data [12]. The problem of epistemic uncertainty, or "lack of knowledge", and aleatory uncertainty, or "variability" [13], is discussed in [14], where it is highlighted that quantification of aleatory uncertainty is usually performed using MC simulation. A detailed description of the sources of uncertainty (parameter, model, and scenario uncertainties) and of the methods used to address them (deterministic, probabilistic, possibilistic, and simple methods) is presented in [15]. In the municipal solid waste incineration model discussed in [16], it is suggested that uncertainty is due to data gaps or inaccurate data.
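
To make this concrete, the minimal sketch below shows how MC simulation propagates parameter variability through a simple emission model. The model, the distributions, and all numerical values are illustrative assumptions, not data from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 10_000  # number of MC replications

# Hypothetical uncertain inputs (illustrative values only):
# activity data ~ normal; emission factor ~ lognormal.
activity = rng.normal(loc=100.0, scale=5.0, size=n_runs)          # e.g., kg fuel
factor = rng.lognormal(mean=np.log(2.5), sigma=0.2, size=n_runs)  # e.g., kg CO2/kg fuel

emissions = activity * factor  # one model output per replication

print(f"mean = {emissions.mean():.1f}, sd = {emissions.std():.1f}")
print("95% interval:", np.percentile(emissions, [2.5, 97.5]).round(1))
```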

The LCI analysis involves the collection and calculation of data and the procedures needed to quantify the relevant inputs and outputs of the product system. Very often, the large amounts of data required for LCI [17, 18] are affected by uncertainty [19, 20]. The main sources of uncertainty presented in [21, 22] are quoted in [23, 24]. With respect to parameter uncertainty, common practice in LCA consists in representing uncertain parameters by single probability distributions, e.g., a normal distribution characterized by a mean and a standard deviation [25, 26], while in [8], uncertainty is expressed as the geometric standard deviation of intermediate and elementary exchanges at the unit process level.

Different statistical methods can be applied to propagate these uncertainties. The best-known sampling method is MC simulation (e.g., [18–26]), which is easily applied to LCA [27], while a more sophisticated method is Latin hypercube (LH) sampling, in which the sampling strategy is not entirely random but relies on stratified probability distributions [27]. MC simulation is also recommended in the IPCC 2006 Guidelines [28, 29], and most LCA software is by now able to deal with uncertainties, in most cases on the basis of MC simulation [27–29]. LH sampling performs better than random sampling when the output is dominated by a few components of the input factors, and it is also better than random sampling for estimating the mean and the population distribution function [30]. Moreover, LH sampling, like other data compression techniques, can reduce computing time [15] and may reduce the required number of simulations [27]. It is important that a sufficient number of replications be used in a simulation, since this number affects the quality of the results: in general, the higher the number of replications, the more accurate the characterization of the output distribution and the estimates of its parameters, such as the mean [19, 20]. According to [31], the statistical accuracy of the simulation increases with the number of trials.
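
As a rough illustration of the difference between the two sampling strategies, the sketch below draws the same two hypothetical normal parameters by plain random (MC) sampling and by LH sampling via SciPy's `qmc` module; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

n = 1_000

# Plain MC: independent pseudo-random draws of two normal parameters.
rng = np.random.default_rng(7)
mc = rng.normal(loc=[100.0, 2.5], scale=[5.0, 0.25], size=(n, 2))

# LH: stratified uniform samples in [0, 1)^2, mapped through the inverse
# CDF so that each stratum of each marginal is sampled exactly once.
u = qmc.LatinHypercube(d=2, seed=7).random(n)
lh = norm.ppf(u, loc=[100.0, 2.5], scale=[5.0, 0.25])

for name, s in [("random MC", mc), ("Latin hypercube", lh)]:
    out = s[:, 0] * s[:, 1]  # toy output: product of the two parameters
    print(f"{name}: mean = {out.mean():.2f}, sd = {out.std():.2f}")
```

Repeating both estimators over many seeds would show that the LH estimate of the mean typically varies less between repetitions for the same sample size, which is the practical reason LH may reduce the required number of simulations.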

As pointed out in the literature, the number of runs varies from problem to problem, from at least 1,000 runs (see *Introduction to LCA with SimaPro* [32]) to several thousand [23, 24, 30, 33]. For example, a stochastic flow shop scheduling metaheuristic model for vessel transits in the Panama Canal used 200 runs in its MC simulation model, on the grounds that beyond that point the change in the width of the 95% confidence interval for the makespan was negligible.
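
One way to check whether a given number of runs is sufficient, as in the 200-run example above, is to monitor how the width of the 95% confidence interval for the output mean shrinks (roughly as 1/√n) as runs are added. The toy model and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ci95_width(n_runs: int) -> float:
    """Width of a normal-approximation 95% CI for the mean of a toy MC output."""
    x = rng.normal(100.0, 5.0, n_runs) * rng.lognormal(np.log(2.5), 0.2, n_runs)
    return 2 * 1.96 * x.std(ddof=1) / np.sqrt(n_runs)

for n in (200, 1_000, 5_000, 20_000):
    print(f"{n:>6} runs: 95% CI width for the mean ≈ {ci95_width(n):.2f}")
```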

The problem of the number of runs in MC-based approaches was also considered in [34], which analyzed fuzzy uncertainty propagation using matrix-based LCI and proposed between 100 and 10,000 runs. In [35], the daily trading volume of IBM stock was analyzed with a Poisson distribution via MC simulations based on 1,000 repetitions. Similarly, [36] applied 1,000 iterations to estimate the uncertainties of life cycle impact assessment (LCIA) results introduced by statistical variability or by temporal, geographical, or technological gaps in the LCI data; in the same work, MC simulation with 5,000 iterations was used to estimate the combined uncertainty of an IPCC-derived greenhouse gas inventory [36]. Also in [36], where probabilistic scenarios were analyzed for each scenario using Microsoft Excel with CB for the MC method, the uncertainty analysis involved 20,000 MC simulations. Finally, [37] presented comparative results for compact neighborhood cells in Mexico City based on 100,000 MC simulations.
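
To make the matrix-based setting concrete, the sketch below runs a plain probabilistic MC loop over a hypothetical two-process technology matrix (the standard inventory calculation g = B·A⁻¹·f). The matrices, the 5% relative noise model, and the run count are illustrative assumptions and do not reproduce the fuzzy propagation used in [34].

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matrix-based LCI: scaling s solves A s = f, inventory g = B s.
A = np.array([[1.0, -0.2],   # technology matrix (process exchanges)
              [0.0,  1.0]])
B = np.array([[3.0,  0.5]])  # intervention matrix (e.g., kg CO2 per unit)
f = np.array([1.0, 0.0])     # final demand (functional unit)

n_runs = 10_000
g = np.empty(n_runs)
for i in range(n_runs):
    # Perturb the coefficients with 5% relative normal noise
    # (an illustrative assumption, not a recommended uncertainty model).
    A_i = A * rng.normal(1.0, 0.05, A.shape)
    B_i = B * rng.normal(1.0, 0.05, B.shape)
    g[i] = (B_i @ np.linalg.solve(A_i, f))[0]

print(f"inventory result: mean = {g.mean():.3f}, "
      f"95% interval = {np.percentile(g, [2.5, 97.5]).round(3)}")
```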
