#### 6.4.3 Characterization

Characterization attributes the different pollutants within an impact category to a single indicator. Climate change, for example, covers not only carbon dioxide (CO2) and methane (CH4) but also hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and others. Based on the results of natural science research, the warming capacity of each greenhouse gas over a given period (generally 100 years) is compared with that of carbon dioxide; each gas is thereby converted into an equivalent amount of carbon dioxide, and the sum, the equivalent carbon dioxide emission (CO2e), is used to evaluate the global warming potential [51]. The conversion multiplies each greenhouse gas emission by a parameter that characterizes its contribution to global warming; this parameter is called the impact factor (IF). The other environmental impact categories, of course, have their own corresponding evaluation indicators and impact factors.
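As a minimal sketch, the conversion described above is a weighted sum: each inventory flow is multiplied by its impact factor and the results are added up as CO2e. The GWP values and the example inventory below are illustrative (rounded 100-year figures; the factors of a real impact assessment model may differ):

```python
# Illustrative 100-year global warming potentials (impact factors, IF).
# These are rounded example values, not the factors of any specific model.
GWP_100 = {"CO2": 1, "CH4": 28, "SF6": 23500}

def co2_equivalent(emissions_kg):
    """Characterize a greenhouse gas inventory: sum each gas's mass (kg)
    weighted by its impact factor, giving kg CO2-equivalent."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_kg.items())

# Hypothetical life cycle inventory (kg emitted).
inventory = {"CO2": 1000.0, "CH4": 10.0}
print(co2_equivalent(inventory))  # 1000*1 + 10*28 -> 1280.0 kg CO2e
```

The same pattern applies to any impact category: only the set of substances and the table of impact factors change.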

Some commonly used characterization units are given in Figure 5, such as evaluating acidification potential by equivalent sulfur dioxide emissions and ozone depletion potential by equivalent Freon (CFC) emissions. It is noteworthy that the characterization unit is not unique: some models evaluate the acidification threat by equivalent nitrogen dioxide emissions, with corresponding pollutant impact factors that differ from those used for sulfur dioxide equivalents. This is equally correct and feasible, and it is the biggest difference between LCA impact assessment models.

#### 6.4.4 Quantification

Quantification is the data processing of the equivalent indicators of each category. Two methods are common: normalization and standardization. Both are essentially linear transformations that map the data onto values that are easier to understand and that improve the expressiveness of the evaluation results. The difference between the two is that normalization maps the evaluation results into the interval [0, 1], whereas standardized results depend on the overall distribution of the data. This stage is optional in the impact assessment, and its results vary with the evaluator and the evaluation method.

Integrated Life Cycle Economic and Environmental Impact Assessment for Transportation… DOI: http://dx.doi.org/10.5772/intechopen.86854
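Assuming min-max normalization for the first method and the z-score for the second (the text does not fix the exact formulas), the two linear transformations can be sketched as:

```python
def normalize(values):
    """Min-max normalization: maps scores into the interval [0, 1].
    Assumes the values are not all equal."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def standardize(values):
    """Standardization (z-score): subtract the mean, divide by the
    standard deviation; the result depends on the data's distribution."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

scores = [2.0, 4.0, 6.0, 8.0]   # hypothetical category indicator results
print(normalize(scores))         # all values fall in [0, 1]
print(standardize(scores))       # mean 0, spread set by the data itself
```

Either transform leaves the ranking of the schemes unchanged, which is why the stage is optional: it changes readability, not the ordering of results.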

#### 6.5 Result interpretation

#### 6.5.1 Data uncertainty analysis

Evaluation plans and conclusions may be affected when uncontrollable external factors change; analyzing this influence is called uncertainty analysis, a method commonly used in decision analysis. Through this analysis, the impact of uncertain factors on the evaluation results can be clarified and minimized, and the resistance of the evaluation conclusions to unforeseen risks can be predicted, thereby verifying the reliability and stability of the scheme.

Uncertainty analysis requires knowledge, experience, information, and judgment about future decisions. The commonly used criteria are: (1) the profit and loss value of the scheme, that is, calculating the different benefits produced under the various factors, where the scheme with the largest return is optimal; (2) the regret value of the scheme, calculated as the difference between the return actually obtained under a misjudgment of the uncertain factors and the maximum attainable return, where the scheme with the smallest regret value is best; (3) the expected value, using probabilities to compute a standard value for comparing schemes, where the scheme with the best expected value is best; and (4) decision-making criteria applied without deviating from the rules [52]. In summary, uncertainty analysis can be divided into breakeven analysis, sensitivity analysis, probability analysis, and criteria analysis.
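The regret-value and expected-value criteria above can be sketched with an invented payoff table (schemes A–C, the payoffs, and the state probabilities are all hypothetical, chosen only to show the arithmetic):

```python
# Payoff table: rows = candidate schemes, columns = states of uncertainty.
payoffs = {
    "A": [50, 20, 10],
    "B": [40, 30, 15],
    "C": [30, 25, 25],
}
probs = [0.3, 0.5, 0.2]  # assumed probability of each state
n_states = len(probs)

# (2) Regret criterion: regret = best payoff attainable in that state
#     minus the payoff actually obtained; pick the scheme whose
#     worst-case regret is smallest.
best_per_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
max_regret = {k: max(best_per_state[s] - p[s] for s in range(n_states))
              for k, p in payoffs.items()}
regret_choice = min(max_regret, key=max_regret.get)

# (3) Expected-value criterion: weight each scheme's payoffs by the
#     state probabilities; pick the scheme with the best expected value.
expected = {k: sum(p[s] * probs[s] for s in range(n_states))
            for k, p in payoffs.items()}
ev_choice = max(expected, key=expected.get)

print(regret_choice, ev_choice)  # -> B B (both criteria agree here)
```

With other payoff tables the two criteria can disagree, which is exactly why several criteria are compared before settling on a scheme.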
