### **1. Introduction and Motivation**

Forecasting is critical to the successful execution of an organization's operational and strategic functions, such as the delivery of a cost-effective and efficient supply chain [1]. The complex and dynamic organizational environment that defines much of today's forecasting function is often supported by a range of technology solutions, or forecasting decision support systems (FDSS). Typically, FDSS integrate managerial judgment, quantitative methods, and databases to aid the forecaster in accessing, organizing, and analyzing forecasting-related data and judgments [1-2]. Forecasting task complexity can negatively impact forecast reliability, accuracy, and performance [3-4]. Specifically, it can influence two elements of forecaster behavior: deriving forecasts and judgmental adjustment of these forecasts [5]. In executing these functions, forecasters may utilize different heuristics for complex series as opposed to simple ones in order to mitigate cognitive demands [6-7]. Because the selection and execution of these heuristics can be influenced by forecaster experience and knowledge base, integrating time series complexity into Forecasting Support Systems (FSS) design can bring greater objectivity to forecast generation, while simultaneously providing meaningful guidance to forecasters [1].

Advances in the design and use of FDSS, however, have been slow to come because of the following range of problems that limit their usefulness in the forecasting domain. First, FDSS are expensive to create, operationalize, and calibrate and, therefore, require significant organizational investment. Second, and most significantly, forecasts generated by such expensive systems are often subjected to judgmental adjustments. Such adjustments may be driven by forecaster confidence, or lack thereof, in FDSS capabilities, as well as forecasters' sense of ownership once they make judgmental adjustments as opposed to just accepting the outputs of a forecasting model. Third, forecaster confidence in FDSS and its outcomes is influenced by numerous system abilities, such as the strength of and confidence in explanations provided about forecast creation [8], information presentation [9], data about the system's past success rate [10], support from analogical forecasting tasks [11], and the ability to decompose the forecasting problem [12], to mention a few. Lastly, the functionality and processes underlying FDSS are sometimes difficult to align with the experiential thinking of forecasters [11]; i.e., if such support systems adaptively support complex and simple tasks according to task demands, forecasters may be less tempted to make judgmental adjustments [11, 13-14].

© 2012 Adya and Lusk; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Designing Effective Forecasting Decision Support Systems: Aligning Task Complexity and Technology Support

http://dx.doi.org/10.5772/51255



The above discussions and supporting literature reaffirm that the level of agreement between a task and the functionalities of the supporting technologies, i.e., the task-technology fit (TTF), can determine individual performance on tasks [15-19]. TTF studies suggest that the extent to which a technology supports individuals in the performance of their portfolio of tasks can determine the degree of success in the execution of those tasks through both improved performance and better system utilization [15, 20]. In a sense then, TTF provides important justifications for discretionary use of FDSS for simple and complex tasks. Under conditions where FDSS perform well empirically, it would likely be worth committing the time and resources to utilizing the FDSS. In contrast, where FDSS do not perform effectively, or perform only as effectively as human judgment, such commitment of time and resources to parameterize the FDSS may not be warranted. It has been asserted that certain functionalities of a technology are better suited for specific types of processes or tasks [17]. To this end, improved alignment between FDSS and FDSS-supported tasks, essentially better Forecasting Task-Technology Fit (FTTF), can mitigate the factors driving forecasters to make *ad hoc* adjustments of questionable validity in order to rationalize their worth [19].

In this study, we specifically examine the issue of forecasting task complexity and commensurate FDSS support to provide a framework for FSS design and implementation, using TTF as an underlying motivator. In doing so, we achieve the following:

**a.** Develop a characterization of complex and simple time series forecasting tasks. Herein, we rely on historical patterns and domain-based features of time series to develop discrete task profiles along a simple to complex continuum.

**b.** Review evidence from the empirical literature regarding forecasting task complexity and its implications for FDSS design, and suggest designs that would benefit the forecasting process.

**c.** Develop an agenda for research and discuss practice-related issues with regard to balancing forecasting utility with efficiency given the costs of FDSS.

### **2. Literature Review**

#### **2.1. Task-Technology Fit**

TTF theory defines tasks as actions carried out by individuals to process inputs into outputs [20], and a task profile as the aspects of these tasks that might require users to rely on information technology. Technologies are defined as any set of tools, such as FDSS, required for executing these tasks. The fit between tasks and technologies, then, refers to the "degree to which a technology assists an individual in performing his or her portfolio of tasks" [20]. In the usual case, TTF theory is implemented at two levels: (a) an organizational level that examines the presence of data, processes, and high-level system features (e.g., system reliability) that fulfill the broad needs of a decision domain, and (b) a context level that examines the presence of system features specific to a decision context, e.g., system capabilities for time series forecasting or group decision making. Studies in both contexts develop TTF concepts from three perspectives: identification of (a) a task profile [15], i.e., tasks specific to the domain of study; (b) technology features or needs specific to a task profile; and (c) the impact of TTF on individual performance. We discuss these findings in the next sections.


Most TTF studies characterize tasks based on organizational level decision support needs such as information and data quality, access, procedures surrounding data access, and system reliability [19-21]. Others characterized tasks in terms of non-routineness, defined as lack of analyzable search behavior, and interdependence with other organizational units; later, a dummy variable was added to capture managerial factors as a determinant of user evaluation of information system use [20]. Recently, an increasing number of studies have examined tasks more contextually, i.e., specific to the domain of study. Most commonly, these studies classified tasks according to their complexity [22-23]. Tasks have also been characterized on the basis of complexity through a proposed task classification that ranged from simple to fuzzy tasks [23]. In group decision making, [15] extended this classification to further define task complexity. These studies define task complexity as having four dimensions: *outcome multiplicity*, suggesting more than one desired outcome; *solution scheme multiplicity*, suggesting more than one possible approach to achieving the task goal; *conflicting interdependence*, which can occur when adopting one solution scheme conflicts with another; and *outcome uncertainty*, defined as the extent of uncertainty regarding a desired outcome from a solution scheme. Others found that task support for virtually-linked teams often translated into support related to conflict management, motivation/confidence building, and affect management [17]. Other applications of TTF theory appear in mobile commerce for the insurance industry [24], consumer participation in e-commerce [25], and software maintenance and support [16], among others. In the forecasting domain, surprisingly, we found only one TTF study [19], which adapted organizational level factors to the forecasting domain by examining needs related to forecasting procedures.

In the TTF framework, technological support has been characterized most often in terms of hardware, software, network capabilities, and features of the support system. Others, for instance, developed technology characteristics based upon input from an independent panel of IS experts [20-21]. These technology capabilities included the relationship between a DSS and its user, quality of support, timeliness of support, and reliability of the system, among others. Further, some relied on the same technology characteristics when considering the adoption of Personal Digital Assistants (PDAs) in the insurance industry [24]. In keeping with the underlying emphasis of TTF, i.e., fit between tasks and technologies, context-dependent studies have focused on specific capabilities for the domain of interest. For instance, others examined the richness of communication media for resolving conflict between virtual teams and motivating positive teamwork [17]. Some proposed that technologies supporting group decision making must be capable of providing communications support and structuring group interaction processes, along with supporting the information processing needs of the group [15]. Finally, [19] applied the framework of [20] to the forecasting domain by focusing more significantly on system functionality and capabilities, specifically on data, methods, and forecasting effectiveness.


TTF studies have most commonly examined two outcomes of alignment between task and technology: system utilization and task performance. One study found a suggestive relationship between TTF and system utilization, and a strong positive connection with performance that was mediated by utilization [20]. In contrast, it was suggested that TTF strongly predicted customer intention to purchase from an e-commerce site [25]. A strong relationship between performance on certain insurance tasks and use of mobile devices has also been confirmed [24]. Finally, others found FSS characteristics, specifically the forecasting procedures included in the FSS, to be positively related to perceptions of TTF which, in turn, were positively related to forecasting performance [19]. In general, these results confirm a strong association between performance and alignment between task needs and supporting technologies. This is a critical linkage for our study.

#### **2.2. Adapting TTF to the Forecasting Domain**

Time series extrapolation calls for quantitative methods to forecast the variable of interest under the assumption that behaviors from the past will continue in the future [2]. Time series forecasting is also found to improve with the use of domain knowledge, such as for series decomposition [26]. In essence, successful time series extrapolation relies upon recognizing idiosyncratic aspects of the series as defined by patterns in the historical data as well as domain knowledge likely to emerge through unknown future generating processes. Considering this, the implementation of time series task classifications using TTF theory is best achieved by following the context-specific approach discussed earlier, as this perspective emphasizes conditions that impact the contextual usefulness of FSS. In other words, if the time series, the task that a forecaster must execute, serves as input into the FDSS, task characterizations may best emerge from the features of the series being processed. For purposes of this paper, we follow recommendations by [15, 23] to classify decision tasks, specifically time series, along a simple to complex continuum.
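To make the extrapolation assumption concrete, consider a minimal sketch of trend extrapolation. The function names, the least-squares method, and the example data are our own illustrative assumptions, not a method prescribed by the cited studies:

```python
# Minimal illustration of time series extrapolation: fit a least-squares
# linear trend to the history and project it into the forecast horizon.
# This is a hypothetical sketch, not the chapter's prescribed method.

def linear_trend(series):
    """Return (intercept, slope) of the least-squares line through the series."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
    var = sum((x - x_mean) ** 2 for x in xs)
    slope = cov / var
    return y_mean - slope * x_mean, slope

def extrapolate(series, horizon):
    """Extend the fitted trend `horizon` periods past the end of the series."""
    intercept, slope = linear_trend(series)
    n = len(series)
    return [intercept + slope * (n + h) for h in range(horizon)]

# Hypothetical demand history; the forecasts simply continue the fitted trend.
demand = [100, 104, 107, 111, 116, 119]
print(extrapolate(demand, 3))
```

Such a sketch also illustrates why domain knowledge matters: the projection is only as good as the assumption that the historical generating process persists into the forecast period.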

#### *2.2.1. Complexity in Time Series Forecasting*

Complexity is inherent in the forecasting process [12]. While it can be argued that all one needs is a forecasting method and adequate data, a non-trivial view of the forecasting process suggested by [2] provides a more realistic perspective: that of decomposition. Each stage of the forecasting process entails coordinated action that requires the use of judgment and analytical skills, inputs from multiple organizational units, as well as validation and integrity checks. When decomposed into its components, the forecasting process integrates domain knowledge, historical data, causal forces acting upon the domain, as well as physical characteristics of the process producing the measured realizations that are to be forecast [12].

Of interest in our study, however, is characterization of complexity in the context of the task, i.e., the time series being forecast. The forecasting literature has provided some interpretations of complex time series. Most commonly, time series are defined as complex if the underlying processes that generate them are complex [27]. Chaotic time series, as opposed to "noise driven" series, wherein observations drawn at different points in time follow a non-linear relationship with past observations of variables [28], have also been referred to as complex series. More recently, studies have characterized complexity in terms of time series composition. For instance, [26] describe complex time series as those where forecasters expect conflicting underlying causal forces, i.e., underlying forces will push the series in different directions in the forecast period. In essence, such series can be represented as a composite of multiple series, where the challenge is to determine the overall effect or momentum of these multi-directional forces, whose net effect could be static, i.e., no movement due to offsetting causal forces.

Most views presented above define complexity in terms of either specific patterns in historical data (e.g., variation or volatility) or underlying processes and influences (e.g., causal forces). This constrained view of time series complexity is surprising considering the taxonomy of time series features available in the existing literature. Time series features often captured in the empirical literature include stationarity or non-stationarity of series [29-31]. Stock market forecasting studies have often relied on capturing features like volatility persistence, leptokurtosis, and technical predictability of stock-related series [32-33]. Others classified time series in terms of three features: irrelevant early data (where the generating process has fundamentally and irrevocably changed such that it creates a misleading impression of the future), outliers, and functional form. Although focused on assessing judgmental confidence intervals, some have characterized time series in terms of trend, level, seasonality, and noise [34]. These features provided 57% of the explanation for the confidence intervals chosen by forecasters, suggesting that a finer breakdown of series characterizations may be worth consideration.
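Features of the kind surveyed above are straightforward to measure. The following sketch computes two of them, a volatility measure (coefficient of variation) and agreement between the basic (full-history) trend and the recent trend; the function names, the window size, and the example series are our own illustrative assumptions:

```python
# Hypothetical feature measurements for a time series: coefficient of
# variation (an uncertainty/volatility feature) and whether the recent
# trend points the same way as the basic (full-history) trend.
from statistics import mean, pstdev

def coefficient_of_variation(series):
    """Population standard deviation scaled by the mean."""
    return pstdev(series) / mean(series)

def slope(series):
    """Least-squares slope of the series against its time index."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = mean(series)
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

def trends_agree(series, recent_window=6):
    """True when the recent trend has the same sign as the basic trend."""
    basic = slope(series)
    recent = slope(series[-recent_window:])
    return (basic >= 0) == (recent >= 0)
```

For example, a series that rises steadily and then turns down in its last few observations would report `trends_agree(...) == False` for a short recent window, flagging the directional inconsistency that the literature above associates with harder-to-forecast series.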

For purposes of this paper, we rely on a more extensive taxonomy of time series characterizations suggested by [35-37] to classify time series along the continuum of simple to complex tasks. Their classification is particularly relevant because it captures not merely a range of patterns in historical data but also underlying generating processes and judgmental expectations about the future based on domain knowledge. Initially, it was suggested that there were 18 such features [35]; these were later expanded to 28 by [36-37]. For purposes of this paper, we use a subset of these 28 features, particularly in the context of the four feature clusters discussed below. These time series features are described at length in Table A in the Appendix and in [35].

#### *2.2.2. Time Series Task Characterizations*


As mentioned previously, some have classified tasks along a simple to fuzzy (complex) continuum such that system features could be developed in alignment with the task [15, 23]: in essence, TTF. Table 1 defines the key task types and their characteristics proposed in [15].


iations that are rare, unusual, and easily accounted for, thereby making them easier to forecast. Alternatively, domain knowledge is clear and non-conflicting. Such tasks are ex‐ pected to pose low cognitive load on the forecaster because confounding features and un‐

| Series Characteristics | Simple Time Series | Moderately Complex Time Series | Complex Time Series |
|---|---|---|---|
| *Instability:* recent run not long; near a previous extreme; irrelevant early data; changing basic trend; suspicious pattern; outliers present; level discontinuities; unusual last observation | Few or no instability features present | Some instability features present | Many instability features present |
| *Uncertainty:* coefficient of variation > 0.2; difference between basic and recent trend (up or down) | Low variation about the trend; recent and basic trends agree | Medium to high variation about the trend; recent and basic trends may disagree | High variation about the trend; recent and basic trends disagree |
| *Structure:* significant basic trend; clear direction of trend | Insignificant trend; no or low trend | Significant trend; clear direction | Significant trend; lack of clarity in direction due to confounding features |
| *Presence of Domain Knowledge:* causal forces; functional form | Additive or multiplicative series; simple, consistent causal forces | Multiplicative series; multiple causal forces | Multiplicative series; unknown or inconsistent causal forces |

**Table 2.** Time Series Task Classification Based on Series Features.
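Several of the instability features listed in Table 2 lend themselves to mechanical detection by the automated feature-detection routines an FSS might run. The sketch below uses deliberately simple heuristics, a z-score cutoff for outliers and a split-half mean comparison for level discontinuities; the function names and thresholds are our own illustrations, not the operationalizations used in the C&A feature set.

```python
import statistics

def detect_outliers(series, z=2.5):
    """Flag points more than `z` sample standard deviations from the mean.
    A crude stand-in for the 'outliers present' feature."""
    mu = statistics.mean(series)
    sd = statistics.stdev(series)
    return [i for i, x in enumerate(series) if sd > 0 and abs(x - mu) > z * sd]

def has_level_discontinuity(series, jump=2.0):
    """Heuristic level-shift check: does the mean of the second half differ
    from the mean of the first half by more than `jump` first-half
    standard deviations? A stand-in for 'level discontinuities'."""
    half = len(series) // 2
    first, second = series[:half], series[half:]
    sd = statistics.stdev(first)
    return sd > 0 and abs(statistics.mean(second) - statistics.mean(first)) > jump * sd

data = [10, 11, 10, 12, 11, 30, 31, 30, 32, 31]   # clear level shift midway
print(has_level_discontinuity(data))               # True
```

In a production FSS these checks would be replaced by validated feature codings; the point is only that the taxonomy can be populated without manual inspection of every series.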

In Table 2, we offer a simplified adaptation of this taxonomy for the forecasting domain and classify series as simple, moderately complex, and complex. Time series features in [35], hereafter referred to as C&A, were used to develop the complexity taxonomy. The C&A feature set is particularly relevant because it captures not merely a range of visible patterns in historical data that can influence judgmental forecasting processes (e.g., outliers, trends, and level discontinuities), but also recognizes underlying generating processes and domain-based expectations about the future. These features, described in Table A in the Appendix, can broadly be categorized into four clusters: (a) *uncertainty*, defined by variation around the trend and directional inconsistencies between the long-run and recent trend; (b) *instability*, characterized by unusual time series patterns such as irrelevant early data, level discontinuities, outliers, and unusual observations; (c) *domain knowledge*, defined as the availability (or lack thereof) of useful domain knowledge and the underlying functional form of the series, i.e., multiplicative or additive; and (d) *structure*, the presence or absence of a significant trend, i.e., a perceptible signal. In the forecasting literature, these features are the most comprehensive attempt to characterize series for use in an FSS, namely Rule-based Forecasting (RBF). RBF studies have extensively validated these features, first in C&A on 126 time series, then in [38] across 458 time series, and finally on the 3003 M3-competition series [36]. Considering this, we relied on a subset of these 28 features (see Table A - Appendix and C&A) for development and validation of our taxonomy. The four feature clusters discussed above were used for classification as they have the potential of destabilizing a time series (C&A). Table 2 provides a conceptual view of the details of this classification.

Using features from C&A, time series tasks can be classified into three categories, with simple and complex forecasting tasks being the two ends of this continuum. *Simple forecasting tasks* represent low instability and uncertainty, demonstrate relatively clear structure in their underlying trend patterns, and do not rely on significant domain knowledge to generate useful forecasts. In most instances, demographic series such as the percentage of male births tend to regress towards a known mean [37], have slow but steady trends, and variations that are rare, unusual, and easily accounted for, thereby making them easier to forecast. Where domain knowledge applies, it is clear and non-conflicting. Such tasks are expected to pose low cognitive load on the forecaster because confounding features and underlying processes are few and, consequently, evident.
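To make this classification concrete, here is a minimal sketch of how the feature clusters might map onto the three categories. The `SeriesFeatures` fields, the decision rules, and all thresholds other than the coefficient of variation > 0.2 noted in Table 2 are illustrative assumptions, not the validated RBF rules.

```python
from dataclasses import dataclass

@dataclass
class SeriesFeatures:
    """Illustrative subset of C&A-style features (names are our own)."""
    coeff_variation: float       # std/mean of the series
    trends_agree: bool           # basic (long-run) and recent trend point the same way
    n_instability_features: int  # e.g. outliers, level discontinuities, unusual last obs
    causal_forces_known: bool    # domain knowledge identifies consistent causal forces

def classify_complexity(f: SeriesFeatures) -> str:
    """Map the uncertainty, instability, and domain-knowledge clusters onto
    the simple / moderately complex / complex continuum of Table 2."""
    uncertain = f.coeff_variation > 0.2 or not f.trends_agree
    if f.n_instability_features == 0 and not uncertain and f.causal_forces_known:
        return "simple"
    if f.n_instability_features >= 3 or (uncertain and not f.causal_forces_known):
        return "complex"
    return "moderately complex"

# A stable, low-variation series with known causal forces is simple:
print(classify_complexity(SeriesFeatures(0.05, True, 0, True)))   # simple
```

The cutoff of three instability features for the "complex" bucket is an arbitrary placeholder; a real FSS would calibrate such rules against the validated feature codings discussed above.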



| Task Categorization | Description |
|---|---|
| Simple Tasks | Low uncertainty; low conflicting interdependence; clear solution. |
| Problem Tasks | Multiple solution schemes to a well-specified outcome. Needs involve finding the optimal way of achieving the outcome. |
| Decision Tasks | Finding solutions that meet the needs of multiple conflicting outcomes. Selecting the best option from several available. |
| Judgment Tasks | Conflicting and probabilistic nature of task-related information. Need to integrate diverse sources of information and predict future states. |
| Fuzzy Tasks | Multiple desired states and multiple ways of getting to them. Unstructured problems that require effort to understand. High information load, uncertainty, and information diversity. Minimal focus for the task executor. |

**Table 1.** Overview of Suggested Task Characterizations [15].

At the other end of the continuum, by contrast, *complex forecasting tasks* are characterized by greater instability and uncertainty, do not demonstrate a clear generating structure, and may require "systematic" integration of a complex set of domain knowledge features that send conflicting signals [26]. For instance, forecasting monetary exchange rates is made challenging by the low signal-to-noise ratio and the non-ergodic nature of the process, caused by numerous undetermined underlying drivers [37]. Such series may pose greater cognitive demand on forecasters, who may find it difficult to isolate features such as trends and instabilities and to recognize underlying processes.

*Moderately complex time series* fall somewhere along the continuum (see Table 2). These tasks demonstrate some instability, variation about the trend may be higher than for simple series, and/or recent and basic trends may conflict. The structure of such series demonstrates a more complex interplay of domain knowledge than for simple series, thereby admitting multiple possible solution schemes depending upon the interpretation and application of domain knowledge. A decomposition of UK highway deaths illustrated such conflicting scenario possibilities: decomposing the series yielded two conflicting elements of domain knowledge – growth in traffic volume and decay in the death rate [39]. Decomposing the time series into its components helped improve forecasts for the target series.


**•** *Practical Proposition 3:* FDSS-generated forecasts for complex time series will be more accurate than judgmental forecasts of moderately complex time series.

**•** *Practical Proposition 4:* FDSS-generated forecasts for complex time series will be more accurate than judgmental forecasts of complex time series.

**•** *Practical Proposition 5:* FDSS-generated forecasts for simple time series will be as accurate as judgmental forecasts of simple time series.




#### *2.2.3. Judgmental Accuracy on Complex and Simple Forecasting Tasks*

In earlier sections, we have mentioned the dearth of studies in forecasting on complexity and its implications for performance and outcomes. Consequently, we have relied on general studies in other domains to highlight the implications of complexity when forecasting complex versus simple tasks. Most fundamentally, [23] defines simple tasks as those that are not complex. In general, more complex tasks require greater support [15] and richer information presentation [40-43]. Complex tasks increase cognitive overload and place greater information processing requirements on the user, thereby reducing performance [44-45]. Under such conditions, decision makers choose "satisficing" but suboptimal alternatives [46], thereby lowering decision accuracy. When task complexity does not match the abilities of the decision maker, motivation and, consequently, performance may decline [47]. Using a Lens model approach, [48] attributed poor judgment in complex task settings to limitations in participants' ability to execute judgment strategies, as opposed to their knowledge about the task domain, essentially a lack of experiential acuity. This could be attributable to loss in perceived self-efficacy and efficiency in the application of analytical strategies.

In the forecasting domain, studies have uncovered confounding effects in situations that manifest uncertainty and instability. For instance, [34] found that as trend, seasonality, and noise increased in a time series, forecasters indicated wider confidence intervals, and hence greater uncertainty, in their forecasts. Further, [49] found that while forecasters successfully identified instability in time series, their forecasts were less accurate than statistical forecasts when such instabilities were present. Considering this, even experienced forecasters may show lowered performance in complex settings. These multidisciplinary findings suggest Practical Propositions 1 and 2 below.


FDSS, through effective design, can allay the cognitive and human information processing demands that task complexity places on the decision maker, and thereby potentially increase system use and confidence. DSS range from simple decision aids, such as visual, as opposed to text-based, presentations, to complex intelligent systems that adaptively perceive and respond to the decision context. The alignment between task needs and technology support, however, needs reflection. If misaligned, decision maker performance can be compromised. For instance, [50] evaluated a DSS for the treatment of severe head injury patients by comparing physician expert opinions with results generated by the DSS. The study concluded that the tool was not accurate enough to support complex decisions in high-stress environments. Similarly, [51] found that providing certain types of cognitive support for real-time dynamic decision making can degrade performance and that designing systems for such tasks is challenging. Based on these studies, Practical Propositions 3 through 5 above can be proposed.


#### *2.2.4. Judgmental Adjustment of FDSS Generated Forecasts*



**•** *Practical Proposition 1:* Judgmental forecasts of complex time series will be less accurate than judgmental forecasts of simple time series.

**•** *Practical Proposition 2:* Judgmental forecasts of moderately complex time series will be less accurate than judgmental forecasts of simple time series but more accurate than those for complex time series.




While the existing forecasting literature has yielded several recommendations for forecast adjustment, once again this area suffers from a lack of sufficient empirical findings regarding the adjustment of forecasts for complex and simple tasks. Here too, we rely on multidisciplinary studies and findings from our own studies [52] to support our propositions. The forecasting literature, for instance, has suggested that statistically generated forecasts should be adjusted based on relevant domain knowledge and contextual information that practitioners gain through their work environment. Others [53-54] demonstrated that familiarity with the specific factors being forecast was most significant in determining accuracy. Judgmental adjustments should also be applied to statistically generated forecasts under highly uncertain situations or when changes are expected in the forecasting environment, i.e., under conditions of instability. Both uncertainty and instability, according to our earlier framework in Table 2, lend complexity to the forecasting environment.

Managerial involvement in the forecasting process, primarily in the form of judgmental adjustments, has been questioned in terms of its value-added benefits. For instance, [55] suggest that the benefits of managerial adjustment to stable series may not be justified, as automatic statistical forecasts may be sufficiently accurate. In contrast, they recommend high levels of managerial involvement for data that has high uncertainty and, in that sense, high complexity surrounding it.

In our own empirical studies comparing FDSS and judgmental forecasting behaviors [52], we find that, when given FDSS-generated forecasts, forecaster adjustments to simple series harm forecast accuracy but improve the accuracy of complex series when compared to unadjusted FDSS forecasts. Furthermore, forecasters react to complex series by assuming forecast values to be too low and, in response, adjust forecasts more optimistically than necessary. In contrast, they view the forecasts for simpler series to be aggressive and accordingly overcompensate by suppressing the forecasts. Accordingly, we propose:

**•** *Practical Proposition 6:* Forecasters will adjust complex series more optimistically than simple series, whose forecasts will be suppressed.

**•** *Practical Proposition 7:* Adjustments to FDSS-generated forecasts for simple series will harm forecast accuracy.

to high-level cognitive needs of decision makers, the context of decision making, and task characteristics [64]. The FTTF framework proposed in this paper provides a task-based approach to such adaptive systems. As a time series is initially input into the FSS, automated feature detection routines can categorize the time series along the simple to complex continuum. Task profiles gathered in this way could be used to customize levels of *restrictiveness* and *decisional guidance* for simple versus complex tasks.


Restrictiveness is the "degree to which, and the manner in which, a DSS limits its users' decision making process to a subset of all possible processes" [65, p. 52]. For example, a DSS may restrict access to certain data sets or the ability to make judgmental inputs and adjustments to the system. Restrictiveness can be desirable when the intention is to limit harmful decision choices and interventions. However, the general IS literature has largely recommended limited use of restrictive features in DSS [1, 61, 65-66]. Excessive restrictiveness can result in user frustration and system disuse [65, 67]. It can also be difficult for the designer to determine *a priori* which decision processes will be useful for a particular situation [1]. However, when users are poorly trained [1], are known to make bad decision choices, or when underlying conditions are stable, restrictive DSS features can be beneficial.

Decisional guidance, "the degree to which, and the manner in which, a DSS guides its users in constructing and executing the decision-making processes by assisting them in choosing and using its operators" [65, p. 57], can be informative or suggestive. *Informative guidance* provides factual and unbiased information, such as visual or text-based displays of data, thereby empowering the user to choose the best course of action. *Suggestive guidance*, on the other hand, recommends an ideal course of action to the user, such as by comparing available methods and recommending the one deemed most suited to the task at hand. [1] provide an excellent and extensive review of decisional guidance features for FSS that we highly recommend. To complement their recommendations, in the next few paragraphs we provide additional design guidelines emergent from the theme of this study.
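One possible, purely hypothetical, encoding of how a task profile could drive restrictiveness and guidance settings is a lookup from the complexity categories of Table 2 to a support profile. The `SupportProfile` fields and the specific mapping below are our own assumptions, chosen to illustrate the trade-off rather than prescribe it.

```python
from dataclasses import dataclass

@dataclass
class SupportProfile:
    allow_final_adjustment: bool  # restrictiveness: may the user override the output?
    guidance: str                 # "informative" or "suggestive"
    show_feature_cues: bool       # display underlying series features to the user

# Hypothetical mapping from complexity category to FSS support settings.
SUPPORT_BY_COMPLEXITY = {
    # Simple tasks: low cognitive strain, so enrich the display with cues,
    # but restrict adjustment of the final forecast (it tends to harm accuracy).
    "simple": SupportProfile(allow_final_adjustment=False,
                             guidance="informative", show_feature_cues=True),
    # Moderately complex tasks: suggest a course of action, allow adjustment.
    "moderately complex": SupportProfile(allow_final_adjustment=True,
                                         guidance="suggestive", show_feature_cues=True),
    # Complex tasks: restrict the display to reduce information overload;
    # judgment enters as structured input rather than free-form final edits.
    "complex": SupportProfile(allow_final_adjustment=True,
                              guidance="suggestive", show_feature_cues=False),
}

def support_for(category: str) -> SupportProfile:
    return SUPPORT_BY_COMPLEXITY[category]

print(support_for("simple").guidance)   # informative
```

The design choice worth noting is that restrictiveness and guidance are configured per task profile rather than globally, which is the adaptive behavior the FTTF framework argues for.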

*A.1 Restrict Where Harmful Judgment can be Applied:* When unrestricted, forecasters are free to apply adjustments at many levels of the forecasting process, such as to the data to be used or excluded, the models to be applied and those to be ignored, and changes to decision outcomes. Similarly, as we demonstrated in our Study 2 [52], inexperienced forecasters may attempt to overcome their limited knowledge of underlying decision processes by making adjustments to the final outcomes [1]. FSS can restrict where such judgmental adjustments are permitted. Specifically, judgment is best utilized as input into the forecasting process or within the context of a validated knowledge base rather than as an adjustment to the final decision outcome [55].

*A.2 Restrict FSS Display Based on Task Complexity:* Since complex tasks pose significant demands on human cognitive and information processing capabilities, FSS displays for such tasks can be restricted, as opposed to simple tasks, which can benefit from decisional guidance. Since simple tasks create lower cognitive strain, performance on such tasks can potentially be improved by increasing user awareness of forecasting cues, such as by displaying features underlying the time series, generating processes, forecasts from alternative methods, and the forecasting knowledge underlying the final forecasts. For instance, [49] found that mak‐




**•** *Practical Proposition 8:* Adjustments to FDSS-generated forecasts for complex series, if executed correctly, can improve forecast accuracy.

As a caveat to the last proposition above, judgmental adjustments to complex forecasts may be best supported by FDSS in a way that the adjustments are structured [53] and validated automatically through improvements in forecast accuracy [35, 39]. In the following sections, we rely on the TTF framework and other DSS studies to propose ways in which FDSS could be best designed to adaptively support simple to complex tasks.
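The caveat above, that adjustments be validated automatically through improvements in forecast accuracy, can be sketched as a simple accept/reject gate. This is an illustrative mechanism (a holdout MAPE comparison), not the validation procedure of [35, 39], and in practice the comparison would run over rolling forecast origins rather than a single window.

```python
def mape(actual, forecast):
    """Mean absolute percentage error over paired observations."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def accept_adjustment(holdout_actuals, fss_forecast, adjusted_forecast):
    """Retain a judgmental adjustment only if it improved accuracy on a
    holdout window; otherwise fall back to the unadjusted FSS forecast."""
    if mape(holdout_actuals, adjusted_forecast) < mape(holdout_actuals, fss_forecast):
        return adjusted_forecast
    return fss_forecast

actuals  = [100, 110, 120]
fss      = [90, 100, 110]    # FSS under-forecasts this (complex) series
adjusted = [98, 109, 121]    # optimistic upward judgmental adjustment
print(accept_adjustment(actuals, fss, adjusted) is adjusted)   # True
```

A gate like this operationalizes Propositions 7 and 8 in one mechanism: harmful adjustments to simple series are filtered out, while accuracy-improving adjustments to complex series survive.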
