Analysis of Financial Time Series in Frequency Domain Using Neural Networks

*Stefan Nikolić and Goran Nikolić*

*In: Fourier Transforms - Century of Digitalization and Increasing Expectations. DOI: http://dx.doi.org/10.5772/intechopen.85885*

## **Abstract**

Developing new methods for forecasting time series, and applying existing techniques in new areas, is a permanent concern both for researchers and for companies seeking competitive advantage. Financial market analysis is important for investors who put money into the market and want some assurance of multiplying their investment. Among the existing techniques, artificial neural networks have proven to be very good at predicting financial market performance. In this chapter, a nonlinear autoregressive exogenous (NARX) neural network is used for time series analysis and for forecasting specific values. The network takes as input both data in the time domain and data in the frequency domain obtained using the Fourier transform. After the experiment was performed, the results were compared to determine which time series is potentially best suited for prediction, as well as which domain gives better results.

**Keywords:** financial market, time series, forecasting, currency pair, stock exchange index, NARX neural network, Fourier transform

## **1. Introduction**

The future has five faces: innovation, digitalization, urbanization, community, and humanity. The scientific sector should develop each of them, but the one occupying a leadership position is definitely digitalization. Although it strives toward the future and struggles daily to overcome professional challenges, it is in fact already the present. Modern technologies surround all of us, and they are our most reliable partners for the future. Through good-quality work and determination, clients will share their business needs and requirements, certain that the right solutions will be found for them.

Nowadays, many companies and organizations collect data on a large scale in order to extract from it the knowledge that helps managers gain a competitive advantage. Timely and accurate analysis of such data is a difficult task, and it is not always possible with conventional methods. Considering the effect that could be obtained, new horizons are opening, and challenges are created for researchers to extract useful information [1].

A concept that is very important, and in which more and more companies are investing, is data science: finding new ways to discover the real needs, behaviors, and intentions of users, along with their detailed analysis. Such analysis, improved by machine learning methods and by training on the data in general, delivers a complete experience as a mix of business and technology. The main purpose is a good mechanism for meeting the increasing demands of users and even exceeding them, because this is the biggest competitive advantage for companies in every modern business. Neural networks are certainly an indispensable part of it.

One of the modern directions in the development of information technologies, one that is promising and has found application in practice, is undoubtedly the development of artificial neural networks. Neural networks are learning models based on the workings of biological neural networks such as the human brain. From such a learning model, a system can be built that adapts to changes, which are very common on the market, and would therefore be more successful. This stems from the desire to create an artificial system capable of performing sophisticated and intelligent calculations, and it represents a promising direction for the future.

The aim of this chapter is to predict the financial time series using a neural network that has been trained and tested both in the foreign exchange market and the stock market. Historical data has been collected and analyzed to create a model that would establish a link between the corresponding variables.

## **2. Methods and techniques of problem solving**

The development of neural networks is currently oriented in two directions. The first is the increasing availability of modern computers and of software tools for easy use, which enables the rapid development of neural networks by individuals and groups that have only basic knowledge of the area. The other direction is the notable success of neural networks in areas where traditional computer systems have many problems and disadvantages. Nevertheless, there are many other methods that deal with the same or similar problems, so some of them will be listed.

A method increasingly used in predicting financial time series is the support vector machine (SVM). There are many scientific papers comparing this method with neural networks: which is more precise, which better meets the set goals, and what advantages each has over the other [2, 3].

A commonly used method for this type of problem is the random walk. It underlies a financial theory that describes changes in the stock market as random and unpredictable. Changes are assumed to follow a statistical distribution, and an appropriate model is developed. Statistical hypothesis testing is then performed, leading to a conclusion on whether price changes depend on one another or are completely independent.

In finance, the main problem is the unstable nature of the observed time series and its heteroscedasticity, which makes it impossible to apply certain time series models. This study empirically investigates the forecasting performance of the generalized autoregressive conditional heteroscedastic (GARCH) model for NASDAQ-100 returns over a period of 6 years, a financial time series characterized by heteroscedasticity. Volatility forecasting performance is found to be significantly improved. Generally, the ARCH and GARCH models, along with their extensions, provide a statistical stage on which many theories of asset pricing, portfolio analysis, value at risk, or index volatility can be exhibited or tested. Volatility has been the subject of much research in financial markets, especially as an essential input to many financial decision-making models. Investment decisions strongly depend on the forecast of expected returns and volatilities of the assets. The introduction of the ARCH model created a new approach and has found application among financial econometricians, becoming a popular tool for volatility modeling and forecasting [4].


Also known econometric models for time series are the generalized autoregressive conditional heteroscedastic model and its exponential variant (EGARCH), but in comparative analyses in other papers they have proved less effective than NARX, so in this chapter they will not be considered or compared to the network [5].

Traditionally, the Box-Jenkins or autoregressive integrated moving-average (ARIMA) model has dominated time series forecasting; it includes the identification, evaluation, and checking of the suitability of the selected time series model. Although it is rather flexible, can be used for a large number of time series, and can model nonstationary series, its main limitation is the assumption of linearity: the model cannot explain nonlinear behavior, which is at the core of financial time series. The relationship between conventional statistical approaches and neural networks in this use is complementary. The neural network is not transparent and has a stochastic component; it should be trained several times, after which the average value is taken to see how stable the obtained solution is. Also, statistical predictive techniques have reached their limitations when it comes to nonlinearity in data, while neural networks are increasingly applied not only in prediction but also in classification and pattern recognition [6, 7].

## **2.1 NARX neural networks**


Neural networks are computer simulations programmed to learn on the basis of available data. They are used to solve a wide range of problems related to clustering, classification, pattern recognition, optimization, function approximation, and prediction. They are characterized by their layers (the input layer, the hidden layers, and the output layer) and by the connections between them. The number of these connections, along with the weight coefficients, represents the real power of the neural network. Input neurons accept information, while output neurons generate signals for specific actions [8].

The types of networks are grouped into five main classes:

• Single-layer feedforward networks

• Multilayer feedforward networks

• Radial basis function networks

• Simple recurrent networks, such as the Elman simple recurrent neural networks

• Self-organizing maps

Depending on the algorithm, the kind of propagation through the network is determined, in relation to the network type. Most important in this chapter is the hidden layer, whose number of nodes determines the complexity of the prediction model. The activation function is an indispensable part: it is what enables the neural network to learn nonlinear functions. Without nonlinearity, the network would be able to model only linear data dependencies.

By combining linear functions, another linear function is obtained, so it is advisable to choose a nonlinear activation function. The network compares the obtained and expected results and, if there are differences, modifies the neural connections in order to reduce the difference between the current and the desired output. During the learning process, the existing synaptic weights are corrected in order to get a better and more reliable output. The network is trained continuously, until the training samples no longer lead to a change in the coefficients. NARX neural networks are very often used as good and highly efficient predictors of time series. The structure of the NARX neural network is shown in **Figure 1**.
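
The claim that composing linear functions yields only another linear function can be checked numerically. The sketch below (plain NumPy, with arbitrary illustrative weights) shows that two stacked linear layers collapse into a single linear map, while inserting a tanh activation between them does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" as purely linear maps: W2 @ (W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# The composition collapses into a single linear map W @ x + b ...
W = W2 @ W1
b = W2 @ b1 + b2

x = rng.normal(size=3)
two_layer = W2 @ (W1 @ x + b1) + b2
one_layer = W @ x + b
assert np.allclose(two_layer, one_layer)  # no extra expressive power

# ... while a nonlinearity (tanh) between the layers breaks the collapse
nonlinear = W2 @ np.tanh(W1 @ x + b1) + b2
assert not np.allclose(nonlinear, one_layer)
```

Any nonlinear activation (sigmoid, ReLU, tanh) would serve the same purpose here.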

Previously, linear parametric models such as the autoregressive (AR), moving-average (MA), or autoregressive integrated moving-average models were used for predicting time series. They were not able to solve problems involving nonstationary signals or signals whose mathematical model is not linear. On the other hand, a neural network is a powerful tool for problems whose solutions require knowledge that is difficult to specify and express, but for which there is a sufficient representation in examples and practice.

The nonlinear autoregressive exogenous neural network is a dynamic neural architecture used to model nonlinear dynamic systems. It differs from the nonlinear autoregressive (NAR) network in that, besides the standard input, it has an additional time series of external data, which increases the accuracy of the prediction. For applications related to the prediction of time series, it is designed as a feedforward neural network with time delay (TDNN). The equation represented by the NARX model [8] is

$$y(t) = f(y(t-1), y(t-2), x(t-1), x(t-2)) \tag{1}$$

where $y(t)$ is the output of the NARX neural network with delays (2 lags) and $x(t)$ is the input of the NARX neural network with delays (2 lags).

In the NARX neural network model, a multilayer perceptron (MLP) is used. The task of the program is to learn how to assign the accurate output to new, unlabeled data. When the variables that need to be predicted are continuous, the problem is defined as regression. If the predicted values can only take a limited set of discrete values, the problem is defined as classification. Each time the network is trained, the results can give a different solution depending on the initial weights w and the bias values b.
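
As an illustration of Eq. (1), the following sketch builds the lagged regressor vector [y(t−1), y(t−2), x(t−1), x(t−2)] and fits f by ordinary least squares on a synthetic series. This is a deliberately simplified stand-in: in the actual NARX network, f is realized by a multilayer perceptron rather than a linear map, and the series here is made up for the demonstration.

```python
import numpy as np

def lagged_design(y, x, lags=2):
    """Build the NARX regressors [y(t-1), y(t-2), x(t-1), x(t-2)] for each t."""
    rows = [np.r_[y[t - lags:t][::-1], x[t - lags:t][::-1]]
            for t in range(lags, len(y))]
    return np.array(rows), y[lags:]

rng = np.random.default_rng(1)
x = rng.normal(size=500)                 # exogenous input series
y = np.zeros(500)
for t in range(2, 500):                  # a known linear NARX process to recover
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + 0.3 * x[t - 1] + 0.1 * x[t - 2]

X, target = lagged_design(y, x, lags=2)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 3))  # recovers [0.5, -0.2, 0.3, 0.1]
```

Replacing the least-squares fit with an MLP would turn this into the nonlinear model the chapter actually uses.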

## **2.2 Fourier transform**

Methods based on the Fourier transform have great application in all areas of science and engineering. The Fourier transform is used in signal processing, for solving differential equations, or for analyzing the dynamics of the market and the stock exchange, with the same possibilities. Among many other tools, one frequently used along with the transform is convolution, which is often applied in the same areas. It is known that it is not possible to define the product of two random distributions, and this is where convolution finds its application, especially in the field of finance (securities), when evaluating the necessary formulas.


**Figure 1.** *The structure of the NARX model (www.degruyter.com).*


A Fourier series represents a periodic function as an infinite sum of sine and cosine functions in the frequency domain, expressed below (Eq. (2)). The application to the pricing of options, which is uniquely determined by the characteristic functions within Fourier analysis, has been shown. For description, random stochastic Lévy processes are often mentioned in the fields of insurance and finance, as is the assumption of the Black-Scholes model that the price of the underlying follows the geometric Brownian motion model. This is precisely one of the disadvantages, given the assumption of constant volatility over time. It is difficult to determine whether these are really disadvantages or the market is simply inefficient, which is significant to investors as information about the risk protection they are trying to achieve:

$$g(t) = a_0 + \sum_{m=1}^{\infty} a_m \cos\left(2\pi m t/T\right) + \sum_{m=1}^{\infty} b_m \sin\left(2\pi m t/T\right) \tag{2}$$

However, the Fourier transform is rarely suitable for processing nonstationary signals, or those whose frequency content changes over time, where the periodic signal should be centered around integer multiples of the sampling frequency. In that case the signal is divided into smaller time segments, and the frequency content of each individual part is analyzed. For this purpose there is the wavelet transform, with the possibility of dilation and translation of the wavelet as the basic function of the transformation [9].
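
Eq. (2) can be verified with a minimal numerical check: sampling one period of a signal with known coefficients, the a_m and b_m are recovered from the FFT. This sketch uses NumPy's `fft` with the standard coefficient conventions, on a synthetic signal of our own choosing:

```python
import numpy as np

# Sample one period T of a simple periodic signal g(t)
T, N = 1.0, 256
t = np.arange(N) * T / N
g = 0.7 + 1.5 * np.cos(2 * np.pi * 3 * t) + 0.8 * np.sin(2 * np.pi * 5 * t)

# FFT bins map onto the series coefficients:
# a_0 = c_0 / N,  a_m = 2*Re(c_m)/N,  b_m = -2*Im(c_m)/N  (for m < N/2)
c = np.fft.fft(g)
a0 = c[0].real / N
a = 2 * c.real / N
b = -2 * c.imag / N

print(round(a0, 3), round(a[3], 3), round(b[5], 3))  # 0.7 1.5 0.8
```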

## **3. Data description and data analysis**

The six major traded Forex currency pairs are EUR/USD, GBP/USD, AUD/USD, USD/CAD, USD/JPY, and USD/CHF. In this chapter the EUR/USD pair was selected for the time series analysis, considering its share of the total trading volume (27%). Cross currency pairs, which do not include the US dollar, often have a smaller trading volume and larger spreads than the major currency pairs, so they are less suitable for analysis.

Unlike Forex, which is characterized by large oscillations, the stock market may better exhibit a trend that changes slowly over time. Based on this, it might be assumed that the S&P 500 index will show better features related to the prediction of the series.

Relevant historical currency pair data for more than 10 years have been downloaded from the website of Fusion Media Limited [10]. In the analysis of time series from the stock exchange, a representative index S&P 500 was used with the historical data downloaded from the website of Yahoo! Finance [11].

The collected data concern the prices (high, low, open, close) in the period from 2003 to September 2018, four prices for each day, of which the close price will be used in the analysis. The graph of the time series for the S&P 500 stock index in the time domain, showing returns based on 3950 observations in the period 31/12/2002–07/09/2018, is given in **Figure 2**.

After determining the returns and application of FFT (fast Fourier transform), the graph shown in **Figure 3** is plotted.
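
The preprocessing described here (close prices → returns → FFT) can be sketched as follows. The price list is a made-up placeholder, not the actual S&P 500 or EUR/USD data:

```python
import numpy as np

def returns_and_spectrum(close):
    """Daily simple returns from close prices, and the one-sided FFT
    amplitude spectrum of the (demeaned) returns."""
    close = np.asarray(close, dtype=float)
    r = np.diff(close) / close[:-1]           # one fewer point than prices
    spectrum = np.abs(np.fft.rfft(r - r.mean()))
    return r, spectrum

close = [100.0, 101.0, 100.5, 102.0, 101.2, 103.0, 102.5, 104.1]
r, spec = returns_and_spectrum(close)
print(len(r), len(spec))  # 7 returns, 4 one-sided frequency bins
```

On the real data, `close` would hold the 3950 (S&P 500) or 4093 (EUR/USD) daily close prices, and the spectrum would correspond to the frequency-domain plots in **Figures 3** and **5**.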

The time series graph for the EUR/USD currency pair in the time domain by observing the returns based on 4093 observations in the period 01/01/2003– 07/09/2018 is shown in **Figure 4**. After determining the returns and application of FFT (fast Fourier transform), the graph shown in **Figure 5** is plotted.


**Figure 2.** *Time series S&P500 in the time domain.*

**Figure 3.** *Time series S&P500 in the frequency domain.*

From **Figures 2** and **4**, the conclusion is that the time series of prices are not stationary, while the returns form stationary time series, as can be seen in **Figures 3** and **5**. It is also concluded that prices do not follow the normal distribution and deviate significantly from it, while returns have significantly better statistical characteristics.

In this case, the time series of returns are much closer to the normal distribution, but a distribution with fat tails occurs. This shows that unexpected events occur more often than under the normal distribution, which is characteristic of the analysis of financial data and forecasts.
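
The fat-tail effect can be illustrated with excess kurtosis, which is 0 for the normal distribution and positive for heavy-tailed ones. Here a Student t sample (a common stand-in for return distributions, not the chapter's actual data) is compared with a normal sample:

```python
import numpy as np

rng = np.random.default_rng(42)

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for normal data, positive for fat tails."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

normal = rng.normal(size=100_000)
fat_tailed = rng.standard_t(df=6, size=100_000)  # theoretical excess kurtosis 3

# The t sample shows clearly positive excess kurtosis, the normal one near zero
print(excess_kurtosis(normal) < excess_kurtosis(fat_tailed))
```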

A linear dependence that is very important to observe during the analysis of time series is autocorrelation. In general, there is doubt whether the explanatory variables are determined by a stochastic member or there is an exact linear dependence between the explanatory variables. The absence of autocorrelation means that the random errors are uncorrelated and that the covariance between them is equal to 0; there is then no pattern in the correlation structure of the random errors. Otherwise, if there is autocorrelation and the covariance differs from 0, the random errors are correlated and follow a recognizable pattern in movement. In this case the results of the statistical tests are biased, the confidence intervals are imprecise, and the prediction is unreliable. Autocorrelation can also be genuine, if it is a consequence of the nature of the data, or spurious, if the model is incorrectly set.

**81**

is incorrectly set.

shown in **Figures 6** and **7**, respectively.

is mature.

**Figure 5.**

**Figure 4.**

*Analysis of Financial Time Series in Frequency Domain Using Neural Networks*

in movement. In this case the results of the statistical tests are biased, the confidence intervals are imprecise, and the prediction is unreliable. Autocorrelation can also be accurate if it is a consequence of the nature of the data and false if the model

The Ljung-Box Q statistical test is significant for analyzing those time series in which autocorrelation is different from 0. Ideally, a series of errors should be a process with an independent random variable from the same distribution, and there is a white noise; however, often in the series of errors, there is a dependence. The greater absence of autocorrelation or its complete absence indicates that the market

The autocorrelation functions of the S&P 500 index and the EUR/USD currency pair are shown in **Figures 6** and **7**, respectively.

**Figure 6** shows the deviation of the autocorrelation values beyond the confidence interval for the first 2 lags, and therefore, in the network architecture, the default value 2 should be used as the time delay. Due to the lack of statistically significant autocorrelation in the data, the NARX neural network will be used for analyzing the time series.

*DOI: http://dx.doi.org/10.5772/intechopen.85885*


**Figure 4.** *Time series of the EUR/USD currency pair in the time domain.*

**Figure 5.** *Time series of the EUR/USD currency pair in the frequency domain.*


*Fourier Transforms - Century of Digitalization and Increasing Expectations*


**Figure 2.** *Time series S&P500 in the time domain.*

**Figure 3.** *Time series S&P500 in the frequency domain.*

**Figure 6.** *Autocorrelation function of returns for time series S&P 500.*

**Figure 7.** *Autocorrelation function of returns for time series EUR/USD.*


Observing the variances of random errors and how they differ across individual observations reveals the phenomenon of heteroscedasticity. The cause of this phenomenon may be specification errors, exclusion of an important regressor whose influence is then covered by the error, or the existence of extreme values in the sample. As a method of elimination, weighted least squares is applied: in the process of minimizing the sum of squared residuals, a smaller weight is given to those residuals that are greater in absolute value, and vice versa.

Engle's ARCH test shows whether heteroscedasticity is present or not. The test returned the value 1 for both time series, so the null hypothesis (the residual series does not show heteroscedasticity) is rejected, and it can be concluded that heteroscedasticity exists in both time series.
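Engle's test has a Lagrange-multiplier form: regress the squared residuals on their own lags and take n·R². The sketch below is an illustrative NumPy version of that statistic, not the MATLAB routine used in the chapter:

```python
import numpy as np

def arch_lm_stat(resid, q=1):
    """Engle's ARCH LM statistic: regress squared residuals on their
    first q lags; n * R^2 is approximately chi-squared with q degrees
    of freedom under the null of no ARCH effects."""
    e2 = np.asarray(resid, dtype=float) ** 2
    y = e2[q:]
    X = np.column_stack([np.ones(len(y))] +
                        [e2[q - k:-k] for k in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(len(y) * (1.0 - ss_res / ss_tot))
```

A statistic well above the chi-squared critical value, as for the volatility-clustered residuals typical of financial series, leads to rejecting the null of homoscedasticity.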



## **4. Development of the NARX network architecture**


In this section, a brief review of well-known and useful mathematical tools from the field of machine learning is presented. For predicting indexes and prices on Forex and stock exchanges, a NARX neural network architecture is developed. The input data are analyzed both in the time domain and in the frequency domain, the latter obtained by applying the Fourier transform to the historical data [12, 13].

The tool used is MATLAB® with a special set of functions known as the Neural Network Toolbox, applicable to finance. With the help of these functions, training, validation, and test sets can be generated from the original set with the corresponding percentage split. Then, several NARX networks are generated and trained on the training data. Subsequently, the networks are evaluated on the validation data in order to determine the network with appropriate behavior, and this behavior is predicted on the test set of data.

The NARX model can be implemented in many ways, but the simplest is developed by using a feedforward neural network with embedded memory plus a delayed connection from the output of the second layer to the input. In practice it was observed that forecasting of a time series is enhanced by analyzing related time series. A two-layered feedforward network is used, with a sigmoid function in the hidden layer; this is the most common form of transfer function, being nondecreasing and nonlinear. The linear transfer function is in the output layer. The neural network is shown in **Figure 8**.
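The "embedded memory plus delayed connection" can be made concrete by looking at the regressors such a network sees: at time t it is fed the last d values of the target series and of the exogenous series. A sketch of that input layout (the function name is ours; the Toolbox constructs this internally):

```python
import numpy as np

def narx_design_matrix(y, x, delays=2):
    """Build the regressor matrix for a NARX-style model: each row
    holds the last `delays` values of the target series y and of the
    exogenous series x, used to predict the next value of y."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    rows, targets = [], []
    for t in range(delays, len(y)):
        rows.append(np.concatenate([y[t - delays:t], x[t - delays:t]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)
```

With delays = 2, as suggested by the autocorrelation analysis above, each prediction uses the two most recent values of both series.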

The prediction method in the given experiment applies to changes in the exchange rate or changes in the stock exchange index over a certain period of time. The goal is to go beyond assumption and to notice a specific pattern of observations alongside the usual fluctuations. These fluctuations mean that a certain persistence or some kind of random variation occurred over a period of time. Finally, based on the data, a series with damped random fluctuations should be obtained, which indicates exactly the long-term trend present in the time series; it is then used to predict the future values of the time series.

Levenberg-Marquardt (LMA), a combination of gradient descent and the Gauss-Newton algorithm, is used as the learning algorithm, as opposed to Elman's recurrent networks, which use gradient descent with momentum. It is known as an advanced and fast algorithm for nonlinear optimization, whereby, unlike the Quasi-Newton algorithm, LMA does not need to compute the Hessian matrix, so it has significantly better performance. The Jacobian matrix, which contains the first derivatives of the network errors, is used instead; it is computed by the backpropagation algorithm, which is easier than calculating the Hessian matrix. It is necessary to reach the proximity of the minimal error function and get closer as soon as possible [14].

**Figure 8.** *The structure of two-layered feedforward network (www.mathworks.com).*
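The LMA update can be written compactly: solve (JᵀJ + λI)Δw = Jᵀe, where JᵀJ approximates the Hessian and the damping factor λ blends gradient descent (large λ) with Gauss-Newton (small λ). A generic sketch of one step, not the Toolbox's trainlm internals:

```python
import numpy as np

def levenberg_marquardt_step(J, e, lam):
    """One Levenberg-Marquardt update: solve
    (J^T J + lam * I) dw = J^T e, using J^T J as a cheap
    Gauss-Newton approximation of the Hessian."""
    JtJ = J.T @ J
    dw = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ e)
    return dw  # the weights are then updated as w_new = w - dw
```

In practice λ is decreased after a successful step and increased after a failed one, which is what makes the method fast near a minimum yet stable far from it.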

The data for analysis are divided in the following way: 70% training, 15% validation, and 15% test.

After training the network, the results are shown in **Figures 9–11**. The epoch represents the number of iterations during the training in which it was attempted to minimize the error function.

The network architecture is such that the initial number of hidden neurons is set to 10 with 2 time delays. The network is applied to returns instead of prices for both time series, observed in the time and frequency domains. The smallest mean squared error occurred in the third epoch and is 1.11455 × 10<sup>−4</sup>. It represents the deviation of the predicted value in relation to the actual value; the closer the number is to 0, the more accurate the results.

The training error is significantly higher than the error during testing, which means that the model did not overfit, as shown in **Figures 10** and **11**.

After ten consecutive training runs of the network, the smallest mean squared error appeared in the seventh epoch and is 1.11092 × 10<sup>−4</sup>. As in the analysis of the previous time series, the same training algorithm was used, and the subsets for training, validation, and testing were obtained with the same percentage split. The network architecture is identical, with a sigmoid function in the hidden layer and a linear function in the output layer. In the analysis of this time series, the smallest mean squared error occurred in the ninth epoch and is 3.71 × 10<sup>−5</sup>. It represents the deviation of the predicted values in relation to the actual values.

The first network, for the stock exchange index S&P 500, was tested as a feedforward network. The smallest MSE for training was 1.23081 × 10<sup>−4</sup>; for validation, 1.0336 × 10<sup>−4</sup>; and for testing, 1.1380 × 10<sup>−4</sup>. The network for the currency pair EUR/USD was also tested as a feedforward network. The smallest MSE was smaller than for the first network: 3.6199 × 10<sup>−5</sup> for training, 3.4246 × 10<sup>−5</sup> for validation, and 3.4792 × 10<sup>−5</sup> for testing.


**Figure 9.** *Mean squared error with best validation performance.*


**Figure 10.** *Histogram of time series errors for time series S&P 500*.

**Figure 11.** *Histogram of time series errors for time series EUR/USD.*

The algorithm is also trained on 70% of the data, evaluated on 15%, and tested on 15%. Each network consists of two layers: a hidden layer with ten neurons and a sigmoid transfer function, and an output layer with one neuron and a linear transfer function. In the second network, a smaller average mean squared error was detected than in the first one. Also, the standard deviation of the mean squared error for the second network is lower than for the first one for all three stages of training, validation, and testing, respectively. The results for each iteration and the summary of the mean squared error are presented in **Tables 1** and **2** for S&P 500.

The results for each iteration and summary of mean squared error are presented in **Tables 3** and **4** for EUR/USD currency pair, respectively.

Unlike the analysis of time series in the time domain, in the frequency domain it is interesting to consider the amplitude spectrum (the relative share of a certain frequency component relative to the others) of the historical prices for the stock index S&P 500 and the currency pair EUR/USD in several different aspects. These analyses include spectral analysis of time series, which is usually applied to stationary series. This is a good assumption for adjusted stock prices in frequency-domain statistics [15].




#### **Table 1.**
*Mean squared error—S&P 500.*

| Iteration | Train | Validation | Test |
|---|---|---|---|
| 1 | 1.3568 × 10<sup>−4</sup> | 1.1455 × 10<sup>−4</sup> | 1.1280 × 10<sup>−4</sup> |
| 2 | 1.3680 × 10<sup>−4</sup> | 1.1922 × 10<sup>−4</sup> | 8.7396 × 10<sup>−4</sup> |
| 3 | 1.3512 × 10<sup>−4</sup> | 1.1848 × 10<sup>−4</sup> | 1.1948 × 10<sup>−4</sup> |
| 4 | 1.2437 × 10<sup>−4</sup> | 1.0698 × 10<sup>−4</sup> | 1.6513 × 10<sup>−4</sup> |
| 5 | 1.2820 × 10<sup>−4</sup> | 1.0336 × 10<sup>−4</sup> | 1.5894 × 10<sup>−4</sup> |
| 6 | 1.2941 × 10<sup>−4</sup> | 1.5599 × 10<sup>−4</sup> | 1.2687 × 10<sup>−4</sup> |
| 7 | 1.2601 × 10<sup>−4</sup> | 1.3396 × 10<sup>−4</sup> | 1.3046 × 10<sup>−4</sup> |
| 8 | 1.2619 × 10<sup>−4</sup> | 1.0994 × 10<sup>−4</sup> | 1.5612 × 10<sup>−4</sup> |
| 9 | 1.2308 × 10<sup>−4</sup> | 1.1070 × 10<sup>−4</sup> | 1.7836 × 10<sup>−4</sup> |
| 10 | 1.2748 × 10<sup>−4</sup> | 1.1092 × 10<sup>−4</sup> | 1.3480 × 10<sup>−4</sup> |

#### **Table 2.**
*Summary—S&P 500.*

| | Train | Validation | Test |
|---|---|---|---|
| Min | 1.2308 × 10<sup>−4</sup> | 1.0336 × 10<sup>−4</sup> | 1.1380 × 10<sup>−4</sup> |
| Max | 1.3680 × 10<sup>−4</sup> | 1.5599 × 10<sup>−4</sup> | 8.7369 × 10<sup>−4</sup> |
| Average | 1.2923 × 10<sup>−4</sup> | 1.1841 × 10<sup>−4</sup> | 2.1569 × 10<sup>−4</sup> |
| Standard deviation | 4.9307 × 10<sup>−6</sup> | 1.5685 × 10<sup>−5</sup> | 2.3228 × 10<sup>−4</sup> |

#### **Table 3.**
*Mean squared error—EUR/USD.*

| Iteration | Train | Validation | Test |
|---|---|---|---|
| 1 | 3.6199 × 10<sup>−5</sup> | 3.7105 × 10<sup>−5</sup> | 4.1646 × 10<sup>−5</sup> |
| 2 | 3.7100 × 10<sup>−5</sup> | 3.7924 × 10<sup>−5</sup> | 3.8488 × 10<sup>−5</sup> |
| 3 | 3.8090 × 10<sup>−5</sup> | 3.6691 × 10<sup>−5</sup> | 3.7361 × 10<sup>−5</sup> |
| 4 | 3.7694 × 10<sup>−5</sup> | 3.4246 × 10<sup>−5</sup> | 3.8251 × 10<sup>−5</sup> |
| 5 | 3.6808 × 10<sup>−5</sup> | 3.7144 × 10<sup>−5</sup> | 3.8759 × 10<sup>−5</sup> |
| 6 | 3.8302 × 10<sup>−5</sup> | 3.5430 × 10<sup>−5</sup> | 3.4792 × 10<sup>−5</sup> |
| 7 | 3.7862 × 10<sup>−5</sup> | 3.4881 × 10<sup>−5</sup> | 3.7759 × 10<sup>−5</sup> |
| 8 | 3.6938 × 10<sup>−5</sup> | 3.7867 × 10<sup>−5</sup> | 3.7924 × 10<sup>−5</sup> |
| 9 | 3.8322 × 10<sup>−5</sup> | 3.7484 × 10<sup>−5</sup> | 3.6947 × 10<sup>−5</sup> |
| 10 | 3.8169 × 10<sup>−5</sup> | 3.5506 × 10<sup>−5</sup> | 3.5472 × 10<sup>−5</sup> |

#### **Table 4.**
*Summary—EUR/USD.*

| | Train | Validation | Test |
|---|---|---|---|
| Min | 3.6199 × 10<sup>−5</sup> | 3.4246 × 10<sup>−5</sup> | 3.4792 × 10<sup>−5</sup> |
| Max | 3.8302 × 10<sup>−5</sup> | 3.7924 × 10<sup>−5</sup> | 4.1646 × 10<sup>−5</sup> |
| Average | 3.7548 × 10<sup>−5</sup> | 3.6427 × 10<sup>−5</sup> | 3.7739 × 10<sup>−5</sup> |
| Standard deviation | 7.3840 × 10<sup>−7</sup> | 1.3108 × 10<sup>−6</sup> | 1.8784 × 10<sup>−6</sup> |

For the conversion to the frequency fk, it should be emphasized that, if daily prices are used as the input signal, the sampling frequency is equal to 1 [1/day], which means that the frequencies must be rescaled.


The unit of the new set of discrete frequencies is [1/day], which has the form of the real frequencies required in this analysis. Also, according to the sampling theorem, only those signal components having a frequency less than or equal to Fs/2 = 0.5 day<sup>−1</sup> can be measured without the aliasing effect. Considering these facts, it is necessary to limit the frequency coordinates to the range from 0 to 0.5.

In order to better understand the shape of the spectrum, a log-log scale is used: the logarithm of the amplitude values obtained after applying the FFT is taken. Observing the slope of such a curve shows whether the amplitude spectrum is close to the special power-law form 1/f. Using a logarithmic format is a good way to avoid overestimating high-frequency components.
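The frequency axis and log amplitude described above can be reproduced with NumPy's FFT helpers; with daily sampling (fs = 1), the one-sided axis ends exactly at fs/2 = 0.5 [1/day]. An illustrative sketch (the function name is ours):

```python
import numpy as np

def log_amplitude_spectrum(signal, fs=1.0):
    """One-sided log amplitude spectrum of a sampled series; for
    daily data (fs = 1 [1/day]) the frequency axis runs from 0 to
    fs/2 = 0.5 [1/day], as the sampling theorem requires."""
    signal = np.asarray(signal, dtype=float)
    amp = np.abs(np.fft.rfft(signal))           # modulus = amplitude
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, np.log(amp + 1e-12)           # offset avoids log(0)
```

Plotting the returned log amplitude against log(freqs) gives the log-log view in which a 1/f power law appears as a straight line.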

After applying the FFT to prices and returns, equivalent time series in the frequency domain are obtained. As in the above procedure, in order to better detect the spectrum, the modulus representing the amplitude was found, and the result was then logarithmized. The obtained values for the S&P 500 index and the EUR/USD currency pair were used to train the NARX neural network. The average mean squared error obtained after ten consecutive training runs is 1.5738 × 10<sup>−1</sup> and 4.8713 × 10<sup>−1</sup>, respectively, which is significantly higher than the error obtained in the time domain. The conclusion is that, regardless of the time series being analyzed, the results are significantly worse and the prediction is less reliable.

The simulation performed with the logarithmic value of the amplitude as input and the frequency as an exogenous input did not show the possibility of good training and convergence even after the maximum possible 1000 iterations, nor the corresponding statistical characteristics; hence, its analysis would make no sense.

Due to its wide practical application in various fields, Fourier transform is increasingly in the focus of international scientific meetings, as well as numerous publications (scientific monographs, journals, chapters, etc.), whether it is economics, biomedicine, chemical engineering, electronics, or art [16].

## **5. Various computational intelligence methods in finance**

Considering the domain in which one of the methods of computational intelligence is applied in this chapter, other methods are also often applied. Bankruptcy prediction is one of the main issues threatening many companies and governments, and a complex process consisting of numerous inseparable factors. Financial distress begins when an organization is unable to meet its scheduled payments or when the projection of future cash flows points to an inability to meet payments in the near future. The causes leading to business failure and subsequent bankruptcy can be divided into economic, financial, fraud, disaster, and others. With more accurate bankruptcy detection techniques, companies could take preventive measures in order to minimize the risk of falling into bankruptcy [17].

There are two dominant approaches to predicting bankruptcy. One uses multi-discriminant analysis, a univariate approach (net income to total debt has the highest predictive ability), and stochastic models such as logit and probit. The other uses artificial intelligence adapted for predicting bankruptcy (decision trees, fuzzy set theory, genetic algorithms, and support vector machines); neural networks such as BPNN (backpropagation-trained neural network), PNN (probabilistic neural networks), or SOM (self-organizing maps) can also be developed. In this paper, three LC models are tested on whether they are able to improve on the Altman Z-score as a benchmark model for bankruptcy prediction. Even though the LC method shows more accurate results, the Altman model behaves slightly better for gray-zone companies, where it is important to reduce the number of bankrupt firms identified as active.

In modern approaches it is necessary to introduce different approaches to modeling similarity, especially using IBA, with two main steps. The first is data preprocessing (data normalization, detection of attribute nature, and their potential interaction), where normalization functions may be adapted depending on data range and distribution. It is also recommended to use correlation to detect similar nature between attribute data, because a significant correlation in attribute data could overemphasize certain attributes and cause incoherent model results. The second is IBA similarity modeling (attribute-by-attribute comparison, comparison on the level of the object, and a general approach), which shows what kind of aggregation is appropriate for similarity modeling.

It has been shown that the IBA-based similarity framework has a solid mathematical background and can also be expanded to model nonmonotonic inference. Its practical advantage is evaluated on two numerical examples. The first example confirms the motivation and reasoning behind the novel object-level (OL) comparison, which matters when one of an object's attributes logically depends on, or can be compensated by, another attribute. In the second example, the proposed similarity framework is applied to predicting corporate bankruptcy with different KNN classifiers [18].
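A KNN classifier of the kind used in that second example only needs a similarity measure and a majority vote. In the sketch below, the default similarity (a plain attribute-by-attribute comparison on normalized data) merely stands in for the IBA-based measure of [18], and all figures are hypothetical:

```python
from collections import Counter

def knn_predict(train, labels, query, k=3, sim=None):
    """Majority vote among the k training objects most similar to `query`.
    `sim` returns a similarity in [0, 1]; the default below is a simple
    attribute-by-attribute comparison on [0, 1]-normalized data and is
    only a placeholder for an IBA-based similarity measure."""
    if sim is None:
        sim = lambda a, b: 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    # Rank training objects by decreasing similarity to the query.
    ranked = sorted(range(len(train)),
                    key=lambda i: sim(train[i], query), reverse=True)
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical normalized ratios (liquidity, profitability, leverage):
firms  = [[0.9, 0.8, 0.2], [0.8, 0.7, 0.3], [0.2, 0.1, 0.9], [0.3, 0.2, 0.8]]
status = ["active", "active", "bankrupt", "bankrupt"]
label = knn_predict(firms, status, [0.25, 0.15, 0.85])  # → "bankrupt"
```

Swapping `sim` for a logic-aware measure is the entire point of the IBA framework: the classifier itself stays unchanged while the notion of "alike" becomes sensitive to logical dependence between attributes.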

## **6. Conclusion**

The analysis of time series is a specific topic, indispensable in data science and statistical analysis. By combining such analysis with a tool like a neural network, especially in an increasingly important area such as finance, it can be expected to conquer new territories and have a global impact in the future. For those seeking financial protection from losses and safe investments without risky exposure, it is necessary to apply modern methods with continuous upgrading and improvement. Integrated with an existing platform with varied parameters and transactional data, this tool would be a good prerequisite for successful forecasting of trends and secure business.
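As stated in the abstract, the network was fed both raw time-domain values and their Fourier transform. A minimal, standard-library-only sketch of that frequency-domain preprocessing follows; the closing prices are hypothetical, not real EUR/USD quotes:

```python
import cmath
import math

def dft_magnitudes(series):
    """Naive discrete Fourier transform of a real-valued series.

    Returns the magnitude of each frequency component up to the Nyquist
    frequency (the spectrum of a real signal is symmetric, so the first
    half carries all the information)."""
    n = len(series)
    mags = []
    for k in range(n // 2 + 1):
        coeff = sum(series[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        mags.append(abs(coeff))
    return mags

# Hypothetical daily closing prices (illustration only):
closes = [1.10, 1.12, 1.11, 1.13, 1.12, 1.14, 1.13, 1.15]
spectrum = dft_magnitudes(closes)
# spectrum[0] equals the sum of the series (n times the mean);
# higher indices measure progressively faster oscillations.
```

In practice a fast Fourier transform (e.g. `numpy.fft.rfft`) replaces this O(n²) loop; the resulting magnitudes, or real and imaginary parts, then become the network's input features in place of the raw prices.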

The obtained results of the time series analysis confirmed the possibility of a good prediction. Better forecasting can be achieved for the Forex time series (EUR/USD) in the time domain, without applying the Fourier transform to the input data. In this sense, NARX proved to be a good method for this type of problem in the time domain, while in the frequency domain it is recommended that the analysis be carried out by a classical feedforward neural network with the backpropagation algorithm. The results of the research indicated that NARX is capable of providing a certain amount of security to the entities that invest their funds, as well as of pointing out future expectations. On the other hand, the results of this paper give only a proposal and advice on how to behave on the market during trading. One should always be cautious, given the already mentioned market variability. Timeliness is also important: when a particular piece of news arrives on the market, the market reacts with certain changes; the news is then incorporated into the price, and the market returns to the state it was in before the news arrived.

Proposals for the improvement of the neural network are:

• Include new input parameters that can be reached by new research, or prepare the data for training differently, to make sure of the credibility of this network in a dynamic environment.

• Change the number of neurons in the hidden layer, the time delay, or the activation function in the hidden and output layers.

• Use the network results as input to a new network, together with a change in the time period, which can give a broader picture of the trend of the observed currency pair or stock exchange index.

## **Conflict of interest**

The author declares that there are no conflicting interests.

## **Author details**

Stefan Nikolić 1,2\* and Goran Nikolić 3

1 Enetel Solutions, Roaming Solutions Group, Belgrade, Serbia

2 Faculty of Organizational Sciences, University of Belgrade, Belgrade, Serbia

3 Faculty of Technology, University of Niš, Leskovac, Serbia

\*Address all correspondence to: stefan.nikolic1995@live.com

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*DOI: http://dx.doi.org/10.5772/intechopen.85885*


*Fourier Transforms - Century of Digitalization and Increasing Expectations*


## **References**

[1] Wang M, Rees SJ, Liao SY. Building an online purchasing behaviour analytical system with neural network. In: Zanasi, Brebbia, Melli, editors. Data Mining III. WIT Press; 2002. ISBN: 1-85312-925-9

[2] Kazem A, Sharifi E, Hussain FK, Saberi M, Hussain OK. Support vector regression with chaos-based firefly algorithm for stock market price forecasting. Applied Soft Computing. 2013;**13**(2):947-958

[3] Kim K-j. Financial time series forecasting using support vector machines. Neurocomputing. 2003;**55**(1-2):307-319

[4] Dobrota M, Poledica A, Bulajić M, Petrović B. Modelling volatility using GARCH model: NASDAQ-100 application. In: XVIII International Conferences on Information and Communication Technologies; YU INFO; Kopaonik (Serbia), February 29 to March 03, 2012. Proceedings. 2012. pp. 18-23. ISBN: 978-86-85525-09-4

[5] Chaudhuri TD, Ghosh I. Artificial neural network and time series modeling based approach to forecasting the exchange rate in a multivariate framework. Journal of Insurance and Financial Management. 2016;**1**(5):92-123

[6] Oancea B, Ciucu ŞC. Time series forecasting using neural networks. 2014. arXiv preprint arXiv:1401.1333

[7] Peter TJ, Somasundaram K. An empirical study on prediction of heart disease using classification data mining techniques. In: IEEE-International Conference on Advances in Engineering, Science and Management (ICAESM-2012). 2012. pp. 514-518

[8] Menezes JMP Jr, Barreto GA. Long-term time series prediction with the NARX network: An empirical evaluation. Neurocomputing. 2008;**71**:3335-3343

[9] Radunović D. Talasići (Wavelets). Akademska Misao; 2015. p. 159. ISBN: 86-7466-190-4

[10] Fusion Media Limited. [Online]. Available: https://www.investing.com [Accessed: 10.09.2018]

[11] Yahoo! Finance. [Online]. Available: https://finance.yahoo.com. [Accessed: 10.09.2018]

[12] Kaastra I, Boyd MS. Forecasting futures trading volume using neural networks. Journal of Futures Markets. 1995;**15**(8):953-970

[13] Fadlalla A, Lin C-H. An analysis of the applications of neural networks in finance. Interfaces. 2001;**31**(4):112-122

[14] Ardalani-Farsa M, Zolfaghari S. Chaotic time series prediction with residual analysis method using hybrid Elman–NARX neural networks. Neurocomputing. 2010;**73**:2540-2553

[15] Izadi MH. Frequency-Based Analysis of Financial Time Series, Chapter I-III. Lausanne: School of Computer and Communication Sciences; 2009. pp. 1-39

[16] Nikolić GS, Cakić M, Cvetković D, editors. Fourier Transforms—High-Tech Application and Current Trends. InTech; 2017. Open Access Book, 11 chapters, 252 pages. ISBN: 978-953-51-2893-9. DOI: 10.5772/62751

[17] Poledica A, Marković D, Živančević S. Logical classification method for bankruptcy prediction. Data science and business intelligence. In: XV International Symposium SymOrg 2016, Zlatibor (Serbia), June 10-13, 2016. Symposium Proceedings. 2016. pp. 213-220. ISBN: 8676803269

[18] Milošević P, Poledica A, Rakićevic A, Dobrić V, Petrović B, Radojević D. IBA-based framework for modeling similarity. International Journal of Computational Intelligence Systems. 2017;**11**(1):206-218


**Chapter 6**

Fourier Transform in Ultrafast Spectroscopy

*Adrien A.P. Chauvet*

## **Abstract**

Laser technology makes it possible to generate femtosecond-long pulses of light. These light pulses can be used to learn about the molecules with which they interact. Consequently, pulsed laser spectroscopy has become an important tool for investigating and characterizing the electronic and nuclear structure of protein complexes. These spectroscopic techniques can be performed in either the time or the frequency domain. The two domains are linked by the Fourier Transform (FT), and thus FT plays a central role in optical spectroscopy. Ultimately, FT is used to explain how light behaves. It is used to explain spectroscopic techniques and enables the development of new ones. Finally, FT is used to process and analyze data. This chapter thus illustrates the centrality of FT in ultrafast optical spectroscopy.

**Keywords:** Fourier transform, ultrafast spectroscopy, pulsed laser, wave packet, molecular dynamics

## **1. Introduction**

The theoretical description of light and molecular motion using the Fourier Transform (FT) dates back a century, to the development of quantum mechanics and its famous relation to the uncertainty principle [1]. However, it is only since the early 1980s that FT found practical applications in molecular spectroscopy, thanks to the development of femtosecond pulsed lasers, which enabled the pioneering investigations of molecular dynamics in the femtosecond regime by Prof. Zewail [2]. Ever since, developments in ultrafast laser systems have been closely followed by the development of new spectroscopic techniques. For example, lasers are now able to generate high-harmonic radiation up to the soft X-ray regime, enabling spectroscopies with attosecond resolution [3].

These developments in lasers and spectroscopic techniques would, however, not be feasible without the use of FT. Indeed, time-resolved spectroscopy is the study of spectra (i.e., frequencies) over time. Thus, by linking the time domain to the frequency domain, FT provides the theoretical background to conceptualize the spectroscopic techniques. Furthermore, FT is used to describe short pulses of light as well as molecular motions, and how both light and molecules interact with each other. FT is consequently at the heart of ultrafast optical spectroscopy.

Optical spectroscopy is not the only type of spectroscopy that uses FT. The most well-known field that has been transformed by FT is probably that of nuclear magnetic resonance (NMR), where FT considerably reduced the acquisition time and improved the resolution, to the point of rendering non-FT NMR techniques obsolete. Similarly, FT enhances optical spectroscopies by increasing the data acquisition
