**4. A battery of diagnostic tests for model specifications**

If the model is correctly specified, the residuals must be white noise; that is, the standardized residuals should have zero mean and unit variance. Recall from **Table 1** above that, for the raw return series and its squares, we rejected the null hypothesis of no autocorrelation. If the models are properly specified, this autocorrelation should no longer be present in the standardized residuals and their squares. We therefore use Ljung-Box Q statistics on the standardized residuals (e<sub>t</sub>/h<sub>t</sub><sup>1/2</sup>) and the standardized squared residuals (e<sub>t</sub><sup>2</sup>/h<sub>t</sub>) to assess the adequacy of the models, where e<sub>t</sub> and h<sub>t</sub> denote the estimated error and conditional variance, respectively.
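The Ljung-Box check on the standardized residuals and their squares, together with the band of twice the asymptotic standard error used below for the individual autocorrelations, can be sketched in a few lines of Python. This is a minimal illustration with simulated white noise standing in for the fitted series; the function names and the data are ours, not the chapter's.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelations r_1 .. r_max_lag of a series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.sum(x**2)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic; chi-squared(max_lag) under the null of no autocorrelation."""
    T = len(x)
    r = acf(x, max_lag)
    return T * (T + 2) * np.sum(r**2 / (T - np.arange(1, max_lag + 1)))

# Simulated stand-in: z plays the role of the standardized residuals e_t / h_t**0.5.
rng = np.random.default_rng(0)
z = rng.standard_normal(1000)

band = 2.0 / np.sqrt(len(z))        # twice the asymptotic standard error 1/sqrt(T)
q_levels = ljung_box_q(z, 10)       # Q on the standardized residuals
q_squares = ljung_box_q(z**2, 10)   # Q on the standardized squared residuals
```

A correctly specified model should leave both Q statistics below the chi-squared critical value (18.31 at the 5% level for 10 lags) and every autocorrelation returned by `acf` inside the ±`band` interval.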

#### **Table 6.**

*Standard GARCH and TGARCH models.*

First, the kurtosis coefficients for the two models listed in **Table 6** are about fifty percent larger than that of the Gaussian distribution, although the figures are small enough for the models to remain acceptable. Second, model misspecification would be a concern if any sample autocorrelation (AC) or partial autocorrelation (PAC) coefficient exceeded twice its asymptotic standard error (ASE), computed as T<sup>-1/2</sup> = 0.044; **Table 6** contains no AC or PAC value close to that threshold. Third, this requirement is also fulfilled because, as **Table 6** shows, the kurtosis of the non-standardized residuals is more than twice that of the standardized residuals. Fourth, we apply the Pagan and Sabau [36] specification test, proposed in an unpublished paper and referred to here as the Keam-Pagan (K-P) test. In particular, the test takes the squared residuals from the mean equation and regresses them on a constant and the conditional variance (h<sub>t</sub>); Alles and Murray ([37], p. 140) include this test in their battery of diagnostic tests:

$$e_t^2 = c + b\,h_t + u_t \tag{9}$$
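Equation (9) is a plain OLS regression, so it can be run with a few lines of linear algebra. The sketch below uses simulated residuals and conditional variances in place of fitted GARCH output; the variable names and the data-generating choices are ours, and the point is only to show the mechanics: under correct specification the intercept should be near zero and the slope near one.

```python
import numpy as np

# Simulated stand-ins for fitted GARCH output (not the chapter's data):
# h is a conditional-variance series, e a residual series with that variance.
rng = np.random.default_rng(1)
T = 2000
h = 0.5 + 0.9 * rng.chisquare(2, T)
e = rng.standard_normal(T) * np.sqrt(h)

# K-P regression of Eq. (9): e_t**2 = c + b * h_t + u_t
X = np.column_stack([np.ones(T), h])
y = e**2
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
c_hat, b_hat = beta               # theory: c close to 0, b close to 1

resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()  # goodness of fit of Eq. (9)
```

In the chapter's setting it is the comparison of this R² across models that matters: a higher R² for the TGARCH regression than for the GARCH one signals a better fit to the asymmetries in the data.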

In fact, model (9) investigates how much of the variation in the unobserved true volatility (proxied by e<sub>t</sub><sup>2</sup>) can be explained by the conditional variance. Because the regressand and the regressor are, at least theoretically, the same quantity within the ARCH framework, the slope of equation (9) should ideally equal unity, with a zero intercept. The fit of the model can then be judged by R<sup>2</sup>. **Table 7** reports the results.

The findings of the Keam-Pagan (K-P) test in **Table 7** show that the evidence supports the theoretical expectations. First, the intercepts (labeled C) differ little from zero. Second, the slope coefficients are positive and large. Third, judging by their standard errors, both coefficients are statistically significant at less than the five percent level. Fourth, it is worth noting that the TGARCH has a higher R<sup>2</sup> than the standard GARCH, and R<sup>2</sup> measures a model's explanatory power. It should come as no surprise that the TGARCH captures the asymmetries in the data better than the standard GARCH does. This is indirect evidence of the overall superior performance of the Glosten et al. (1993) and Engle and Ng [34] models in capturing asymmetries in volatility. By the same token, the gap between the two sets of results provides subtle evidence that the traditional GARCH model struggles to capture the data's asymmetries.

**Table 7.** *Keam-Pagan (K-P) test.*

Finally, Diebold [38] suggests, among other things, that if the GARCH model is correctly specified, no ARCH effects should remain in the standardized residuals and their squares (the mean and variance equations, respectively). The test is a Lagrange multiplier test asymptotically equivalent to T·R<sup>2</sup>, where T is the sample size and R<sup>2</sup> the familiar coefficient of determination; the statistic follows a chi-squared distribution with K degrees of freedom. For the standard GARCH model the statistic (computed as T·R<sup>2</sup>) is 0.109 (p = 0.74), and for the TGARCH model it is 0.043 (p = 0.834). These insignificant statistics thus fail to reject the null hypothesis that no ARCH effects remain. In summary, our battery of diagnostic tests indicates that the models are correctly specified.

*An Econometric Investigation of Market Volatility and Efficiency: A Study of Small…*

*DOI: http://dx.doi.org/10.5772/intechopen.94119*

**5. Conclusions**

This chapter addresses the importance of stock market volatility to three constituencies: economists, investors, and policy makers. That volatility is an important phenomenon to these groups is illustrated by quotations from the current literature in financial economics. While much analytical attention has been paid to the volatility of large-cap stock indices, there has been little concern with the volatility of small-cap indices. At least three methodological problems, explored using the SmallCap (SC) 600 for analysis purposes, are described in this article.

The primary focus of the chapter is on three testable hypotheses. Hypothesis 1 is the proposition that SC 600 volatility cannot be forecasted. This hypothesis is refuted on the evidence that SC 600 volatility can be forecasted, in the same way as the volatility of other stock prices, by standard GARCH and TGARCH models. Hypothesis 2 is the proposition that the SC 600 is not empirically similar to other stock indices. The findings demonstrate that, in terms of the observable empirical regularities that govern the distribution of stock prices in general, the SC 600 exhibits the same statistical characteristics.

Finally, hypothesis 3 tests the proposition that the SC 600 cannot pass a rigorous test of market efficiency. This hypothesis is dismissed, which indicates that the SC 600 passes the efficient market hypothesis (EMH) test. Our findings may be seen as a starting point for further research on the behavior of other small-cap equity indices, particularly with respect to the EMH. They especially encourage a closer empirical study of the unresolved puzzle in investor perceptions.
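The T·R<sup>2</sup> Lagrange-multiplier check for remaining ARCH effects used in Section 4 can be sketched as an Engle-style LM regression: regress the squared residuals on a constant and their own K lags, and compare T·R² with a chi-squared(K) critical value. A minimal Python version with simulated inputs (the function name, lag choice, and data are ours, not the chapter's):

```python
import numpy as np

def arch_lm_stat(z, k):
    """T*R^2 from regressing z_t**2 on a constant and its first k lags.

    Under the null of no remaining ARCH effects the statistic is
    asymptotically chi-squared with k degrees of freedom
    (5% critical value 9.49 for k = 4).
    """
    z2 = np.asarray(z, dtype=float) ** 2
    n = len(z2)
    y = z2[k:]
    lags = [z2[k - j:n - j] for j in range(1, k + 1)]  # z2_{t-1} .. z2_{t-k}
    X = np.column_stack([np.ones(n - k)] + lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return len(y) * r2

# White noise versus a series with ARCH(1)-type volatility clustering:
rng = np.random.default_rng(2)
noise = rng.standard_normal(500)
arch = np.empty(500)
prev = 0.0
for t in range(500):
    prev = np.sqrt(1.0 + 0.5 * prev**2) * rng.standard_normal()
    arch[t] = prev

stat_noise = arch_lm_stat(noise, 4)  # should be small: no ARCH effects
stat_arch = arch_lm_stat(arch, 4)    # should be large: ARCH effects present
```

A small statistic (large p-value), as reported for both the GARCH and TGARCH residuals above, means the null of no remaining ARCH effects cannot be rejected.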
