**5. Basic assumptions testing**

Consider a series *R(i)*. Let this series be one "spatial" realization of a random process *y = R(d)*. For the analysis of this series, it is necessary to know whether some basic assumptions about the behavior of the underlying random process can be accepted or not. These basic assumptions are (Maisel, 1971):

- stationarity
- ergodicity
- independence


In fact, the realizations of the random process are *Rj(i)*, where the index *j* corresponds to individual realizations and the index *i* corresponds to the distance *di*. In the case of ensemble samples, the values *Rj(i)* for *i = const.* and *j = 1..M* are at disposal.

For these data, there is no problem using standard statistical analysis of univariate samples for the creation of the data distribution, e.g. the probability density function *p(R(i))*, or the computation of statistical characteristics such as the mean value *E(R(i))* or the variance *D(R(i))*. In the majority of applications, ensemble samples are not available and statistical analysis is based on one spatial realization *Rj(i)* for *j* = 1 and *i = 1..N*. For the creation of the data distribution and the computation of moments, some additional assumptions are necessary.

The basic assumption is stationarity. The random process is **strictly stationary** if all the statistical characteristics and distributions are independent of ensemble location. The **wide sense stationarity** of *g*-th order implies that the first *g* moments are independent of ensemble location.

The second order stationarity implies that:

- the mean value $E(R(d_i))$ is constant, and
- the covariance $\operatorname{cov}(R(d_i), R(d_j))$ is not dependent on the locations but on the lag $h = d_i - d_j$ only.
For example, the covariance is $\operatorname{cov}(R(d_i), R(d_{i+h})) = c(h)$. For an ergodic process, the "ensemble" mean can be replaced by the average across distance (from one spatial realization), and the autocorrelation $c(h) = 0$ for all sufficiently high *h*.

Ergodicity is very important, as the statistical characteristics can be calculated from one single series *R(i)* instead of ensembles, which are frequently difficult to obtain. Given a series *R(i)*, the selection of the appropriate approach for its analysis is not a trivial task because the mathematical background of the underlying process is unknown. Moreover, the *R(i)* are corrupted by noise and consist of a finite number of sample values. The task of analyzing real data is often to resolve the so-called inverse problem, i.e., given a series *R(i)*, how to discover the characteristics of the underlying process. Three approaches are mainly applied:

- the first based on random stationary processes,
- the second based on self-affine processes with multiscale nature,
- the third based on the theory of chaotic dynamics.
In reality the multiperiodic components are often mixed with random noise.

Before choosing the approach, some preliminary analysis is needed, mainly to test stationarity and linearity. This is important as some kinds of stochastic (self-affine) processes with a power-law shape of their spectrum may erroneously be classified as chaotic processes on the basis of some properties of their non-linear characteristics, e.g., correlation dimension and Kolmogorov entropy. In this sense, the tests for stationarity and linearity may be regarded as a necessary preprocessing step in order to choose an appropriate approach for further analysis. Prior to selecting any method for data analysis, some simple tests are useful to apply to the series *R(i)*. The first one may be to observe the *R(i)* distribution, e.g. via a histogram as a simple estimator of the probability density function (pdf), or by using a kernel density estimator (Meloun & Militký, 2011). The histogram of the series *R(i)* corresponding to the raw SHV trace of twill weave fabric (shown in fig. 4) is shown in fig. 14.
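The two density estimates mentioned above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: it uses numpy only, a simulated stand-in trace instead of the real SHV data, and Silverman's rule-of-thumb bandwidth (the chapter's "optimal bandwidth" may be chosen differently).

```python
import numpy as np

def gaussian_kde(x, grid, h):
    """Nonparametric kernel density estimate with a Gaussian kernel of bandwidth h."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
R = rng.normal(0.0, 0.036, size=1000)            # stand-in for a raw SHV trace

grid = np.linspace(R.min(), R.max(), 200)
h = 1.06 * R.std() * len(R) ** (-0.2)            # Silverman's rule-of-thumb bandwidth
pdf_kde = gaussian_kde(R, grid, h)
# Parametric alternative: Gaussian pdf with the sample mean and std
pdf_gauss = np.exp(-0.5 * ((grid - R.mean()) / R.std()) ** 2) / (R.std() * np.sqrt(2.0 * np.pi))
# A histogram, np.histogram(R, bins=30, density=True), plays the role of fig. 14.
```

Comparing `pdf_kde` against `pdf_gauss` makes departures from normality (such as the bimodality seen in fig. 14) visible without assuming any parametric form.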

Fig. 14. Histogram and pdfs of raw SHV for twill fabric

In this figure, the solid line corresponds to the Gaussian pdf with parameters mean = 0.000524 and standard deviation = 0.0358. The dotted line is a nonparametric kernel density estimate with optimal bandwidth h = 0.0243. The bimodality pattern is clearly visible.

In most methods for data processing based on stochastic models, a normal distribution is assumed. If the distribution is proved to be non-normal (according to some test or inspection), there are three possibilities:


It is suitable to construct the histograms for the four quarters of the data separately and inspect non-normality or asymmetry of the distribution. The statistical characteristics (means and variances) of these subseries can support the wide-sense stationarity assumption (when their values are statistically indistinguishable).
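The quarter-wise comparison just described can be sketched like this; a simulated stationary trace stands in for the real SHV series, so the thresholds below are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.0, 0.036, size=1000)            # stand-in stationary trace

quarters = np.array_split(R, 4)                  # four consecutive sub series
means = np.array([q.mean() for q in quarters])
variances = np.array([q.var(ddof=1) for q in quarters])
# For a wide-sense stationary series the four means and variances agree within
# sampling error; systematic differences between quarters indicate a trend.
```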

Woven Fabrics Surface Quantification 133


The simple nonparametric test of stationarity uses the reverse arrangement evaluation. The test is based on counting the number of times that *R(i) > R(j)* with *i < j*, for all *i*. If the sequence *R(i)* consists of independent identically distributed (i.i.d.) random variables, the number of reverse arrangements *NR* is a random variable with mean *E(NR) = N(N-1)/4* and variance *D(NR) = N(N-1)(2N+5)/72*. If the observed number *NR* is significantly different from *E(NR)*, nonstationarity (a trend) is indicated. For the rough SHV from fig. 4, the reverse arrangement test statistic is 2.328, while the upper limit for P = 95% is only 1.96. Stationarity is therefore not acceptable.
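A minimal sketch of this test, assuming numpy and the Kendall-type variance *N(N-1)(2N+5)/72*; the data are simulated, not the chapter's SHV trace.

```python
import numpy as np

def reverse_arrangement_test(R):
    """Count reverse arrangements (pairs i < j with R[i] > R[j]) and return
    (NR, z), where z is the standardized statistic; |z| > 1.96 rejects
    stationarity at the 95% level."""
    R = np.asarray(R)
    N = len(R)
    NR = sum(int(np.count_nonzero(R[i] > R[i + 1:])) for i in range(N - 1))
    mean = N * (N - 1) / 4.0
    var = N * (N - 1) * (2 * N + 5) / 72.0       # Kendall-type variance
    return NR, (NR - mean) / np.sqrt(var)

rng = np.random.default_rng(2)
NR_iid, z_iid = reverse_arrangement_test(rng.normal(size=400))   # i.i.d.: small |z|
NR_tr, z_tr = reverse_arrangement_test(np.arange(400.0))         # monotone trend: large |z|
```

An increasing trend produces no reverse arrangements at all, so its standardized statistic is far outside the ±1.96 band.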

The alternative "run test" can detect a monotonic trend in a series *R(i)*, *i = 1..N*. A "run" is defined as a sequence of identical observations that is followed or preceded by a different observation or no observation at all. First, the median *med(R)* of the observations *R(i)* is evaluated and a new series *z(i)* is derived from *R(i)* as

$$z(i) = 0 \quad \text{if } R(i) < med(R)$$

$$z(i) = 1 \quad \text{if } R(i) \ge med(R)$$

Then the number of runs in *z(i)* is computed. If *R(i)* is a stationary random process, the number of runs *NT* is a random variable with mean *E(NT) = N/2 + 1* and variance *D(NT) = (N(N - 2))/(4(N - 1))*. If the observed number of runs *NT* is significantly different from *E(NT)*, nonstationarity due to a possible trend is indicated. For the rough SHV from fig. 4, the run test statistic is 18.14, while the upper limit for P = 95% is only 1.96. Stationarity is here not acceptable.
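The run test above can be sketched as follows (numpy assumed, simulated data in place of the SHV series):

```python
import numpy as np

def run_test(R):
    """Median run test: build z(i), count the runs NT, and return (NT, z) with
    the standardized statistic; |z| > 1.96 rejects stationarity at 95%."""
    R = np.asarray(R)
    z01 = (R >= np.median(R)).astype(int)
    NT = 1 + int(np.count_nonzero(np.diff(z01)))   # runs = 1 + number of switches
    N = len(R)
    mean = N / 2.0 + 1.0
    var = N * (N - 2.0) / (4.0 * (N - 1.0))
    return NT, (NT - mean) / np.sqrt(var)

rng = np.random.default_rng(3)
NT_iid, z_iid = run_test(rng.normal(size=400))        # i.i.d. data: many runs
NT_tr, z_tr = run_test(np.linspace(0.0, 1.0, 400))    # monotone trend: only 2 runs
```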

A very simple check for the presence of first-order autocorrelation is the creation of the zero order variability diagram, which is a plot of *R(i+1)* against *R(i)*. In the case of independence, a random cloud of points appears on this graph. First-order autocorrelation is indicated by a linear trend.
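Numerically, the linear trend in the variability diagram is just the lag-one correlation; a minimal sketch with simulated white-noise and AR(1) data (both assumed, not from the chapter):

```python
import numpy as np

def lag1_corr(R):
    """Correlation between R(i+1) and R(i) -- the linear trend that the
    zero order variability diagram reveals visually."""
    R = np.asarray(R)
    return float(np.corrcoef(R[:-1], R[1:])[0, 1])

rng = np.random.default_rng(4)
white = rng.normal(size=2000)          # independent data: round cloud, corr near 0
ar = np.zeros(2000)                    # AR(1) data: elongated cloud, corr near 0.8
for t in range(1, 2000):
    ar[t] = 0.8 * ar[t - 1] + rng.normal()
```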

For testing the independence hypothesis against the periodicity alternative, the cumulative periodogram can be constructed. The cumulative periodogram is an unbiased estimate of the integrated spectrum

$$C(f_i) = \frac{\sum_{j=1}^{i} I(f_j)}{N\, s^2} \tag{4}$$

The function *C(fi)* is called the normalized cumulative periodogram (the construction of *I(fi)* is described in par. 7). For a white noise series (i.i.d. normally distributed data), the plot of *C(fi)* against *fi* would be scattered about a straight line joining the points (0, 0) and (0.5, 1). Periodicities would tend to produce a series of neighboring values of *I(fi)* which are large; periodicities therefore appear as bumps on the expected line. The limit lines for the 95% confidence interval of *C(fi)* are drawn at distances $\pm 1.36/\sqrt{(N-2)/2}$. For the rough SHV from fig. 4, the cumulative periodogram is shown in fig. 15.

Fig. 15. Cumulative periodogram of raw SHV for twill fabric

It is visible that the raw SHV is approximately periodic.
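The construction of eq. (4) can be sketched with an FFT periodogram; this is an assumed implementation (numpy, simulated white noise), with interior frequency bins doubled so that *C* reaches 1 at *f* = 0.5 as in the two-sided convention of eq. (4).

```python
import numpy as np

def cumulative_periodogram(R):
    """Normalized cumulative periodogram C(f_i) of eq. (4); for white noise the
    points lie close to a straight line from (0, 0) to (0.5, 1)."""
    R = np.asarray(R, dtype=float)
    R = R - R.mean()
    N = len(R)
    I = np.abs(np.fft.rfft(R)) ** 2 / N      # periodogram ordinates I(f_j)
    I[1:-1] *= 2.0                           # fold in negative frequencies (N even)
    f = np.fft.rfftfreq(N)                   # f = 0 .. 0.5
    C = np.cumsum(I[1:]) / (N * R.var())     # N * s^2 in the denominator of (4)
    return f[1:], C

rng = np.random.default_rng(5)
f, C = cumulative_periodogram(rng.normal(size=1024))
band = 1.36 / np.sqrt((1024 - 2) / 2.0)      # 95% limit lines around the diagonal
```

Points of `C` wandering outside the `band` around the diagonal indicate periodic components.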

#### **6. Aggregation principle**

In the unevenness analysis, it is common to aggregate raw data. This is equivalent to cutting the material into pieces and measuring the variability between pieces only. In the case of roughness, aggregation is a tool for smoothing roughness profiles and suppressing local (small scale) roughness. The principle of aggregation is the joining of the original data *R(i)* into non-overlapping blocks, or the application of a window of length *L*. By using aggregation, the resolution is decreased and a roughness profile without local roughness variation is created. By averaging the original data *Ri* = *R(i)* in non-overlapping blocks of *L* values, the aggregated series are constructed. The aggregated series $R^{(L)}(i)$ are created according to the relation

$$R^{(L)}(i) = \frac{1}{L}\left(R(iL - L + 1) + \dots + R(iL)\right) \quad L = 1,\ 2,\ 3\dots \tag{5}$$
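Eq. (5) is a block-mean operation; a minimal numpy sketch (toy data, with the incomplete tail block dropped, an assumption eq. (5) leaves implicit):

```python
import numpy as np

def aggregate(R, L):
    """Aggregated series R^(L)(i) of eq. (5): means of non-overlapping blocks of L values."""
    R = np.asarray(R, dtype=float)
    n = (len(R) // L) * L                    # drop an incomplete tail block, if any
    return R[:n].reshape(-1, L).mean(axis=1)

R = np.arange(1.0, 13.0)                     # toy profile 1, 2, ..., 12
R2 = aggregate(R, 2)                         # pair means: 1.5, 3.5, ..., 11.5
R3 = aggregate(R, 3)                         # triple means: 2, 5, 8, 11
```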

For the rough SHV from fig. 4, the aggregated series for aggregation lengths *L* = 2 and 10 are shown in fig. 16.

It is known (Beran, 1984; Cox, 1984) that the variance $v^{(L)}$ of the aggregated series is connected with the autocorrelation structure of the original series

$$v^{(L)} = \frac{v}{L} + \frac{2}{L^{2}}\sum_{s=1}^{L-1}\sum_{h=1}^{s} c(h) \tag{6}$$

Here *c(h)* is the autocorrelation function defined as $c(h) = \operatorname{cov}(R(i), R(i+h))$, where the lag *h* corresponds to the distance $h^{*} = L\,d_i$. Very important is the lag-one autocorrelation function for the aggregated series

$$r^{(L)}(1) = 2\frac{v^{(2L)}}{v^{(L)}} - 1 \tag{7}$$
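Eq. (7) can be checked numerically: for white noise $v^{(L)} = v/L$, so the lag-one autocorrelation of the aggregated series should be near zero. A sketch under that assumption (numpy, simulated data):

```python
import numpy as np

def aggregate(R, L):
    R = np.asarray(R, dtype=float)
    n = (len(R) // L) * L
    return R[:n].reshape(-1, L).mean(axis=1)

def r_lag1(R, L):
    """Lag-one autocorrelation of the aggregated series via eq. (7):
    r^(L)(1) = 2 * v^(2L) / v^(L) - 1."""
    vL = aggregate(R, L).var()
    v2L = aggregate(R, 2 * L).var()
    return 2.0 * v2L / vL - 1.0

rng = np.random.default_rng(6)
white = rng.normal(size=20000)
r = r_lag1(white, 5)        # white noise: v^(L) = v/L, so r^(L)(1) is near 0
```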



Fig. 16. Aggregate series ( L =2 and 10) for twill fabric

The nature of the original random series can be explained by using characteristics of the aggregated series. There are three main groups of series:

1. Series of random independent identically distributed (i.i.d.) variables. In this case all *c(h) = 0* for lags *h* = 1, 2, … and the data are uncorrelated. This is the ideal case for roughness analysis and it is implicitly assumed as valid in the computation of basic geometric characteristics.
2. Short-range dependent stationary processes. In this case the sum of all *c(h)*, *h* = 1, 2, … is convergent.
3. Long-range dependent stationary processes. In this case the sum of all *c(h)*, *h* = 1, 2, … is divergent.


For short-range dependent stationary processes, the first order autocorrelation $r^{(L)}(1) \to 0$ for $L \to \infty$. The same is valid for autocorrelations of all lags *h*. The aggregated series $R^{(L)}(i)$ therefore tends to second order pure noise as $L \to \infty$. For large *L*, the variance $v^{(L)} \approx v/L$. The autocorrelation structure of the aggregated series is decreased until the limit of no correlation. Typical models of short-range processes are autoregressive moving average processes of finite order. For higher *L*, the data approach the i.i.d. case.

For long-range dependent processes, the variance satisfies $L\,v^{(L)} \to \infty$ as $L \to \infty$.

Then the autocorrelation structure does not vanish. For these processes, it is valid for sufficiently large *L* that

$$
c(h) \approx h^{-\beta} \text{ and } v^{(L)} \approx L^{-\beta} \tag{8}
$$

where $0 < \beta < 1$ is valid for stationary series. For the non-stationary case, $\beta$ can be outside of this interval. For long-range processes, the correlation structure is identical for the original and aggregated series. For strictly second order self-similar processes,


$$
c(h) \approx \frac{v}{2}(1-\beta)(2-\beta)\, h^{-\beta} \tag{9}
$$

For higher *L*, the correlation structure remains the same and the i.i.d. assumption cannot be used. Instead, the so-called Hurst exponent $H = 1 - 0.5\beta$ is frequently used. *H* = 0 denotes a series of extreme irregularity and *H* = 1 denotes a smooth series.

For the rough SHV from fig. 4, the dependence of $\log v^{(L)}$ on the aggregation length *L* is shown in fig. 17.

Fig. 17. Dependence of $\log v^{(L)}$ on *L* for twill fabric

It is clear that for higher *L* this dependence is scattered and the corresponding slope is over 1. Long range dependency is characteristic for self-affine processes as well. Self-similar processes are characterized by the fractal dimension *FD*. For self-affine processes, the local properties are reflected in the global ones, resulting in the well-known relationship *H + FD = 2*. Long-memory dependence, or persistence, is associated with the case *H ∈ (0.5, 1)* and linked to smooth curves with low fractal dimensions. Rougher curves with higher fractal dimensions occur for antipersistent processes with *H ∈ (0, 0.5)*.
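The variance-versus-*L* estimation of $\beta$ (and hence *H*) behind fig. 17 can be sketched as a least-squares fit of $\log v^{(L)}$ against $\log L$, per eq. (8). This is an assumed implementation on simulated white noise, where $v^{(L)} = v/L$ gives $\beta = 1$ and *H* = 0.5.

```python
import numpy as np

def aggregate(R, L):
    R = np.asarray(R, dtype=float)
    n = (len(R) // L) * L
    return R[:n].reshape(-1, L).mean(axis=1)

def beta_from_variance_plot(R, Ls=(1, 2, 4, 8, 16, 32)):
    """Fit log v^(L) = const - beta * log L (eq. 8) and return beta.
    White noise has v^(L) = v / L, i.e. beta = 1 and H = 0.5."""
    logL = np.log(np.array(Ls, dtype=float))
    logv = np.log(np.array([aggregate(R, L).var() for L in Ls]))
    slope = np.polyfit(logL, logv, 1)[0]
    return -slope

rng = np.random.default_rng(7)
beta = beta_from_variance_plot(rng.normal(size=50000))
H = 1.0 - 0.5 * beta         # Hurst exponent per the relation in the text
```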

If $c(h) \approx h^{-\beta} \to 0$ for $h \to \infty$, the process has the so-called fractal dimension $FD = 2 - \beta/2$. Generally, the *l*-th central moment of the aggregated long range dependent series is defined as

$$M_{l}^{(L)} = \frac{1}{N/L} \sum_{k=1}^{N/L} \left| y^{(L)}(k) - \overline{y} \right|^{l} \tag{10}$$

The $M_l^{(L)}$ asymptotically behaves like the power function $M_l^{(L)} \approx L^{\,l(H-1)}$. If the series has finite variance and no long-range dependence, then *H* = 0.5 and the slope of the fitted line in a log-log plot of $M_l^{(L)}$ on *L* should be *-l/2*. It is assumed that both *N* and *N/L* are large. This ensures that both the length of each block and the number of blocks are large. In practice, the points at the very low and high ends of the plot are not used for fitting the least squares line. Indeed, short-range effects can distort the estimates of *H* if the low end of the plot is used.
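The moment scaling of eq. (10) can be checked numerically; the sketch below (numpy, simulated i.i.d. data, an assumed setup) recovers the predicted slope of about *-l/2* for *l* = 2.

```python
import numpy as np

def M_l(R, L, l):
    """l-th absolute central moment of the aggregated series, eq. (10)."""
    R = np.asarray(R, dtype=float)
    n = (len(R) // L) * L
    y = R[:n].reshape(-1, L).mean(axis=1)    # aggregated series y^(L)
    return float(np.mean(np.abs(y - R.mean()) ** l))

rng = np.random.default_rng(8)
R = rng.normal(size=100000)
Ls = [4, 8, 16, 32, 64]                      # middle of the range, per the text
l = 2
slope = np.polyfit(np.log(Ls), np.log([M_l(R, L, l) for L in Ls]), 1)[0]
# No long-range dependence: M_l^(L) ~ L^(l(H-1)) with H = 0.5, slope about -l/2
```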


Because the basic output from RCM is a set of "slices" (roughness profiles in the cross direction at selected positions in the machine direction), it is possible to compute all profile roughness characteristics separately for each slice and show the differences between slices. Another possibility is to use the reconstructed surface roughness plane for evaluation of planar roughness.

**7. Classical roughness characteristics**

There are two reasons for measuring surface roughness. The first is to control manufacture; the second is to help ensure that the products perform well. In the textile branch, the former is the case of special finishing (e.g. pressing or ironing), while the latter is connected with comfort, appearance and hand.

From a general point of view, rough surfaces display processes which have two basic geometrical features:

- Random aspect: the rough surface can vary considerably in space in a random manner, and subsequently there is no spatial function able to describe the geometrical form.
- Structural aspect: the variances of roughness are dependent with respect to their spatial positions and their correlation depends on the distance. Especially the surface of textile weaves is characterized by nearly repeating patterns, and therefore some periodicities are often identified.

The random part of roughness can be suppressed by proper smoothing. In this case only the structural part will be evaluated.

From the individual roughness profiles, it is possible to evaluate a lot of roughness parameters. Classical roughness parameters are based on the set of points $R(d_j)$, *j = 1..N* (SHV), defined in the sample length interval $L_s$. The distances $d_j$ are usually selected as equidistant, and then $R(d_j)$ can be replaced by the variable $R_j$. For identification of positions in the length scale, it is sufficient to know that the sampling distance $d_s = d_j - d_{j-1} = L_s/N$ for *j > 1*. The standard roughness parameters used frequently in practice are (Anonym, 1997):

i. Mean Absolute Deviation *MAD*. This parameter is equal to the mean absolute difference of surface heights from the average value *(Ra)*. For a surface profile this is given by

$$MAD = \frac{1}{N} \sum_{j=1}^{N} \left| R_j - \overline{R} \right| \tag{15}$$

This parameter is often useful for quality control and textile roughness characterization (called SMD (Kawabata, 1980)). However, it does not distinguish between profiles of different shapes. Its properties are known for the case when the $R_j$ are independent identically distributed (i.i.d.) random variables. For the rough SHV from fig. 4, the dependence of SMD on the aggregation length *L* is shown in fig. 19.

ii. Standard Deviation (Root Mean Square) Value *SD*. This characteristic is given by

$$SD = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left( R_j - \overline{R} \right)^{2}} \tag{16}$$
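Both parameters are one-liners in numpy; a minimal sketch on a toy profile (assumed data, not the SHV trace):

```python
import numpy as np

def mad(R):
    """Mean Absolute Deviation, eq. (15) -- the SMD-type roughness parameter."""
    R = np.asarray(R, dtype=float)
    return float(np.mean(np.abs(R - R.mean())))

def sd(R):
    """Standard deviation (root mean square) value, eq. (16)."""
    R = np.asarray(R, dtype=float)
    return float(np.sqrt(np.mean((R - R.mean()) ** 2)))

profile = np.array([0.0, 1.0, 0.0, -1.0] * 25)    # toy periodic profile, mean 0
```

For this profile MAD = 0.5 while SD = √0.5 ≈ 0.707, illustrating that the two parameters weight deviations differently even though neither distinguishes profile shape.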


One of the best methods for the evaluation of $\beta$ or *H* is based on the power spectral density

$$g(\omega) = \frac{1}{2\pi} \sum_{h=-\infty}^{\infty} c(h) \exp(-i\, h\, \omega) \quad -\pi < \omega < \pi \tag{11}$$

For small frequency range, it is valid that

$$g(\omega) \approx \omega^{-(1-\beta)} \quad \omega \to 0 \tag{12}$$

and for very high frequency range

$$g(\omega) \approx \omega^{-1-\alpha} \quad \omega \to \infty \tag{13}$$

The parameters $\beta$ and $\alpha$ (or *H*) are evaluated from an empirical linear representation of the dependence of the log of the power spectral density (PSD) on log frequency in a suitable range. The parameter $\beta$ is often evaluated from an empirical representation of the log of the power spectral density

$$\log(g(\omega)) = -(1 - \beta)\log(\omega) + a_0 + a_1\,\omega + \dots + a_p\,\omega^{p} \tag{14}$$

For long-range processes, it is ideal to have all $a_j = 0$ except $a_0$.

For the rough SHV from fig. 4, the dependence of $\log(g(\omega))$ on log frequency is shown in fig. 18.

Fig. 18. Dependence of $\log(g(\omega))$ on log frequency for twill fabric

It is visible that the scatter of the data is very big. The solid line in fig. 18 is the regression line created for the low frequency range data set. The slope is equal to -0.2831, the corresponding $\beta$ = 0.7169, and the Hurst exponent is 0.6416.
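The low-frequency fit behind fig. 18 can be sketched with a raw FFT periodogram as the PSD estimate; this is an assumed implementation (numpy, simulated white noise, and a hypothetical `f_max` cutoff for the "suitable range"), not the chapter's procedure.

```python
import numpy as np

def beta_from_psd(R, f_max=0.05):
    """Estimate beta from the low-frequency slope of the log periodogram,
    log g(w) ~ -(1 - beta) * log w, per eqs. (12) and (14) with a_1..a_p = 0."""
    R = np.asarray(R, dtype=float)
    R = R - R.mean()
    N = len(R)
    I = np.abs(np.fft.rfft(R)) ** 2 / N          # raw periodogram as PSD estimate
    f = np.fft.rfftfreq(N)
    m = (f > 0) & (f <= f_max)                   # low frequency range only
    slope = np.polyfit(np.log(f[m]), np.log(I[m]), 1)[0]
    return 1.0 + slope                           # slope = -(1 - beta)

rng = np.random.default_rng(9)
beta = beta_from_psd(rng.normal(size=4096))      # flat spectrum: beta near 1
H = 1.0 - 0.5 * beta
```

The large scatter noted in the text is inherent: raw periodogram ordinates are exponentially distributed around the true spectrum, so the fitted slope is noisy unless many low-frequency ordinates are used or the periodogram is smoothed first.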
