**4. Data analysis**

During the data analysis procedures, this research first conducted an initial reliability analysis and validity analysis. Principal component analysis with the varimax rotation method was then adopted within the factor analysis.

**4.1. Reliability analysis**

Reliability is the extent to which a question yields the same responses over time, or a scale produces consistent results when repeated measurements are made [36, 40]. Any summated scale should first be analysed for reliability to ensure its appropriateness before proceeding to an assessment of its validity [41]. In this research, reliability was assessed using internal consistency analysis [36]. The earliest and simplest measure of the internal consistency of a set of items is the split-half reliability of the scale [40, 41]. In assessing split-half reliability, the total set of items is divided into two equivalent halves; if the scale is reliable, the two halves should produce similar scores [42, 43]. The total scores for the two halves are then correlated, and this correlation is taken as the measure of the reliability of the survey [42, 43].

In practice, the standard approach to assessing internal consistency is the coefficient alpha (also known as 'the reliability coefficient') or Cronbach's alpha, popularised in a 1951 article by Cronbach and based on work in the 1940s by Guttman and others [32, 36, 42, 44, 45]. It is the most common measure of the internal consistency of items in a scale and the most commonly applied estimate of a survey's reliability [40, 43]. It provides a summary measure of the inter-correlations that exist among a set of items [40, 42, 45, 46].

At the initial reliability analysis stage, Cronbach's alpha should be considered the critical characteristic. Cronbach's alpha varies from 0 to 1 [40, 46]. The higher the correlations among the items, the greater the value of Cronbach's alpha, implying that high scores on one question are associated with high scores on other questions [36]. If the value of Cronbach's alpha is low and the item pool is sufficiently large, this suggests that some items do not equally share in the common core and should be eliminated [42]. Research also indicates that a value between 0.80 and 0.95 is considered very good reliability, between 0.70 and 0.80 good reliability, between 0.60 and 0.70 fair reliability, and below 0.60 poor reliability [43].

In the test of reliability in this research, Cronbach's alpha is 0.837. The results showed strong evidence of meeting the reliability standards of exploratory research and are considered to have very good reliability.

**4.2. Validity analysis**

The validity of a survey instrument may be defined as the extent to which it accurately measures what it is supposed to measure [36, 40]. There are three basic approaches to establishing validity, namely content validity, criterion validity, and construct validity [36, 40, 43].

#### *4.2.1. Content validity*

Content validity, sometimes called face validity, measures the extent to which a survey's content logically appears to reflect what was intended to be measured [43]. It typically involves a systematic review of the survey's contents to ensure that it includes everything it should and nothing that it should not [46]. Although there does not yet exist a scientific measure of the content validity of a survey instrument [46], content validity is often assessed practically through approaches such as focus groups and/or pilot test studies [41, 46, 47].

In this research, content validity was assessed subjectively but systematically to establish the appropriateness of the variables used; items not considered appropriate were rejected on the basis of two focus group studies (one in each of Australia and China) and 20 pilot tests (10 in each country).

#### *4.2.2. Criterion validity*

Criterion validity addresses the ability of a measure to correlate with other standard measures of similar established criteria [43]. Criterion validity may be classified as either concurrent or predictive, depending on the time sequence in which the new measurement scale and the criterion measure are correlated [40, 43].


However, no method for assessing criterion validity is foolproof, and none can conclusively show whether a concept truly measures what it should [36]. As the concern is more about the validity of the use of the survey instrument than about its inherent validity [36], most researchers appear to rely more commonly on construct validity, as discussed below. In this research, criterion validity analysis was not conducted, as no similar established surveys were available for comparison and the measure is not being used to predict a future event.

#### *4.2.3. Construct validity*

Construct validity addresses the question of what construct or characteristic the survey is measuring and how an instrument 'behaves' when it is used [40, 46]; it is commonly examined in terms of convergent, discriminant, and nomological validity.


In this research, convergent validity was in effect assessed through the reliability analysis, and discriminant validity was indirectly established through the factor analysis that follows. Nomological validity was not analysed, as no similar established relationships appeared to exist in the literature.
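As a concrete illustration of the two internal-consistency measures discussed in Section 4.1 (and thus of the convergent-validity check above), the sketch below computes split-half reliability with the Spearman-Brown correction and Cronbach's alpha. The item matrix is synthetic and illustrative only; it is not the survey data used in this research.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summated scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_reliability(items: np.ndarray) -> float:
    """Correlate totals of odd vs. even items, then apply Spearman-Brown."""
    half1 = items[:, 0::2].sum(axis=1)
    half2 = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)                      # Spearman-Brown correction

# Synthetic respondents: 6 items that all share one 'common core'
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))
scores = latent + 0.8 * rng.normal(size=(200, 6))
alpha = cronbach_alpha(scores)
print(round(alpha, 3), round(split_half_reliability(scores), 3))
```

Because the six items load on a single shared factor, both estimates come out well above the 0.70 'good reliability' threshold cited above; removing the shared `latent` term drives them toward zero.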

### **4.3. Factor analysis**

The basic rationale of factor analysis is that the variables are correlated because they share one or more common components; if they did not correlate, there would be no need to perform factor analysis. Factor analysis operates on the correlation matrix of the variables to be factored [36]. For ease of interpretation, principal component analysis with the varimax rotation method was conducted within the factor analysis. In practice, principal component analysis can be conducted in SPSS through its factor analysis procedure.

**Table 2** shows the rotated matrix of factor loadings of each item on each component. The result is quite good: the 15 items are grouped into five components, with loadings below 0.4 suppressed. Items with factor loadings less than 0.4 are not displayed for clarity [48].


| Component | Factor | Loading(s) ≥ 0.4 | Cronbach's Alpha | Item |
|---|---|---|---|---|
| 1 | F15 | 0.792 | 0.743 | Item 13 |
| 1 | F14 | 0.788 | | Item 14 |
| 1 | F13 | 0.67, 0.492 | | Item 12 |
| 2 | F12 | 0.818 | 0.661 | Item 9 |
| 2 | F11 | 0.705 | | Item 8 |
| 2 | F10 | 0.461 | | Item 10 |
| 3 | F9 | 0.771 | 0.696 | Item 15 |
| 3 | F8 | 0.738 | | Item 11 |
| 3 | F7 | 0.522, 0.43 | | Item 6 |
| 3 | F6 | 0.439 | | Item 7 |
| 4 | F5 | 0.805 | 0.578 | Item 5 |
| 4 | F4 | 0.753 | | Item 4 |
| 5 | F3 | 0.785 | 0.662 | Item 1 |
| 5 | F2 | 0.782 | | Item 2 |
| 5 | F1 | 0.606 | | Item 3 |

Extraction Method: Principal Component Analysis.

Rotation Method: Varimax with Kaiser Normalization.

a. Rotation converged in 7 iterations.

**Table 2.** Rotated component matrix.

**Table 3.** Total variance explained.

Any component with a variance (represented by its eigenvalue) less than 1.0 was rejected, as it contributes less than the other factors to the model [36]. **Table 3** shows that the eigenvalues of components 1, 2, 3, 4, and 5 are all over 1.0. According to research, the components accepted should account for at least 60% of the cumulative variance [36, 40]. In this research, the cumulative percentage of variance for the five components is 64.843%, which satisfies this normally accepted measure (see **Table 3**).
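The retention rule applied here can be checked mechanically: for a correlation matrix the eigenvalues sum to the number of items, so the cumulative percentage of variance of the retained components is their eigenvalue sum over the item count. A small sketch with an illustrative eigenvalue list (made-up values, not the eigenvalues from this research):

```python
import numpy as np

# Illustrative eigenvalues of a 15-item correlation matrix (they sum to 15);
# these are hypothetical values, not the study's Table 3 figures.
eigenvalues = np.array([3.9, 2.4, 1.6, 1.3, 1.1, 0.9, 0.8, 0.7, 0.6, 0.5,
                        0.4, 0.3, 0.2, 0.2, 0.1])
assert np.isclose(eigenvalues.sum(), 15.0)

retained = eigenvalues[eigenvalues > 1.0]       # Kaiser criterion: eigenvalue > 1
cumulative_pct = 100 * retained.sum() / eigenvalues.sum()
print(len(retained), round(cumulative_pct, 3))  # components kept, cumulative variance %
```

With these values, five components pass the Kaiser criterion and together explain over 60% of the variance, mirroring the acceptance logic used for Table 3.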
