3.6 The information literate student validates understanding and interpretation of the information through discourse with other individuals, subject-area experts, and/or practitioners.

3.7 The information literate student determines whether the initial query should be revised.

Standard Four

The information literate student, individually or as a member of a group, uses information effectively to accomplish a specific purpose.

Performance Indicators

4.1 The information literate student applies new and prior information to the planning and creation of a particular product or performance.

4.2 The information literate student revises the development process for the product or performance.

4.3 The information literate student communicates the product or performance effectively to others.

Standard Five

The information literate student understands many of the economic, legal, and social issues surrounding the use of information and accesses and uses information ethically and legally.

Performance Indicators

5.1 The information literate student understands many of the ethical, legal, and socio-economic issues surrounding information and information technology.

5.2 The information literate student follows laws, regulations, institutional policies, and etiquette related to the access and use of information resources.

5.3 The information literate student acknowledges the use of information sources in communicating the product or performance.

Table 1. ACRL Information Literacy Competency Standards for Higher Education with Their Performance Indicators.

**5. Methodology**

Megan Oakleaf (2008, pp. 234-240) explains that instructors involved in information literacy instruction most often use "fixed-choice" tests and performance assessment (also known as authentic assessment) to measure student learning in their courses, including distance learning courses. She identifies the traditional models and behavioral theories of learning and educational measurement that serve as the theoretical basis for fixed-choice tests and relates them to early twentieth-century principles of scientific measurement. Fixed-choice tests include multiple-choice, matching, and true/false questions, and they are conducive to quantitative assessment. Performance or authentic assessment, on the other hand, is based on constructivist educational theories, which posit that knowledge is created or "constructed" by individuals rather than passed on fully formed from teacher to student. Learning, according to this theory, takes place through engagement and interaction with the real world, problem solving, critical thinking, and knowledge creation. Performance assignments are conducive to qualitative assessment. Both kinds of assessment can be readily matched with educational or learning standards and course learning-outcome objectives.

The quizzes and the pre- and post-assessment tests in LIBR 1100 belong, for the most part, to the fixed-choice category. Grading of these tests is therefore readily accomplished by the Blackboard system the instructors use to teach the course. The instructors must, however, review, and occasionally edit, the machine-graded quizzes because of a handful of "fill-in" questions; the pre- and post-assessment tests, by contrast, are completely machine-graded. As mentioned earlier, all of the questions in the quizzes and the pre- and post-tests are matched to the course learning-outcome objectives and test what the instructors want their students to learn and know. Since the course's learning-outcome objectives address the ACRL Information Literacy Competency Standards for Higher Education, along with their performance indicators, the questions in the pre- and post-assessment tests and the reading-assignment quizzes also address the Standards.

LIBR 1100's practicums and the annotated bibliography assignment are authentic assessment methods that assess performance. Students complete these assignments by performing several research tasks or operations, and completing the bibliography represents an accomplishment that the instructors believe reflects a significant part of what one does when performing library research. The students therefore acquire several skills as they complete the practicums and the annotated bibliography. These performance assignments are also matched to the course's learning-outcome objectives and address some of the ACRL Standards and their performance indicators.

The findings and conclusions of this study relating to the quizzes, practicums, and annotated bibliography are based on grades assigned by the instructor. Though the quiz grades are initially generated automatically by Blackboard, the instructor reviews the answers and may revise the grades because of the "fill-in" questions. The practicums and bibliography, however, are graded without the benefit of any automatic system or grading rubrics. The study's findings and conclusions relating to the machine-graded pre- and post-assessment tests are based on analysis of the answers of the five students who took both tests. These students are treated as a single group: the reported frequencies and percentages of correct and incorrect answers pertain to the entire group of participating students. The students' answers on both tests were downloaded from the section's Blackboard site into a Microsoft Excel spreadsheet, and the author used Excel formulas to tabulate the data and determine the averages.
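The tabulation just described, counting correct answers per question across the group and converting the counts to percentages, is simple enough to sketch in a few lines of Python. The answer data below is hypothetical and stands in for the spreadsheet downloaded from Blackboard; it is not the study's actual data.

```python
# Sketch of per-question tabulation for a small group of students.
# Each row is one student's answers; each column is one test question.
# True = correct answer, False = incorrect answer. (Hypothetical data.)
pre_answers = [
    [True, False, True],
    [False, False, True],
    [True, True, True],
    [False, False, True],
    [True, False, False],
]
post_answers = [
    [True, True, True],
    [True, False, True],
    [True, True, True],
    [True, True, True],
    [True, False, False],
]

def percent_correct_by_question(answers):
    """Return the percentage of students answering each question correctly."""
    n_students = len(answers)
    n_questions = len(answers[0])
    return [
        100.0 * sum(row[q] for row in answers) / n_students
        for q in range(n_questions)
    ]

pre_scores = percent_correct_by_question(pre_answers)
post_scores = percent_correct_by_question(post_answers)

# Report each question as "pre/post", the format used in Table 3,
# along with the group averages across all questions.
for q, (pre, post) in enumerate(zip(pre_scores, post_scores), start=1):
    print(f"Question {q}: {pre:.0f}/{post:.0f}")
print(f"Average: {sum(pre_scores)/len(pre_scores):.1f} -> "
      f"{sum(post_scores)/len(post_scores):.1f}")
```

In Excel the same counts would come from functions such as COUNTIF and AVERAGE; the point of the sketch is only that the analysis involves nothing beyond totals, percentages, and averages.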

The pre- and post-tests were graded as an incentive for the students to do well. By taking the pre-assessment test the students could earn up to 15 points toward their final grade, and by taking the post-assessment test they could earn up to 75 points. Both the pre- and post-assessment tests contain the same questions. The instructors feel that the fourteen weeks between the two tests is sufficient time for their students to forget the questions they answered at the beginning of the semester. They plan to use the test every semester, updating it regularly with new and revised questions and regularly changing the order of the questions.

Tables 2 and 3 show the relationships of the course outcome objectives, the ACRL Information Literacy Competency Standards for Higher Education along with their performance indicators, and the assessment test questions. Each pair of pre- and post-assessment scores in Table 3 (the pre-assessment score before the slash, followed by the post-assessment score) corresponding to the question number in that row is meant to serve as a rough measure of how well the students knew or had learned the particular learning point addressed by the corresponding question and standard. A higher score on the post-assessment question than on the pre-assessment question indicates that the students as a group had learned this outcome objective and standard performance indicator.

Each question in the test has one correct answer. Because the study's findings are based on comparisons of pre- and post-assessment answers, both for individual questions and in the aggregate, and since no cross-tabulation tables are used to test relationships between variables, no statistical analysis other than the determination of totals and averages is necessary.

The pre- and post-assessment method of measuring student learning outcomes is recognized as a legitimate way to determine what students are learning in class (Kidder, 1981, p. 45; Hernon, Dugan, & Schwartz, 2006, p. 137; Diamond, 2008, p. 163; Black & Wiliam, 2006; McMillan, 2001, pp. 56-89). The method is often administered as a test with multiple-choice, true/false, short-answer, or open-ended questions, with the purpose of measuring students' skills or what they know. Some instructors employing this method have used the same set of questions in both pre- and post-tests to evaluate a single group of students, administering the pre-test before the course content was taught and the post-test at the very end of the course.

As with all testing methods, the reliability and validity of the pre- and post-test method for accurately measuring what students have learned depend entirely on the test itself: the integrity of the questions, the test's design, and its method of application all affect the reliability and validity of a testing instrument. In his 1993 article "Evaluating Library Instruction: Doing the Best You Can with What You Have," Donald Barclay provides an interesting examination of pre- and post-tests and the kinds of questions that instructors could include in such tests (Barclay, 1993, pp. 197-198, 201). He concludes his article with the observation that, though assessments may not always meet the highest standards of scientific rigor, this should not deter instructors from implementing them; early attempts at assessment can serve as a spur to begin the process of continuous improvement in the quality of the assessment.

Assessment Methods of Student Learning in Web-Based Distance Courses: A Case Study

**6. Findings**

Six students were enrolled in the distance section of LIBR 1100 in the fall of 2010. Five of these students actively participated in all the course assignments; one hardly participated at all and received several 0-point scores. All of the study's findings are based on the input of the five students who actively participated. They were required to take 12 quizzes and could earn a maximum of 8 points on eleven of the quizzes and 5 points on one of them. Table 4 includes the students' scores on all 12 quizzes. The great majority of the questions on the quizzes were graded automatically by Blackboard. However, the instructor who taught the online course in the fall of 2010 reviewed the handful of fill-in questions and, using her own judgment, determined whether the fill-in answers were correct; she did not use a grading rubric while reviewing the questions.

The students also took 6 practicums, and their scores on these are reported in Table 5. Practicum one required the students to determine a thesis for their annotated bibliography assignment. Practicum two required that they find books on the topic of the thesis using keywords derived from the thesis statement; they searched for the books in online catalogs.
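The pre/post comparison described earlier in this section, in which a higher post-test score is read as evidence that the group learned the corresponding point, can be sketched in a few lines of Python. The score pairs below are hypothetical and are used only to illustrate the decision rule.

```python
# Hypothetical pre/post score pairs (percent of the group answering
# correctly), keyed by assessment test question number. These are
# illustrative values, not the study's reported scores.
scores = {
    1: (40, 80),
    2: (80, 100),
    3: (80, 80),
    4: (40, 100),
}

# A question counts as evidence of learning when the group's post-test
# score exceeds its pre-test score; equal scores are inconclusive.
for question, (pre, post) in sorted(scores.items()):
    if post > pre:
        verdict = "improved"
    elif post == pre:
        verdict = "unchanged"
    else:
        verdict = "declined"
    print(f"Question {question}: {pre}/{post} ({verdict})")
```

The rule is deliberately coarse: with only five students, each correct answer moves a score by 20 percentage points, so the comparison serves as a rough group-level indicator rather than a statistical test.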

| LIBR 1100 Outcome Objective | ACRL Information Literacy Performance Indicators |
|---|---|
| 1 | 1.1 1.2 1.3 1.4 |
| 2 | 2.1 2.2 2.3 2.4 2.5 |
| 3 | 4.1 4.2 4.3 |
| 4 | 3.1 3.2 3.3 3.4 3.5 3.6 3.7 5.2 5.3 |

1.1 etc. identifies the Standard number and Performance Indicator number addressed by the outcome objective.

Table 2. Relationship of LIBR 1100 Course Outcome Objectives to ACRL Information Literacy Performance Indicators.

[Table 3: each of the fifteen assessment test questions is listed under the ACRL performance indicator (1.1 through 5.3) it addresses, with the group's scores reported as pre-test/post-test pairs; #/# identifies pre-test and post-test scores. Columns: Assessment Test Question, ACRL Information Literacy Performance Indicators.]

Table 3. Pre-test and Post-test Scores Based on Test Questions and Performance Indicators.
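The objective-to-indicator mapping in Table 2 can also be expressed as a simple lookup structure, which is convenient when tagging test questions with the standards they address. The mapping below is taken from Table 2; the reverse-lookup helper is illustrative, not part of the study.

```python
# Mapping of LIBR 1100 course outcome objectives to the ACRL
# performance indicators they address (from Table 2).
OBJECTIVE_TO_INDICATORS = {
    1: ["1.1", "1.2", "1.3", "1.4"],
    2: ["2.1", "2.2", "2.3", "2.4", "2.5"],
    3: ["4.1", "4.2", "4.3"],
    4: ["3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "5.2", "5.3"],
}

def objectives_for_indicator(indicator):
    """Reverse lookup: which outcome objectives address a given indicator."""
    return [
        obj
        for obj, indicators in OBJECTIVE_TO_INDICATORS.items()
        if indicator in indicators
    ]

# Example: indicator 4.2 is addressed by outcome objective 3.
print(objectives_for_indicator("4.2"))
```

Note that indicator 5.1 does not appear in the mapping, reflecting Table 2, where no course outcome objective addresses it.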

