PRINCIPLES OF ADULT LEARNING


By Stephen Lieb
Senior Technical Writer and Planner, Arizona Department of Health Services
and part-time Instructor, South Mountain Community College
from VISION, Fall 1991

 

Adults As Learners


Part of being an effective instructor involves understanding how adults learn best. Compared to children and teens, adults have special needs and requirements as learners. Although this may seem obvious, adult learning is a relatively new area of study. The field of adult learning was pioneered by Malcolm Knowles.
He identified the following characteristics of adult learners:

  • Adults are autonomous and self-directed. They need to be free to direct themselves. Their teachers must actively involve adult participants in the learning process and serve as facilitators for them. Specifically, they must get participants’ perspectives about what topics to cover and let them work on projects that reflect their interests. They should allow the participants to assume responsibility for presentations and group leadership. They have to be sure to act as facilitators, guiding participants to their own knowledge rather than supplying them with facts. Finally, they must show participants how the class will help them reach their goals (e.g., via a personal goals sheet).
  • Adults have accumulated a foundation of life experiences and knowledge that may include work-related activities, family responsibilities, and previous education. They need to connect learning to this knowledge/experience base. To help them do so, they should draw out participants’ experience and knowledge which is relevant to the topic. They must relate theories and concepts to the participants and recognize the value of experience in learning.
  • Adults are goal-oriented. Upon enrolling in a course, they usually know what goal they want to attain. They, therefore, appreciate an educational program that is organized and has clearly defined elements. Instructors must show participants how this class will help them attain their goals. This classification of goals and course objectives must be done early in the course.
  • Adults are relevancy-oriented. They must see a reason for learning something. Learning has to be applicable to their work or other responsibilities to be of value to them. Therefore, instructors must identify objectives for adult participants before the course begins. This means, also, that theories and concepts must be related to a setting familiar to participants. This need can be fulfilled by letting participants choose projects that reflect their own interests.
  • Adults are practical, focusing on the aspects of a lesson most useful to them in their work. They may not be interested in knowledge for its own sake. Instructors must tell participants explicitly how the lesson will be useful to them on the job.
  • As do all learners, adults need to be shown respect. Instructors must acknowledge the wealth of experiences that adult participants bring to the classroom. These adults should be treated as equals in experience and knowledge and allowed to voice their opinions freely in class.

Motivating the Adult Learner


Another aspect of adult learning is motivation. At least six factors serve as sources of motivation for adult learning:

  • Social relationships: to make new friends, to meet a need for associations and friendships.
  • External expectations: to comply with instructions from someone else; to fulfill the expectations or recommendations of someone with formal authority.
  • Social welfare: to improve ability to serve mankind, prepare for service to the community, and improve ability to participate in community work.
  • Personal advancement: to achieve higher status in a job, secure professional advancement, and stay abreast of competitors.
  • Escape/Stimulation: to relieve boredom, provide a break in the routine of home or work, and provide a contrast to other exacting details of life.
  • Cognitive interest: to learn for the sake of learning, seek knowledge for its own sake, and to satisfy an inquiring mind.

Barriers and Motivation


Unlike children and teenagers, adults have many responsibilities that they must balance against the demands of learning. Because of these responsibilities, adults have barriers against participating in learning. Some of these barriers include lack of time, money, confidence, or interest, lack of information about opportunities to learn, scheduling problems, “red tape,” and problems with child care and transportation.

Motivation factors can also be a barrier. What motivates adult learners? Typical motivations include a requirement for competence or licensing, an expected (or realized) promotion, job enrichment, a need to maintain old skills or learn new ones, a need to adapt to job changes, or the need to learn in order to comply with company directives.

The best way to motivate adult learners is simply to enhance their reasons for enrolling and decrease the barriers. Instructors must learn why their students are enrolled (the motivators); they have to discover what is keeping them from learning. Then the instructors must plan their motivating strategies. A successful strategy includes showing adult learners the relationship between training and an expected promotion.

Learning Tips for Effective Instructors


Educators must remember that learning occurs within each individual as a continual process throughout life. People learn at different speeds, so it is natural for them to be anxious or nervous when faced with a learning situation. Positive reinforcement by the instructor can enhance learning, as can proper timing of the instruction.

Learning results from stimulation of the senses. In some people, one sense is used more than others to learn or recall information. Instructors should present materials that stimulate as many senses as possible in order to increase their chances of teaching success.

There are four critical elements of learning that must be addressed to ensure that participants learn. These elements are

  1. motivation
  2. reinforcement
  3. retention
  4. transference

Motivation. If the participant does not recognize the need for the information (or has been offended or intimidated), all of the instructor’s effort to assist the participant to learn will be in vain. The instructor must establish rapport with participants and prepare them for learning; this provides motivation. Instructors can motivate students via several means:

  • Set a feeling or tone for the lesson. Instructors should try to establish a friendly, open atmosphere that shows the participants they will help them learn.
  • Set an appropriate level of concern. The level of tension must be adjusted to meet the level of importance of the objective. If the material has a high level of importance, a higher level of tension/stress should be established in the class. However, people learn best under low to moderate stress; if the stress is too high, it becomes a barrier to learning.
  • Set an appropriate level of difficulty. The degree of difficulty should be set high enough to challenge participants but not so high that they become frustrated by information overload. The instruction should predict and reward participation, culminating in success.

In addition, participants need specific knowledge of their learning results (feedback). Feedback must be specific, not general. Participants must also see a reward for learning. The reward does not necessarily have to be monetary; it can be simply a demonstration of benefits to be realized from learning the material. Finally, the participant must be interested in the subject. Interest is directly related to reward. Adults must see the benefit of learning in order to motivate themselves to learn the subject.

Reinforcement. Reinforcement is a very necessary part of the teaching/learning process; through it, instructors encourage correct modes of behavior and performance.

  • Positive reinforcement is normally used by instructors who are teaching participants new skills. As the name implies, positive reinforcement is “good” and reinforces “good” (or positive) behavior.
  • Negative reinforcement is normally used by instructors teaching a new skill or new information. It is useful in trying to change modes of behavior. The result of negative reinforcement is extinction — that is, the instructor uses negative reinforcement until the “bad” behavior disappears, or it becomes extinct. (To read more about negative reinforcement, you can check out Maricopa Center for Learning & Instruction Negative Reinforcement University.)

When instructors are trying to change behaviors (old practices), they should apply both positive and negative reinforcement.

Reinforcement should be part of the teaching-learning process to ensure correct behavior. Instructors need to use it on a frequent and regular basis early in the process to help the students retain what they have learned. Then, they should use reinforcement only to maintain consistent, positive behavior.

Retention. Students must retain information from classes in order to benefit from the learning. The instructors’ jobs are not finished until they have assisted the learner in retaining the information. In order for participants to retain the information taught, they must see a meaning or purpose for that information. They must also understand and be able to interpret and apply the information. This understanding includes their ability to assign the correct degree of importance to the material.

The amount of retention will be directly affected by the degree of original learning. Simply stated, if the participants did not learn the material well initially, they will not retain it well either.

Retention by the participants is directly affected by their amount of practice during the learning. Instructors should emphasize retention and application. After the students demonstrate correct (desired) performance, they should be urged to practice to maintain the desired performance. Distributed practice is similar in effect to intermittent reinforcement.

Transference. Transfer of learning is the result of training — it is the ability to use the information taught in the course but in a new setting. As with reinforcement, there are two types of transfer: positive and negative.

  • Positive transference, like positive reinforcement, occurs when participants use the behavior taught in the course.
  • Negative transference, again like negative reinforcement, occurs when the participants do not do what they are told not to do. This results in a positive (desired) outcome.

Transference is most likely to occur in the following situations:

  • Association — participants can associate the new information with something that they already know.
  • Similarity — the information is similar to material that participants already know; that is, it revisits a logical framework or pattern.
  • Degree of original learning — participants’ degree of original learning was high.
  • Critical attribute element — the information learned contains elements that are extremely beneficial (critical) on the job.

Although adult learning is relatively new as a field of study, it is just as substantial as traditional education and carries the potential for greater success. Of course, the heightened success requires a greater responsibility on the part of the teacher. Additionally, the learners come to the course with precisely defined expectations. Unfortunately, there are barriers to their learning. The best motivators for adult learners are interest and selfish benefit. If they can be shown that the course benefits them pragmatically, they will perform better, and the benefits will be longer lasting.

CLASSROOM ASSESSMENT TECHNIQUES

By Thomas A. Angelo and K. Patricia Cross
From
Classroom Assessment Techniques, A Handbook for College Teachers, 2nd Ed.

 

In the 1990s, educational reformers are seeking answers to two fundamental questions: (1) How well are students learning? and (2) How effectively are teachers teaching? Classroom Research and Classroom Assessment respond directly to concerns about better learning and more effective teaching. Classroom Research was developed to encourage college teachers to become more systematic and sensitive observers of learning as it takes place every day in their classrooms. Faculty have an exceptional opportunity to use their classrooms as laboratories for the study of learning and through such study to develop a better understanding of the learning process and the impact of their teaching upon it. Classroom Assessment, a major component of Classroom Research, involves students and teachers in the continuous monitoring of students’ learning. It provides faculty with feedback about their effectiveness as teachers, and it gives students a measure of their progress as learners. Most important, because Classroom Assessments are created, administered, and analyzed by teachers themselves on questions of teaching and learning that are important to them, the likelihood that instructors will apply the results of the assessment to their own teaching is greatly enhanced.

Through close observation of students in the process of learning, the collection of frequent feedback on students’ learning, and the design of modest classroom experiments, teachers can learn much about how students learn and, more specifically, how students respond to particular teaching approaches. Classroom Assessment helps individual college teachers obtain useful feedback on what, how much, and how well their students are learning. Faculty can then use this information to refocus their teaching to help students make their learning more efficient and more effective.

College instructors who have assumed that their students were learning what they were trying to teach them are regularly faced with disappointing evidence to the contrary when they grade tests and term papers. Too often, students have not learned as much or as well as was expected. There are gaps, sometimes considerable ones, between what was taught and what has been learned. By the time faculty notice these gaps in knowledge or understanding, it is frequently too late to remedy the problems.

To avoid such unhappy surprises, faculty and students need better ways to monitor learning throughout the semester. Specifically, teachers need a continuous flow of accurate information on student learning. For example, if a teacher’s goal is to help students learn points “A” through “Z” during the course, then that teacher needs first to know whether all students are really starting at point “A” and, as the course proceeds, whether they have reached intermediate points “B,” “G,” “L,” “R,” “W,” and so on. To ensure high-quality learning, it is not enough to test students when the syllabus has arrived at points “M” and “Z.” Classroom Assessment is particularly useful for checking how well students are learning at those initial and intermediate points, and for providing information for improvement when learning is less than satisfactory.

Through practice in Classroom Assessment, faculty become better able to understand and promote learning, and increase their ability to help the students themselves become more effective, self-assessing, self-directed learners. Simply put, the central purpose of Classroom Assessment is to empower both teachers and their students to improve the quality of learning in the classroom.

Classroom Assessment is an approach designed to help teachers find out what students are learning in the classroom and how well they are learning it. This approach has the following characteristics:

  • Learner-Centered

Classroom Assessment focuses the primary attention of teachers and students on observing and improving learning, rather than on observing and improving teaching. Classroom Assessment can provide information to guide teachers and students in making adjustments to improve learning.

  • Teacher-Directed

Classroom Assessment respects the autonomy, academic freedom, and professional judgement of college faculty. The individual teacher decides what to assess, how to assess, and how to respond to the information gained through the assessment. Also, the teacher is not obliged to share the result of Classroom Assessment with anyone outside the classroom.

  • Mutually Beneficial

Because it is focused on learning, Classroom Assessment requires the active participation of students. By cooperating in assessment, students reinforce their grasp of the course content and strengthen their own skills at self-assessment. Their motivation is increased when they realize that faculty are interested and invested in their success as learners. Faculty also sharpen their teaching focus by continually asking themselves three questions: “What are the essential skills and knowledge I am trying to Teach?” “How can I find out whether students are learning them?” “How can I help students learn better?” As teachers work closely with students to answer these questions, they improve their teaching skills and gain new insights.

  • Formative

Classroom Assessment’s purpose is to improve the quality of student learning, not to provide evidence for evaluating or grading students. The assessments are almost never graded and are almost always anonymous.

  • Context-Specific

Classroom Assessments have to respond to the particular needs and characteristics of the teachers, students, and disciplines to which they are applied. What works well in one class will not necessarily work in another.

  • Ongoing

Classroom Assessment is an ongoing process, best thought of as the creation and maintenance of a classroom “feedback loop.” By using a number of simple Classroom Assessment Techniques that are quick and easy to use, teachers get feedback from students on their learning. Faculty then complete the loop by providing students with feedback on the results of the assessment and suggestions for improving learning. To check on the usefulness of their suggestions, faculty use Classroom Assessment again, continuing the “feedback loop.” As the approach becomes integrated into everyday classroom activities, the communications loop connecting faculty and students — and teaching and learning — becomes more efficient and more effective.

  • Rooted in Good Teaching Practice

Classroom Assessment is an attempt to build on existing good practice by making feedback on students’ learning more systematic, more flexible, and more effective. Teachers already ask questions, react to students’ questions, monitor body language and facial expressions, read homework and tests, and so on. Classroom Assessment provides a way to integrate assessment systematically and seamlessly into the traditional classroom teaching and learning process.

As they are teaching, faculty monitor and react to student questions, comments, body language, and facial expressions in an almost automatic fashion. This “automatic” information gathering and impression formation is a subconscious and implicit process. Teachers depend heavily on their impressions of student learning and make important judgments based on them, but they rarely make those informal assessments explicit or check them against the students’ own impressions or ability to perform. In the course of teaching, college faculty assume a great deal about their students’ learning, but most of their assumptions remain untested.

Even when college teachers routinely gather potentially useful information on student learning through questions, quizzes, homework, and exams, it is often collected too late, at least from the students’ perspective, to affect their learning. In practice, it is very difficult to “de-program” students who are used to thinking of anything they have been tested and graded on as being “over and done with.” Consequently, the most effective times to assess and provide feedback are before the chapter tests or the midterm and final examinations. Classroom Assessment aims at providing that early feedback.

Classroom Assessment is based on seven assumptions:

  1. The quality of student learning is directly, although not exclusively, related to the quality of teaching. Therefore, one of the most promising ways to improve learning is to improve teaching.
  2. To improve their effectiveness, teachers need first to make their goals and objectives explicit and then to get specific, comprehensible feedback on the extent to which they are achieving those goals and objectives.
  3. To improve their learning, students need to receive appropriate and focused feedback early and often; they also need to learn how to assess their own learning.
  4. The type of assessment most likely to improve teaching and learning is that conducted by faculty to answer questions they themselves have formulated in response to issues or problems in their own teaching.
  5. Systematic inquiry and intellectual challenge are powerful sources of motivation, growth, and renewal for college teachers, and Classroom Assessment can provide such challenge.
  6. Classroom Assessment does not require specialized training; it can be carried out by dedicated teachers from all disciplines.
  7. By collaborating with colleagues and actively involving students in Classroom Assessment efforts, faculty (and students) enhance learning and personal satisfaction.

To begin Classroom Assessment, it is recommended that you try only one or two of the simplest Classroom Assessment Techniques in only one class. This way, very little of the teacher’s or students’ planning and preparation time and energy is risked. In most cases, trying out a simple Classroom Assessment Technique requires only five to ten minutes of class time and less than an hour of time outside class. After trying one or two quick assessments, you can decide whether this approach is worth further investments of time and energy. This process of starting small involves three steps:

Step 1: Planning

Select one, and only one, of your classes in which to try out the Classroom Assessment. Decide on the class meeting and select a Classroom Assessment Technique. Choose a simple and quick one.

Step 2: Implementing

Make sure the students know what you are doing and that they clearly understand the procedure. Collect the responses and analyze them as soon as possible.

Step 3: Responding

To capitalize on time spent assessing, and to motivate students to become actively involved, “close the feedback loop” by letting them know what you learned from the assessments and what difference that information will make.

Five suggestions for a successful start:

  1. If a Classroom Assessment Technique does not appeal to your intuition and professional judgement as a teacher, don’t use it.
  2. Don’t make Classroom Assessment into a self-inflicted chore or burden.
  3. Don’t ask your students to use any Classroom Assessment Technique you haven’t previously tried on yourself.
  4. Allow for more time than you think you will need to carry out and respond to the assessment.
  5. Make sure to “close the loop.” Let students know what you learn from their feedback and how you and they can use that information to improve learning.

CLASSROOM ASSESSMENT TECHNIQUE EXAMPLES

By Thomas A. Angelo and K. Patricia Cross
From
Classroom Assessment Techniques: A Handbook for College Teachers, 2nd Ed.

 

Fifty Classroom Assessment Techniques are presented in this book. The book is in the HCC library if you want additional techniques or additional information on the five described below. These techniques are to be used as starting points, ideas to be adapted and improved upon.

Background Knowledge Probe

 

Description:

At the first class meeting, many college teachers ask students for general information on their level of preparation, often requesting that students list courses they have already taken in the relevant field. This technique is designed to collect much more specific, and more useful, feedback on students’ prior learning. Background Knowledge Probes are short, simple questionnaires prepared by instructors for use at the beginning of a course, at the start of a new unit or lesson, or prior to introducing an important new topic. A given Background Knowledge Probe may require students to write short answers, to circle the correct response to multiple-choice questions, or both.

Step-by-Step Procedure:

  1. Before introducing an important new concept, subject, or topic in the course syllabus, consider what the students may already know about it. Recognizing that their knowledge may be partial, fragmentary, simplistic, or even incorrect, try to find at least one point that most students are likely to know, and use that point to lead into other, less familiar points.
  2. Prepare two or three open-ended questions, a handful of short-answer questions, or ten to twenty multiple-choice questions that will probe the students’ existing knowledge of that concept, subject, or topic. These questions need to be carefully phrased, since vocabulary that may not be familiar to the students can obscure your assessment of how well they know the facts or concepts.
  3. Write your open-ended questions on the chalkboard, or hand out short questionnaires. Direct students to answer open-ended questions succinctly, in two or three sentences if possible. Make a point of announcing that these Background Knowledge Probes are not tests or quizzes and will not be graded. Encourage students to give thoughtful answers that will help you make effective instructional decisions.
  4. At the next class meeting, or as soon as possible, let students know the results, and tell them how that information will affect what you do as the teacher and how it should affect what they do as learners.

Minute Paper

 

Description:

No other technique has been used more often or by more college teachers than the Minute Paper. This technique — also known as the One-Minute Paper and the Half-Sheet Response — provides a quick and extremely simple way to collect written feedback on student learning. To use the Minute Paper, an instructor stops class two or three minutes early and asks students to respond briefly to some variation on the following two questions: “What was the most important thing you learned during this class?” and “What important question remains unanswered?” Students then write their responses on index cards or half-sheets of scrap paper and hand them in.

Step-by-Step Procedure:

  1. Decide first what you want to focus on and, as a consequence, when to administer the Minute Paper. If you want to focus on students’ understanding of a lecture, the last few minutes of class may be the best time. If your focus is on a prior homework assignment, however, the first few minutes may be more appropriate.
  2. Using the two basic questions from the “Description” above as starting points, write Minute Paper prompts that fit your course and students. Try out your Minute Paper on a colleague or teaching assistant before using it in class.
  3. Plan to set aside five to ten minutes of your next class to use the technique, as well as time later to discuss the results.
  4. Before class, write one or, at the most, two Minute Paper questions on the chalkboard or prepare an overhead transparency.
  5. At a convenient time, hand out index cards or half-sheets of scrap paper.
  6. Unless there is a very good reason to know who wrote what, direct students to leave their names off the papers or cards.
  7. Let the students know how much time they will have (two to five minutes per question is usually enough), what kinds of answers you want (words, phrases, or short sentences), and when they can expect your feedback.

Muddiest Point

 

Description:

The Muddiest Point is just about the simplest technique one can use. It is also remarkably efficient, since it provides a high information return for a very low investment of time and energy. The technique consists of asking students to jot down a quick response to one question: “What was the muddiest point in ……..?” The focus of the Muddiest Point assessment might be a lecture, a discussion, a homework assignment, a play, or a film.

Step-by-Step Procedure:

  1. Determine what you want feedback on: the entire class session or one self-contained segment? A lecture, a discussion, a presentation?
  2. If you are using the technique in class, reserve a few minutes at the end of the class session. Leave enough time to ask the question, to allow students to respond, and to collect their responses by the usual ending time.
  3. Let students know beforehand how much time they will have to respond and what use you will make of their responses.
  4. Pass out slips of paper or index cards for students to write on.
  5. Collect the responses as or before students leave. Stationing yourself at the door and collecting “muddy points” as students file out is one way; leaving a “muddy point” collection box by the exit is another.
  6. Respond to the students’ feedback during the next class meeting or as soon as possible afterward.
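Steps 5 and 6 amount to collecting anonymous slips and grouping them by theme so you know what to reteach. As a rough illustration only (the technique itself needs nothing more than paper), the sample responses, topic keywords, and function name below are invented; a Python sketch of the tallying might look like this:

```python
from collections import Counter

def tally_muddiest_points(responses, topics):
    """Group free-text "muddiest point" responses under instructor-chosen
    topic keywords; responses matching no keyword are counted as 'other'."""
    counts = Counter()
    for response in responses:
        text = response.lower()
        matched = [topic for topic in topics if topic.lower() in text]
        if matched:
            counts.update(matched)      # one tally per matched topic
        else:
            counts["other"] += 1
    return counts

# Hypothetical anonymous slips collected at the door
slips = [
    "I'm still confused about negative reinforcement",
    "the difference between validity and reliability",
    "negative reinforcement vs punishment",
    "how to write a one-sentence summary",
]
topics = ["reinforcement", "validity", "summary"]
print(tally_muddiest_points(slips, topics).most_common())
```

The most frequent "muddy" topics rise to the top of the list, which suggests where the next class session's review time is best spent.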

One-Sentence Summary

 

Description:

This simple technique challenges students to answer the question “Who does what to whom, when, where, how, and why?” (represented by the letters WDWWWWHW) about a given topic, and then to synthesize those answers into a single informative, grammatical, and long summary sentence.

Step-by-Step Procedure:

  1. Select an important topic or work that your students have recently studied in your course and that you expect them to learn to summarize.
  2. Working as quickly as you can, answer the questions “Who Did/Does What to Whom, When, Where, How, and Why?” in relation to that topic. Note how long this first step takes you.
  3. Next, turn your answers into a grammatical sentence that follows the WDWWWWHW pattern. Note how long this second step takes.
  4. Allow your students up to twice as much time as it took you to carry out the task, and give them clear directions on the One-Sentence Summary technique before you announce the topic to be summarized.
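The assembly in steps 2 and 3 is mechanical enough to sketch in code. As a purely illustrative aid (the technique itself is done on paper, and the dictionary keys and example answers below are invented, not part of Angelo and Cross’s procedure), the WDWWWWHW pattern can be mimicked like this:

```python
def one_sentence_summary(answers):
    """Join WDWWWWHW answers, in order, into one summary sentence.
    Missing or empty answers are simply skipped."""
    order = ["who", "does_what", "to_whom", "when", "where", "how", "why"]
    parts = [answers[key] for key in order if answers.get(key)]
    sentence = " ".join(parts)
    return sentence[0].upper() + sentence[1:] + "."

# Hypothetical answers about the Classroom Assessment approach itself
example = {
    "who": "teachers",
    "does_what": "collect anonymous written feedback",
    "to_whom": "from their students",
    "when": "during regular class sessions",
    "where": "in their own classrooms",
    "how": "with short ungraded questionnaires",
    "why": "so they can adjust instruction before it is too late",
}
summary = one_sentence_summary(example)
print(summary)
```

Working through the same assembly by hand, in order, is exactly the exercise students are asked to perform.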

What’s the Principle?

 

Description:

After students figure out what type of problem they are dealing with, they often must then decide what principle or principles to apply in order to solve the problem. This technique focuses on this step in problem solving. It provides students with a few problems and asks them to state the principle that best applies to each problem.

Step-by-Step Procedure:

  1. Identify the basic principles that you expect students to learn in your course. Make sure to focus only on those that students have been taught.
  2. Find or create sample problems or short examples that illustrate each of these principles. Each example should illustrate only one principle.
  3. Create a What’s the Principle? form that includes a listing of the relevant principles and specific examples or problems for students to match to those principles.
  4. Try out your assessment on a graduate student or colleague to make certain it is not too difficult or too time-consuming to use in class.
  5. After you have made any necessary revisions to the form, administer the assessment.

QUIZZES, TESTS, AND EXAMS

By Barbara Gross Davis, University of California, Berkeley.
From
Tools for Teaching, copyright by Jossey-Bass. For purchase or reprint information,
contact
Jossey-Bass. Reprinted here with permission, September 1, 1999.

 

Many teachers dislike preparing and grading exams, and most students dread taking them. Yet tests are powerful educational tools that serve at least four functions. First, tests help you evaluate students and assess whether they are learning what you are expecting them to learn. Second, well-designed tests serve to motivate and help students structure their academic efforts. Crooks (1988), McKeachie (1986), and Wergin (1988) report that students study in ways that reflect how they think they will be tested. If they expect an exam focused on facts, they will memorize details; if they expect a test that will require problem solving or integrating knowledge, they will work toward understanding and applying information. Third, tests can help you understand how successfully you are presenting the material. Finally, tests can reinforce learning by providing students with indicators of what topics or skills they have not yet mastered and should concentrate on. Despite these benefits, testing is also emotionally charged and anxiety producing. The following suggestions can enhance your ability to design tests that are effective in motivating, measuring, and reinforcing learning.

A note on terminology: instructors often use the terms tests, exams, and even quizzes interchangeably. Test experts Jacobs and Chase (1992), however, make distinctions among them based on the scope of content covered and their weight or importance in calculating the final grade for the course. An examination is the most comprehensive form of testing, typically given at the end of the term (as a final) and one or two times during the semester (as midterms). A test is more limited in scope, focusing on particular aspects of the course material. A course might have three or four tests. A quiz is even more limited and usually is administered in fifteen minutes or less. Though these distinctions are useful, the terms test and exam will be used interchangeably throughout the rest of this section because the principles in planning, constructing, and administering them are similar.

General Strategies


Spend adequate amounts of time developing your tests. As you prepare a test, think carefully about the learning outcomes you wish to measure, the type of items best suited to those outcomes, the range of difficulty of items, the length and time limits for the test, the format and layout of the exam, and your scoring procedures.

Match your tests to the content you are teaching. Ideally, the tests you give will measure students’ achievement of your educational goals for the course. Test items should be based on the content and skills that are most important for your students to learn. To keep track of how well your tests reflect your objectives, you can construct a grid, listing your course objectives along the side of the page and content areas along the top. For each test item, check off the objective and content it covers. (Sources: Ericksen, 1969; Jacobs and Chase, 1992; Svinicki and Woodward, 1982)
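The grid described above can be kept as a simple tally. This is a minimal Python sketch; the objective and content-area names are invented for illustration:

```python
from collections import Counter

# Hypothetical test blueprint: each item is tagged with the
# course objective and content area it covers.
items = [
    {"objective": "apply concepts", "content": "unit 1"},
    {"objective": "recall facts",   "content": "unit 1"},
    {"objective": "apply concepts", "content": "unit 2"},
]

# Tally items per (objective, content) cell of the grid.
grid = Counter((i["objective"], i["content"]) for i in items)

for (objective, content), n in sorted(grid.items()):
    print(f"{objective:15s} | {content:7s} | {n} item(s)")
```

An empty cell in such a grid flags an objective or content area the draft exam does not yet cover.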

Try to make your tests valid, reliable, and balanced. A test is valid if its results are appropriate and useful for making decisions about an aspect of students’ achievement (Gronlund and Linn, 1990). Technically, validity refers to the appropriateness of the interpretation of the results and not to the test itself, though colloquially we speak about a test being valid. Validity is a matter of degree and considered in relation to specific use or interpretation (Gronlund and Linn, 1990). For example, the results of a writing test may have a high degree of validity for indicating the level of a student’s composition skills, a moderate degree of validity for predicting success in later composition courses, and essentially no validity for predicting success in mathematics or physics. Validity can be difficult to determine. A practical approach is to focus on content validity, the extent to which the content of the test represents an adequate sampling of the knowledge and skills taught in the course. If you design the test to cover information in lectures and readings in proportion to their importance in the course, then the interpretations of test scores are likely to have greater validity. An exam that consists of only a few difficult items, however, will not yield valid interpretations of what students know.

A test is reliable if it accurately and consistently evaluates a student’s performance. The purest measure of reliability would entail having a group of students take the same test twice and get the same scores (assuming that we could erase their memories of test items from the first administration). This is impractical, of course, but there are technical procedures for determining reliability. In general, ambiguous questions, unclear directions, and vague scoring criteria threaten reliability. Very short tests are also unlikely to be highly reliable. It is also important for a test to be balanced: to cover most of the main ideas and important concepts in proportion to the emphasis they received in class.

If you are interested in learning more about psychometric concepts and the technical properties of tests, here are some books you might review:

Ebel, R. L., and Frisbie, D. A. Essentials of Educational Measurement. (5th ed.) Englewood Cliffs, N.J.: Prentice-Hall, 1990.

Gronlund, N. E., and Linn, R. Measurement and Evaluation in Teaching. (6th ed.) New York: Macmillan, 1990.

Mehrens, W. A., and Lehmann, I. J. Measurement and Evaluation in Education and Psychology. (4th ed.) New York: Holt, Rinehart & Winston, 1991.

Use a variety of testing methods. Research shows that students vary in their preferences for different formats, so using a variety of methods will help students do their best (Jacobs and Chase, 1992). Multiple-choice or short-answer questions are appropriate for assessing students’ mastery of details and specific knowledge, while essay questions assess comprehension, the ability to integrate and synthesize, and the ability to apply information to new situations. A single test can have several formats. Try to avoid introducing a new format on the final exam: if you have given all multiple-choice quizzes or midterms, don’t ask students to write an all-essay final. (Sources: Jacobs and Chase, 1992; Lowman, 1984; McKeachie, 1986; Svinicki, 1987)

Write questions that test skills other than recall. Research shows that most tests administered by faculty rely too heavily on students’ recall of information (Milton, Pollio, and Eison, 1986). Bloom (1956) argues that it is important for tests to measure higher-level learning as well. Fuhrmann and Grasha (1983, p. 170) have adapted Bloom’s taxonomy for test development. Here is a condensation of their list:

To measure knowledge (common terms, facts, principles, procedures), ask these kinds of questions: Define, Describe, Identify, Label, List, Match, Name, Outline, Reproduce, Select, State. Example: “List the steps involved in titration.”

To measure comprehension (understanding of facts and principles, interpretation of material), ask these kinds of questions: Convert, Defend, Distinguish, Estimate, Explain, Extend, Generalize, Give examples, Infer, Predict, Summarize. Example: “Summarize the basic tenets of deconstructionism.”

To measure application (solving problems, applying concepts and principles to new situations), ask these kinds of questions: Demonstrate, Modify, Operate, Prepare, Produce, Relate, Show, Solve, Use. Example: “Calculate the deflection of a beam under uniform loading.”

To measure analysis (recognition of unstated assumptions or logical fallacies, ability to distinguish between facts and inferences), ask these kinds of questions: Diagram, Differentiate, Distinguish, Illustrate, Infer, Point out, Relate, Select, Separate, Subdivide. Example: “In the president’s State of the Union Address, which statements are based on facts and which are based on assumptions?”

To measure synthesis (integrate learning from different areas or solve problems by creative thinking), ask these kinds of questions: Categorize, Combine, Compile, Devise, Design, Explain, Generate, Organize, Plan, Rearrange, Reconstruct, Revise, Tell. Example: “How would you restructure the school day to reflect children’s developmental needs?”

To measure evaluation (judging and assessing), ask these kinds of questions: Appraise, Compare, Conclude, Contrast, Criticize, Describe, Discriminate, Explain, Justify, Interpret, Support. Example: “Why is Bach’s Mass in B Minor acknowledged as a classic?”

Many faculty members have found it difficult to apply this six-level taxonomy, and some educators have simplified and collapsed the taxonomy into three general levels (Crooks, 1988): The first category is knowledge (recall or recognition of specific information). The second category combines comprehension and application. The third category is described as “problem solving,” transferring existing knowledge and skills to new situations.

If your course has graduate student instructors (GSIs), involve them in designing exams. At the least, ask your GSIs to read your draft of the exam and comment on it. Better still, involve them in creating the exam. Not only will they have useful suggestions, but their participation in designing an exam will help them grade the exam.

Take precautions to avoid cheating. See “Preventing Academic Dishonesty.”

Types of Tests


Multiple-choice tests. Multiple-choice items can be used to measure both simple knowledge and complex concepts. Since multiple-choice questions can be answered quickly, you can assess students’ mastery of many topics on an hour exam. In addition, the items can be easily and reliably scored. Good multiple-choice questions are difficult to write; see “Multiple-Choice and Matching Tests” for guidance on how to develop and administer this type of test.

True-false tests. Because random guessing will produce the correct answer half the time, true-false tests are less reliable than other types of exams. However, these items are appropriate for occasional use. Some faculty who use true-false questions add an “explain” column in which students write one or two sentences justifying their response.

Matching tests. The matching format is an effective way to test students’ recognition of the relationships between words and definitions, events and dates, categories and examples, and so on. See “Multiple-Choice and Matching Tests” for suggestions about developing this type of test.

Essay tests. Essay tests enable you to judge students’ abilities to organize, integrate, interpret material, and express themselves in their own words. Research indicates that students study more efficiently for essay-type examinations than for selection (multiple-choice) tests: students preparing for essay tests focus on broad issues, general concepts, and interrelationships rather than on specific details, and this studying results in somewhat better student performance regardless of the type of exam they are given (McKeachie, 1986). Essay tests also give you an opportunity to comment on students’ progress, the quality of their thinking, the depth of their understanding, and the difficulties they may be having. However, because essay tests pose only a few questions, their content validity may be low. In addition, the reliability of essay tests is compromised by subjectivity or inconsistencies in grading. For specific advice, see “Short-Answer and Essay Tests.” (Sources: Ericksen, 1969, McKeachie, 1986)

A variation of an essay test asks students to correct mock answers. One faculty member prepares a test that requires students to correct, expand, or refute mock essays. Two weeks before the exam date, he distributes ten to twelve essay questions, which he discusses with students in class. For the actual exam, he selects four of the questions and prepares well-written but intellectually flawed answers for the students to edit, correct, expand, and refute. The mock essays contain common misunderstandings, correct but incomplete responses, or absurd notions; in some cases the answer has only one or two flaws. He reports that students seem to enjoy this type of test more than traditional examinations.

Short-answer tests. Depending on your objectives, short-answer questions can call for one or two sentences or a long paragraph. Short-answer tests are easier to write, though they take longer to score, than multiple-choice tests.

They also give you some opportunity to see how well students can express their thoughts, though they are not as useful as longer essay responses for this purpose. See “Short-Answer and Essay Tests” for detailed guidelines.

Problem sets. In courses in mathematics and the sciences, your tests can include problem sets. As a rule of thumb, allow students ten minutes to solve a problem you can do in two minutes. See “Homework: Problem Sets” for advice on creating and grading problem sets.

Oral exams. Though common at the graduate level, oral exams are rarely used for undergraduates except in foreign language classes. In other classes they are usually time-consuming, too anxiety provoking for students, and difficult to score unless the instructor tape-records the answers. However, a math professor has experimented with individual thirty-minute oral tests in a small seminar class. Students receive the questions in advance and are allowed to drop one of their choosing. During the oral exam, the professor probes students’ level of understanding of the theory and principles behind the theorems. He reports that about eight students per day can be tested.

Performance tests. Performance tests ask students to demonstrate proficiency in conducting an experiment, executing a series of steps in a reasonable amount of time, following instructions, creating drawings, manipulating materials or equipment, or reacting to real or simulated situations. Performance tests can be administered individually or in groups. They are seldom used in colleges and universities because they are logistically difficult to set up, hard to score, and the content of most courses does not necessarily lend itself to this type of testing. However, performance tests can be useful in classes that require students to demonstrate their skills (for example, health fields, the sciences, education). If you use performance tests, Anderson (1987, p. 43) recommends that you do the following (I have slightly modified her list):

  • Specify the criteria to be used for rating or scoring (for example, the level of accuracy in performing the steps in sequence or completing the task within a specified time limit).
  • State the problem so that students know exactly what they are supposed to do (if possible, conditions of a performance test should mirror a real-life situation).
  • Give students a chance to perform the task more than once or to perform several task samples.

“Create-a-game” exams. For one midterm, ask students to create either a board game, word game, or trivia game that covers the range of information relevant to your course. Students must include the rules, game board, game pieces, and whatever else is needed to play. For example, students in a history of psychology class created “Freud’s Inner Circle,” in which students move tokens such as small cigars and toilet seats around a board each time they answer a question correctly, and “Psychogories,” a card game in which players select and discard cards until they have a full hand of theoretically compatible psychological theories, beliefs, or assumptions. (Source: Berrenberg and Prosser, 1991)

Alternative Testing Modes


Take-home tests. Take-home tests allow students to work at their own pace with access to books and materials. Take-home tests also permit longer and more involved questions, without sacrificing valuable class time for exams. Problem sets, short answers, and essays are the most appropriate kinds of take-home exams. Be wary, though, of designing a take-home exam that is too difficult or an exam that does not include limits on the number of words or time spent (Jedrey, 1984). Also, be sure to give students explicit instructions on what they can and cannot do: for example, are they allowed to talk to other students about their answers? A variation of a take-home test is to give the topics in advance but ask the students to write their answers in class. Some faculty hand out ten or twelve questions the week before an exam and announce that three of those questions will appear on the exam.

Open-book tests. Open-book tests simulate the situations professionals face every day, when they use resources to solve problems, prepare reports, or write memos. Open-book tests tend to be inappropriate in introductory courses in which facts must be learned or skills thoroughly mastered if the student is to progress to more complicated concepts and techniques in advanced courses. On an open-book test, students who are lacking basic knowledge may waste too much of their time consulting their references rather than writing. Open-book tests appear to reduce stress (Boniface, 1985; Liska and Simonson, 1991), but research shows that students do not necessarily perform significantly better on open-book tests (Clift and Imrie, 1981; Crooks, 1988). Further, open-book tests seem to reduce students’ motivation to study. A compromise between open- and closed-book testing is to let students bring an index card or one page of notes to the exam or to distribute appropriate reference material such as equations or formulas as part of the test.

Group exams. Some faculty have successfully experimented with group exams, either in class or as take-home projects. Faculty report that groups outperform individuals and that students respond positively to group exams (Geiger, 1991; Hendrickson, 1990; Keyworth, 1989; Toppins 1989). For example, for a fifty-minute in-class exam, use a multiple-choice test of about twenty to twenty-five items. For the first test, the groups can be randomly divided. Groups of three to five students seem to work best. For subsequent tests, you may want to assign students to groups in ways that minimize differences between group scores and balance talkative and quiet students. Or you might want to group students who are performing at or near the same level (based on students’ performance on individual tests). Some faculty have students complete the test individually before meeting as a group. Others just let the groups discuss the test, item by item. In the first case, if the group score is higher than the individual score of any member, bonus points are added to each individual’s score. In the second case, each student receives the score of the group. Faculty who use group exams offer the following tips:

  • Ask students to discuss each question fully and weigh the merits of each answer rather than simply vote on an answer.
  • If you assign problems, have each student work a problem and then compare results.
  • If you want students to take the exam individually first, consider devoting two class periods to tests; one for individual work and the other for group.
  • Show students the distribution of their scores as individuals and as groups; in most cases group scores will be higher than any single individual score.
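The bonus-point variant described above (members whose individual score falls below the group score earn bonus points) can be sketched as follows; the function name and the size of the bonus are assumptions for illustration:

```python
def final_scores(individual_scores, group_score, bonus=2):
    """Add a small bonus (assumed size) to each student's individual
    score when the group outperformed that student individually."""
    return {
        student: score + (bonus if group_score > score else 0)
        for student, score in individual_scores.items()
    }

# Example: the group scored 21; "ana" earns the bonus, "ben" does not.
scores = final_scores({"ana": 18, "ben": 22}, group_score=21)
```

In the second scheme mentioned in the text, every member would simply receive the group score instead.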

A variation of this idea is to have students first work on an exam in groups outside of class. Students then complete the exam individually during class time and receive their own score. Some portion of the test items are derived from the group exam. The rest are new questions. Or let students know in advance you will be asking them to justify a few of their responses; this will keep students from blithely relying on their work group for all the answers. (Sources: Geiger, 1991; Hendrickson, 1990; Keyworth, 1989; Murray, 1990; Toppins, 1989)

Paired testing. For paired exams, pairs of students work on a single essay exam, and the two students turn in one paper. Some students may be reluctant to share a grade, but good students will most likely earn the same grade they would have working alone. Pairs can be self-selected or assigned. For example, pairing a student who is doing well in the course with one not doing well allows for some peer teaching. A variation is to have students work in teams but submit individual answer sheets. (Source: Murray, 1990)

Portfolios. A portfolio is not a specific test but rather a cumulative collection of a student’s work. Students decide what examples to include that characterize their growth and accomplishment over the term. While most common in composition classes, portfolios are beginning to be used in other disciplines to provide a fuller picture of students’ achievements. A student’s portfolio might include sample papers (first drafts and revisions), journal entries, essay exams, and other work representative of the student’s progress. You can assign portfolios a letter grade or a pass/not pass. If you do grade portfolios, you will need to establish clear criteria. (Source: Jacobs and Chase, 1992)

Construction of Effective Exams


Prepare new exams each time you teach a course. Though it is time-consuming to develop tests, a past exam may not reflect changes in how you have presented the material or which topics you have emphasized in the course. If you do write a new exam, you can make copies of the old exam available to students.

Make up test items throughout the term. Don’t wait until a week or so before the exam. One way to make sure the exam reflects the topics emphasized in the course is to write test questions at the end of each class session and place them on index cards or computer files for later sorting. Software that allows you to create test banks of items and generate exams from the pool is now available.
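Drawing an exam from a pool of banked items, as described above, can be as simple as a random sample. This is a minimal sketch under that assumption, not a reference to any particular test-bank product:

```python
import random

# Hypothetical bank of questions written during the term,
# tagged by the week of the class session that produced them.
bank = [
    {"week": 1, "q": "Define validity."},
    {"week": 2, "q": "Distinguish a test from a quiz."},
    {"week": 3, "q": "Give an example of a synthesis question."},
    {"week": 4, "q": "Explain content validity."},
]

rng = random.Random(0)        # seed for a reproducible draw
exam = rng.sample(bank, k=3)  # draw 3 distinct items from the pool
```

Tagging items by week also makes it easy to check that a draft exam samples the whole term rather than only recent sessions.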

Ask students to submit test questions. Faculty who use this technique limit the number of items a student can submit and receive credit for. Here is an example (adapted from Buchanan and Rogers, 1990, p. 72):

You can submit up to two questions per exam. Each question must be typed or legibly printed on a separate 5″ x 8″ card. The correct answer and the source (that is, page of the text, date of lecture, and so on) must be provided for each question. Questions can be of the short-answer, multiple-choice, or essay type.

Students receive a few points of additional credit for each question they submit that is judged appropriate. Not all students will take advantage of this opportunity. You can select or adapt students’ test items for the exam. If you have a large lecture class, tell your students that you might not review all items but will draw randomly from the pool until you have enough questions for the exam. (Sources: Buchanan and Rogers, 1990; Fuhrmann and Grasha, 1983)

Cull items from colleagues’ exams. Ask colleagues at other institutions for copies of their exams. Be careful, though, about using items from tests given by colleagues on your own campus. Some of your students may have previously seen those tests.

Consider making your tests cumulative. Cumulative tests require students to review material they have already studied, thus reinforcing what they have learned. Cumulative tests also give students a chance to integrate and synthesize course content. (Sources: Crooks, 1988; Jacobs and Chase, 1992; Svinicki, 1987)

Prepare clear instructions. Test your instructions by asking a colleague (or one of your graduate student instructors) to read them.

Include a few words of advice and encouragement on the exam. For example, give students advice on how much time to spend on each section or offer a hint at the beginning of an essay question or wish students good luck. (Source: “Exams: Alternative Ideas and Approaches,” 1989)

Put some easy items first. Place several questions all your students can answer near the beginning of the exam. Answering easier questions helps students overcome their nervousness and may help them feel confident that they can succeed on the exam. You can also use the first few questions to identify students in serious academic difficulty. (Source: Savitz, 1985)

Challenge your best students. Some instructors like to include at least one very difficult question — though not a trick question or a trivial one — to challenge the interest of the best students. They place that question at or near the end of the exam.

Try out the timing. No purpose is served by creating a test too long for even well-prepared students to finish and review before turning it in. As a rule of thumb, allow about one-half minute per item for true-false tests, one minute per item for multiple-choice tests, two minutes per short-answer requiring a few sentences, ten or fifteen minutes for a limited essay question, and about thirty minutes for a broader essay question. Allow another five or ten minutes for students to review their work, and factor in time to distribute and collect the tests. Another rule of thumb is to allow students about four times as long as it takes you (or a graduate student instructor) to complete the test. (Source: McKeachie, 1986)
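The timing rule of thumb above reduces to quick arithmetic. This sketch totals the suggested minutes per item type (per-item values taken from the text; the midpoint of the 10-15 minute range is assumed for a limited essay) plus review time:

```python
# Minutes per item, following the rule of thumb in the text.
MINUTES = {
    "true_false": 0.5,
    "multiple_choice": 1.0,
    "short_answer": 2.0,
    "limited_essay": 12.5,  # assumed midpoint of the 10-15 minute range
    "broad_essay": 30.0,
}

def estimated_minutes(counts, review=10):
    """Estimate total exam time: per-item minutes plus review time."""
    return sum(MINUTES[kind] * n for kind, n in counts.items()) + review

# Example: 20 multiple-choice items, 5 short answers, 1 limited essay.
total = estimated_minutes({"multiple_choice": 20, "short_answer": 5,
                           "limited_essay": 1})
```

Here the estimate comes to 52.5 minutes, a comfortable fit for a fifty-minute period only if the review allowance is trimmed, which is exactly the kind of tradeoff this check is meant to surface.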

Give some thought to the layout of the test. Use margins and line spacing that make the test easy to read. If items are worth different numbers of points, indicate the point value next to each item. Group similar types of items, such as all true-false questions, together. Keep in mind that the amount of space you leave for short-answer questions often signifies to the students the length of the answer expected of them. If students are to write on the exam rather than in a blue book, leave space at the top of each page for the student’s name (and section, if appropriate). For courses that have GSIs, identifying each page lets the exams be separated so that each graduate student instructor can grade the same questions on every test paper.

References


Anderson, S. B. “The Role of the Teacher-Made Test in Higher Education.” In D. Bray and M. J. Blecher (eds.), Issues in Student Assessment. New Directions for Community Colleges, no. 59. San Francisco: Jossey-Bass, 1987.

Berrenberg, J. L., and Prosser, A. “The Create-a-Game Exam: A Method to Facilitate Student Interest and Learning.” Teaching of Psychology, 1991, 18(3), 167-169.

Bloom, B. S. (ed.). Taxonomy of Educational Objectives. Vol. 1: Cognitive Domain. New York: McKay, 1956.

Boniface, D. “Candidates’ Use of Notes and Textbooks During an Open Book Examination.” Educational Research, 1985, 27(3), 201-209.

Brown, I. W. “To Learn Is to Teach Is to Create the Final Exam.” College Teaching, 1991, 39(4), 150-153.

Buchanan, R. W., and Rogers, M. “Innovative Assessment in Large Classes.” College Teaching, 1990, 38(2), 69-73.

Clift, J. C., and Imrie, B. W. Assessing Students, Appraising Teaching. New York: Wiley, 1981.

Crooks, T. J. “The Impact of Classroom Evaluation Practices on Students.” Review of Educational Research, 1988, 58(4), 438-481.

Ericksen, S. C. “The Teacher-Made Test.” Memo to the Faculty, no. 35. Ann Arbor: Center for Research on Learning and Teaching, University of Michigan, 1969.

“Exams: Alternative Ideas and Approaches.” Teaching Professor, 1989, 3(8), 3-4.

Fuhrmann, B. S., and Grasha, A. F. A Practical Handbook for College Teachers. Boston: Little, Brown, 1983.

Geiger, T. “Test Partners: A Formula for Success.” Innovation Abstracts, 1991, 13(11). (Newsletter published by College of Education, University of Texas at Austin)

Gronlund, N. E., and Linn, R. Measurement and Evaluation in Teaching. (6th ed.) New York: Macmillan, 1990.

Hendrickson, A. D. “Cooperative Group Test-Taking.” Focus, 1990, 5(2), 6. (Publication of the Office of Educational Development Programs, University of Minnesota)

Jacobs, L. C., and Chase, C. I. Developing and Using Tests Effectively: A Guide for Faculty. San Francisco: Jossey-Bass, 1992.

Jedrey, C. M. “Grading and Evaluation.” In M. M. Gullette (ed.), The Art and Craft of Teaching. Cambridge, Mass.: Harvard University Press, 1984.

Keyworth, D. R. “The Group Exam.” Teaching Professor, 1989, 3(8), 5.

Liska, T., and Simonson, J. “Open-Text and Open-Note Exams.” Teaching Professor, 1991, 5(5), 1-2.

Lowman, J. Mastering the Techniques of Teaching. San Francisco: Jossey-Bass, 1984.

McKeachie, W. J. Teaching Tips. (8th ed.) Lexington, Mass.: Heath, 1986.

Milton, O., Pollio, H. R., and Eison, J. A. Making Sense of College Grades: Why the Grading System Does Not Work and What Can Be Done About It. San Francisco: Jossey-Bass, 1986.

Murray, J. P. “Better Testing for Better Learning.” College Teaching, 1990, 38(4), 148-152.

Savitz, F. “Effects of Easy Examination Questions Placed at the Beginning of Science Multiple-Choice Examinations.” Journal of Instructional Psychology, 1985, 12(1), 6-10.

Svinicki, M. D. “Comprehensive Finals.” Newsletter, 1987, 9(2), 1-2. (Publication of the Center for Teaching Effectiveness, University of Texas at Austin)

Svinicki, M. D., and Woodward, P. J. “Writing Higher-Level Objective Test Items.” In K. G. Lewis (ed.), Taming the Pedagogical Monster. Austin: Center for Teaching Effectiveness, University of Texas, 1982.

Toppins, A. D. “Teaching by Testing: A Group Consensus Approach.” College Teaching, 1989, 37(3), 96-99.

Wergin, J. F. “Basic Issues and Principles in Classroom Assessment.” In J. H. McMillan (ed.), Assessing Students’ Learning. New Directions for Teaching and Learning, no. 34. San Francisco: Jossey-Bass, 1988.

MID-SEMESTER SURVEY

Developed by the Honolulu Community College
Faculty Development Committee
January 2001

THESE QUESTIONS are OPEN-ENDED; you don’t have to answer every one, but if something comes to mind, fill in a response. There is no need to write your name on this survey.

I think it would help me if we did MORE:

The thing I like doing best/is most helpful is:

If there is one thing I could change about this course, it would be:

If there is one thing I would want the instructor to know it would be:

In this class I thought we were going to:

One thing I hope we have time to cover is:

In the last half, the thing I’d like MOST to concentrate on is:

In the last half, the thing I’d like LEAST to concentrate on is:

OTHER COMMENTS:
