Assessment is a natural component of outcomes-based instruction. Once a department has articulated learning goals for its majors, it can design ways to evaluate whether students have met them. There are many ways of assessing whether students have met learning goals, and the best method is usually the one that fits both the disciplinary context and the learning goals themselves. Several majors at the UW use an experiential capstone that calls on students' knowledge of the discipline, as well as their ability to work in teams, to develop creative solutions to problems (links to be added). Another UW major collects portfolios of student work and evaluates them (links to be added). Some majors require a performance of some kind that demonstrates students' understanding of concepts, methodology, and critical thinking in the field (links to be added).
There is no single way to assess student learning. But student learning outcomes make effective assessment possible, because they give us values, knowledge, abilities, and skills to look for in students' work. Some outcomes are easier to assess than others; in fact, the outcomes that are difficult to measure are often the most important to educators. It is therefore important that departments include all outcomes that matter to them, whether or not they can be easily assessed.
A General Framework for Assessment
The principles of good assessment practice set forth by the American Association for Higher Education provide faculty with an excellent framework for thinking about assessment. These principles are:
1. The assessment of student learning begins with educational values.
2. Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.
3. Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.
4. Assessment requires attention to outcomes but also, and equally, to the experiences that lead to those outcomes.
5. Assessment works best when it is ongoing, not episodic.
6. Assessment fosters wider improvement when representatives from across the educational community are involved.
7. Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.
8. Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change.
9. Through assessment, educators meet responsibilities to students and to the public.
Measuring Outcomes
Most student learning outcomes should be potentially assessable, though some will be much harder to assess than others. Furthermore, we can have confidence in assessing some outcomes with a single measurement; others by their nature will cry out for multiple measures. It is almost inevitable that the most important outcomes are also the hardest to measure and will often require multiple measures.
When developing student learning outcomes (SLOs), one reasonable test of an outcome's value is whether its attainment could be measured. A less reasonable test is whether we can measure it now; being able to measure it now is a bonus. For example, an outcome could be, "Students will be able to function effectively as team members to solve a significant problem in the field." Whether a student can function effectively as a team member is difficult, but not impossible, to assess. However, if the curriculum offers no opportunities for students to demonstrate that ability, the department cannot assess its attainment. In this example, the problem lies in the curriculum, not in the outcome itself. The example also illustrates how specifying outcomes may, by itself, suggest needed changes in the curriculum.
An example of an unmeasurable outcome might be, "The student will appreciate the great art of the world." To make this outcome measurable, faculty would have to delineate more clearly what is meant by appreciation and by the great art of the world. This example leads to an important point. One might assume that faculty conversation about the contextual meaning of these terms would serve only to define student learning outcomes and would have little value beyond that. In fact, the process serves much more to deepen understanding of the discipline and how it might and should affect students. Ending with a potentially measurable outcome may be the least important result.
Ways of Assessing Learning Outcomes - General Considerations
There are essentially two ways to assess learning outcomes. The easier way is to ask students themselves to assess their own learning. When not included as part of their grades, student self-assessment can be effective at both course and program levels. For example, seniors can assess programmatic outcomes in exit surveys, and alumni can be surveyed for the same purpose. Many UW departments already survey or interview seniors as a means of assessing their curriculum and services. In addition to departmental surveys, the Office of Educational Assessment (OEA) conducts annual surveys of seniors and biennial surveys of alumni one, five, and ten years after graduation, though these surveys are probably too general in content to be of much help. However, OEA has produced reports that summarize departments' end-of-program assessment efforts, including Assessment in the Majors (2000). OEA has also compiled a chart of assessment methods used by programs at the UW.
There is value in student perceptions, but they are likely to present an incomplete picture for most outcomes. Students are sometimes not very good judges of their own abilities. In addition, they may not understand what important aspects of a discipline they may be missing. This is particularly evident when the outcome is at a higher cognitive level than the one at which the student is operating.
For these reasons, a complete assessment of outcomes requires the second major class of assessment activity: expert judgement of student products. This is hardly a new idea; indeed, it forms the basis of how we grade courses. At the course level, developing SLOs allows a closer alignment between these outcomes and the basis upon which students are graded.
Assessment of programmatic outcomes adds some interesting new wrinkles. Essentially, two elements are needed: student products that allow the inference of specific competencies (or the lack thereof), and experts who can view these products and assess the competence shown. Student products, whether test responses, papers, or major projects, are best assigned and collected in the context of an actual graded class. Otherwise, problems with student motivation render the data suspect. (Does the student really know this little, or did he/she put no effort into it?)
Two questions will arise. First, do the current curriculum and course assignments provide the needed student products? In other words, are students getting regular opportunities to demonstrate ability in the outcome areas? Often, the type of work needed for these analyses comes from capstone courses that ask students to integrate and use their learning in the major. The need to assess these outcomes may point to a need for different assignments, tests, or coursework.
The second question is, who should judge the extent to which outcomes are being attained? Often the instructors of specific classes are appropriate. In other cases, the perspective of a single faculty member in a single course may be too narrow, and it might be better to bring a faculty committee together to read a sample of student papers, or perhaps a collection of student portfolios reflecting work over several courses or the entire major program. These committees can include members of the community, alumni, and/or faculty from other institutions to broaden the perspective further.
Ways of Assessing Learning Outcomes - Specific Techniques
A number of techniques that might be useful for assessing student learning outcomes are listed below with brief explanations.
Secondary analyses of course papers. The instructor or TA will read student papers in order to assign a grade. If expected course-level student outcomes, the assignment, and the grading criteria are aligned, then this reading will also provide a measure of SLO attainment in the course. Faculty committees can read these same papers to assess the attainment of program-level SLOs. In most cases, this second reading should be done by someone other than the instructor, or by others along with the instructor, because the purpose of the assessment is different. Scoring rubrics for the papers, based on the relevant SLOs, can be developed in advance or as the papers are being read. One such rubric was developed by a statewide group of faculty in service of the Statewide Senior Writing Study. (See Table 1 at the bottom for the Criteria for Effective Writing.)
Secondary analyses of course projects. Products other than papers can also be assessed for attainment of program-level SLOs. For example, if students are required to give oral presentations, other faculty and even area professionals can be invited to these presentations and can serve as outside evaluators.
Capstone courses. Capstone courses provide a wonderful occasion for obtaining data on student learning, simply because the capstone course is where students are most likely to exhibit their cumulative understanding and competence in the discipline. Indeed, the purpose of many capstone courses is just that: providing an opportunity for students to "put it together". Products of capstone courses should be, by their very nature, places where students demonstrate the understandings and abilities articulated in the department's SLOs.
Student portfolios. Having students collect all or some of the work they have done in a major provides a much richer, more well-rounded view of student learning than single documents can. These portfolios are valuable for programmatic assessment, but they are valuable for the student as well. The richness of portfolios is also their Achilles' heel: the amount of data can be overwhelming, and specific ways to review it need to be developed.
Videotapes of performances. In some areas, such as drama or music performance, analysis of videotaped performances may be a useful tool. These videotapes are particularly useful if they include both a student's early and later performances.
Examinations. Many course-level SLOs can be assessed by examinations given within the course. In some cases these SLOs will be identical to those at the programmatic level, and thus the exam questions will cover both. With some creativity, exam questions can also be written to cover broader programmatic SLOs without losing their validity for course grading. In departments without a capstone course, it might be possible to write a coordinated set of exam questions that provide a fuller picture when administered across courses.
Standardized and certification exams. In some disciplines, national standardized or certification exams exist that might be useful. However, these exams will be useful only insofar as they reflect the department's SLOs. If, for example, an important goal is to prepare students for entry into a profession that requires passing a certification exam, then students' performance on such an exam is very relevant. If, on the other hand, a national standardized test does not embody the department's particular goals, no faculty will take its results seriously.
Exit interviews or surveys of seniors. Students' self-assessment of their learning can be valuable for the student and for the program. Feedback should be given anonymously if at all possible.
Surveys of alumni. Alumni have the added perspective of the workplace or further education, and that perspective is well worth tapping.
Surveys of employers. If the program is preparing students for a particular set of jobs, it might be worthwhile to survey employers regarding the students' on-the-job performance. However, it is important to survey those who would have first-hand knowledge of particular students rather than relying on general opinions or stereotypes.
Internship evaluations. If the department has a number of students who are doing relevant internships or other work-based learning, standard evaluations by sponsors may provide data on attainment of SLOs. In addition, when departments exercise control over the content of internships, those settings themselves can serve as capstone experiences where students can demonstrate their learning.
Enduring Problems
This entire discussion has sidestepped some important questions that can be given only superficial treatment here. First, there is the issue of reliability, or consistency: how much can we expect our measures to vary through random factors alone, and how can we avoid drawing conclusions that are largely spurious? Second, there is the issue of validity: how can we know that the interpretations we make of the data deriving from our measures are appropriate? For example, suppose we conclude from reading a paper that a student is unable to apply an important theory to a real-life phenomenon. Could it be that the student did not display the ability because he/she interpreted the assignment as not calling for it? Finally, there is the issue of standards: what level of performance by what percentage of students is adequate? At what point do we decide that there is a problem? What if the solutions to the problems are beyond budgetary means? There are no hard and fast answers to these questions, and no substitute for informed, expert judgement.