Evaluating Professional Development Programs
In an era of standards and accountability in education, professional development for school leaders is more important than ever. Yet, ample resources devoted to the professional development of principals are far from guaranteed. Schools and school districts must make the most of time and money devoted to developing school leaders. Equally important, a consensus has emerged that the primary focus of professional development for principals and teachers should be improvement of student learning. (Collins 2000; Guskey 1999 & 2002)
"Although the symptoms of failure to ensure quality in educational administration are not as visible as the failure of a civil engineer's bridge design that spills hundreds of cars into an icy river, the impact of good leadership on the river of children who flow through each building can be equally dramatic." (Christie)
This section is intended to help your school district "become a better-informed consumer of professional development" (Collins) and to avoid "popular innovations that are more opinion-based than research-based, promoted by people more concerned with 'what sells' than with 'what works.'" (Guskey 2002) Evaluation can help in choosing the right program, adapting it to your local context, and fine-tuning the program once it is in use.
Unfortunately, program evaluation "is not part of the culture of most organizations" and too often program choices "are based on hunches and power struggles and pet interests of key people rather than evaluation data." (Champion) Yet purposeful evaluation before, during, and after a professional development program is initiated is essential for determining whether the program is worthwhile for your district. While many in our schools find the prospect of evaluation overwhelming or better left to experts, much of program evaluation need not be expensive or complicated. (Guskey 2002; Champion)
Three Stages of Evaluation—Before, During, and After
Both formative evaluation, employed prior to and during a professional development experience, and summative evaluation, used afterward to assess results, are valuable. (Champion; Collins) Formative evaluation is "ongoing" while summative evaluation is "summarizing or culminating." (Champion) Formative evaluation is meant to "assess initial and ongoing project activities" while summative evaluation is used to "assess the quality and impact of a fully implemented project." (NSF)
Before adopting a professional development program, the district needs to know whether the program is geared toward what the district wants to accomplish, whether it fits the local context, and whether the program is of high quality. The district should be clear about its goals so that it can choose a relevant and worthy program and can develop "indicators of successful learning," which should be "outlined before activities begin." (Guskey 2002)
David Collins asserts that the content of a professional development program "should be proven to produce gains in student learning with students similar to those in your school." In fact, a whole host of contextual factors are important, from the level of teacher knowledge to the type of resources and support provided by administration. (Guskey 2002; Collins) Collins stresses the "difference between programs that have been proven effective in well-designed studies and those that 'should be effective' because the authors studied relevant educational research prior to developing their program."
As for program quality, the National Staff Development Council (NSDC) and the Interstate School Leaders Licensure Consortium (ISLLC) have developed standards for high-quality staff development programs. (Joellen Killion of NSDC has authored "Assessing Impact: Evaluating Staff Development," a guide designed to "assist schools, district-level staff development leaders, and other program coordinators to plan and conduct evaluations of their staff development programs." NSDC has offered workshops on this material as well.) Collins (2000) indicates program quality "refers to how the program's design compares with what you know about effective professional development." He offers some questions for assessing program quality:
- Which model of professional development was used to design the program?
- Is this model appropriate for the intended outcomes?
- Are all elements of the model included?
- Does the program's design include inquiry into how learning can be improved?
- Does the program's design have a problem-solving focus?
Though absolute proof of a staff development program's effect is elusive, "you can collect good evidence about whether a professional development program has contributed to specific gains in student learning" especially if "you know what you're looking for before you begin." (Guskey 2002) Again, this calls for clarifying goals and establishing indicators before initiating the program, then monitoring progress on those indicators as you go along.
Because the "purpose of professional development is to produce a desirable or intended change" at least three types of outcomes are important-"1. changes in participants, 2. changes in the organization, and 3. changes in students." (Collins) This boils down to setting up a system that captures evidence suggesting whether or not the program has led to the desired effect (especially regarding student learning).
Champion (2000) tells us that formative evaluation "is never a substitute for summative evaluation" and summative evaluation "requires more formal procedures, sometimes using an experimental design but more often using a descriptive approach."
These evaluation studies usually summarize several kinds of evidence about program outcomes. They inform stakeholders of the extent to which a program achieved its intended outcomes. The data must be able to withstand scrutiny because they are often used to help make high stakes decisions, such as whether to continue, abolish, expand, or re-invent a program by people who are not staff developers. (Champion)
The National Science Foundation suggests six phases that apply to both formative and summative evaluation:
- Develop a conceptual model of the program and identify key evaluation points
- Develop evaluation questions and define measurable outcomes
- Develop an evaluation design
- Collect data
- Analyze data
- Provide information to interested audiences (NSF)
Another Cut—"Working Backwards" From Five Levels of Information
Thomas Guskey offers a compelling process for determining the best fit of a professional development program for one's local context. The approach first requires the "collection and analysis of the five critical levels of information." (Guskey 2002) Each successive level of evaluation is more complex than the level before, and while "success at an early level may be necessary for positive results at the next higher one, it's clearly not sufficient." The levels progress from formative evaluation (especially levels one and two) to summative evaluation (especially levels four and five).
- Participants' Reactions
While often dismissed as unimportant, "measuring participants' initial satisfaction with the experience can help you improve the design and delivery of programs or activities in valid ways." Things to consider at this level are "basic human needs," such as the quality of food and the comfort of the room; whether participants "liked" the experience; whether the materials and presentation "make sense"; and whether presenters seem "knowledgeable and helpful." A brief follow-up questionnaire for participants is commonly used for this purpose.
- Participants' Learning
This level "focuses on measuring the knowledge and skills that participants gained." Measures should be used to "show attainment of specific learning goals." Guskey warns against using merely a "standardized form" and urges instead "that indicators of successful learning" should be designed to fit specific local needs. Evaluation results can help with "improving the content, format, and organization of the program or activities." Participant's learning can be demonstrated in writing, through simulations, "full-scale skill demonstration," or other means.
- Organization Support and Change
Evaluation at this level is meant to determine if "organization policies…undermine implementation efforts" or support them. Questions such as "Did the professional development activities promote changes that were aligned with the mission of the school and district? Were changes at the individual level encouraged and supported at all levels? Were sufficient resources made available, including time for sharing and reflection?" are addressed at this level. Interviews, document reviews, and questionnaires can all be used in this effort.
- Participants' Use of New Knowledge and Skills
Whether or not "new knowledge and skills that participants learned make a difference in their professional practice" is the focus of evaluation at this level. This analysis should be based upon pre-determined "clear indicators of both the degree and the quality of implementation." Questionnaires, interviews, and direct observations (" kept as unobtrusive as possible") are useful. Evaluation should occur after a good amount of time has passed since the professional development session and at multiple time intervals thereafter.
- Student Learning Outcomes
This "bottom line" level of analysis seeks out the effect on student learning from a professional development experience. Evaluations at this level "should always include multiple measures of student learning." Evaluation results must capture not only outcomes related to the specific goals of the professional development effort, but also "important unintended outcomes," be they positive or negative.
The real power of thinking in terms of these levels is engaging them in the planning of professional development. Guskey (2002) describes the process for "working backwards" from "the student learning outcomes that you want to achieve (Level 5)" and through each successive level to "what set of experiences will enable participants to acquire the needed knowledge and skills (Level 1)."
Structuring the Evaluative Process
The W. K. Kellogg Foundation recommends a nine-step process for evaluation in the "W.K. Kellogg Foundation Evaluation Handbook." While not geared specifically to education, its steps are instructive:
- Planning: Preparing for an Evaluation
  - Identifying Stakeholders and Establishing an Evaluation Team
  - Developing Evaluation Questions
  - Budgeting for an Evaluation
  - Selecting an Evaluator
- Implementation: Designing and Conducting an Evaluation
  - Determining Data-Collection Methods
  - Collecting Data
  - Analyzing and Interpreting Data
- Utilization: Communicating Findings and Utilizing Results
  - Communicating Findings and Insights
  - Utilizing the Process and Results of Evaluation
Similarly, in "Achieving Your Vision of Professional Development," Collins offers a ten-step "Self-Directed Change Model" that the professional development team in your district or within a school may wish to consider. Following is an abridged outline of the process:
- Identify practices to be studied
- Identify standards or criteria for judging targeted practices
- Identify methods for collecting information about targeted practices
- Collect information
- Compare real practices with standards or criteria for ideal practices
- Identify priority areas for more in-depth study and professional growth
- Identify the desired outcomes of the professional development activities
- Plan the professional development activities, including follow-up activities, that will address the targeted practices
- Implement the plan; assess and monitor its progress periodically
- Use feedback to determine the extent to which the professional development activities achieved the desired outcomes; continue or modify the activities as necessary, or identify new practices for study
While at first blush it may seem that only step 7 of the Kellogg Foundation process and step 9 of Collins's process, which includes "assess and monitor…progress," have to do with evaluation, a closer look makes clear that evaluation cannot exist, or is of little value, without the other elements of the process. Without assembling the right people, identifying a focus, setting targets, collecting data in a purposeful manner, having a realistic budget, and so on, there is no basis for assessment and monitoring. Note that the final step in each process model embodies the notion of summative evaluation. It is also evident that the steps in such a process depend upon one another. For example, Collins notes that "baseline data describing the need or condition prior to the professional development activities, as described in Step 4, is an important part of assessing the impact of those activities [Step 10]."
In short, whether you look at professional development through the lens of stages, levels, or steps in a process, initiating and maintaining a thoughtful and integrated approach from start to finish is critical to structuring and carrying out professional development evaluation that is useful to practitioners and decision makers alike.