Evaluation specialists have proposed a variety of models, and a review of those models offers helpful context for the procedure described in this article.
Bradley’s Effectiveness Model
How can the effectiveness of a planned curriculum be measured and appraised? Bradley’s Curriculum Leadership and Development Handbook (1985) provides ten key indicators that can be used to assess a designed curriculum’s effectiveness. The chart in Exhibit 12.1 is intended to help you determine where your district or school stands on each of the 10 indicators of curriculum effectiveness. In the column provided, mark Yes or No to indicate whether your school or district meets each indicator.
The indicators of effective curriculum development reflect the working characteristics that any complex organization must have in order to be accountable and responsive to its clients. The instrument can also be tailored to a school district of any size, large or small, and can be used to assess a particular curriculum area, such as reading, language arts, or mathematics, or any other chosen content area. Bradley’s effectiveness model draws some support from the models discussed below: Tyler’s objectives-centered model; Stufflebeam’s context, input, process, and product model; Scriven’s goal-free model; Stake’s responsive model; and Eisner’s connoisseurship model.
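Because the instrument reduces to ten Yes/No judgments, the tally is easy to automate. The sketch below is a minimal illustration in Python; the indicator names are hypothetical placeholders, since the actual ten indicators appear in Exhibit 12.1.

```python
# Minimal sketch of a Bradley-style indicator tally.
# The indicator names are hypothetical placeholders; the actual
# ten indicators appear in Bradley (1985), Exhibit 12.1.

responses = {
    "Curriculum is articulated across grade levels": True,   # Yes
    "Instruction is based on the written curriculum": False, # No
    "Assessment results feed back into revision": True,      # Yes
    # ...the remaining indicators would follow the same pattern
}

met = sum(responses.values())
print(f"Indicators met: {met} of {len(responses)}")
for indicator, is_met in responses.items():
    if not is_met:
        print(f"Needs attention: {indicator}")
```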
Tyler’s Objectives-Centered Model
Ralph Tyler’s approach to curriculum evaluation, presented in his 1950 monograph Basic Principles of Curriculum and Instruction, was among the first and is still used in many assessment initiatives. As outlined in that monograph and applied in several large-scale evaluation projects, the Tyler approach moves logically and systematically through a series of related steps:
1. Begin with the behavioral objectives that have been previously determined. These objectives should specify both the content to be learned and the behavior expected of the student: “Show that you are knowledgeable about reliable resources for information on nutrition-related topics.”
2. Identify the situations that will give the learner the opportunity to express the behavior embodied in the objective and that will evoke or encourage that behavior. Thus, to evaluate oral language use, identify situations that evoke oral language.
3. Choose, alter, or create appropriate assessment tools, then ensure that they are valid, reliable, and impartial.
4. Use the instruments to obtain summarized or appraised results.
5. Compare the results obtained from the instruments before and after given periods to estimate the amount of change taking place (see the sketch after this list).
6. Analyze the results to determine the curriculum’s strengths and weaknesses and to identify possible explanations for that particular pattern of strengths and weaknesses.
7. Use the findings to adjust the program as needed. (as cited in Glatthorn, 1987, p. 273)
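Steps 4 and 5 amount to a pre/post comparison for each objective. Here is a minimal sketch of that comparison in Python; the objective names and scores are hypothetical.

```python
# Minimal sketch of Tyler's step 5: compare pre- and post-assessment
# results to estimate the amount of change for each objective.
# Objective names and scores are hypothetical.

pre_scores  = {"nutrition information sources": 42.0, "oral language use": 55.0}
post_scores = {"nutrition information sources": 71.0, "oral language use": 58.0}

for objective, pre in pre_scores.items():
    post = post_scores[objective]
    change = post - pre
    print(f"{objective}: {pre:.0f} -> {post:.0f} (change {change:+.0f})")
    if change < 5:  # arbitrary threshold; flags a possible weakness (step 6)
        print(f"  little or no gain: examine the curriculum for '{objective}'")
```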
The Tyler approach offers several advantages. It is relatively easy to understand and apply. It is rational and systematic. It focuses attention on the curriculum’s strengths and weaknesses rather than solely on student performance, and it emphasizes the importance of a continuing cycle of assessment, analysis, and improvement. As Guba and Lincoln (1981) noted, however, it suffers from several deficiencies.
It does not suggest how the objectives themselves should be evaluated. It provides no standards and makes no recommendations about how standards should be developed. Its emphasis on pre- and post-assessment seems excessive and ignores the need for formative evaluation, and its insistence on the prior statement of objectives may restrict creativity in curriculum development. Likewise, in their book Authentic Assessment: The Key to Unlocking Student Success, Baron and Boschee (1995) stress that “we are encountering fundamental changes in the way we view and conduct assessment in American schools” (p. 1).
Furthermore, “it has been sixty years since we underwent such a thorough and in-depth reevaluation of our assessment methods.”
Stufflebeam’s Context, Input, Process, Product Model
In the late 1960s and early 1970s, the evident flaws of the Tyler model led several evaluation specialists to criticize it and propose alternatives of their own. The most influential proposal came from a Phi Delta Kappa committee chaired by Daniel Stufflebeam (1971). This model appealed to educational leaders because it strongly emphasized gathering evaluative data for decision-making; indeed, the Phi Delta Kappa committee regarded decision-making as the only basis for evaluation.
The Stufflebeam model meets the needs of decision makers by providing a way to generate data relating to four stages of program operation: context evaluation, which helps decision makers set goals and objectives by continuously assessing needs and problems in the context; input evaluation, which helps decision makers choose the best means of achieving those goals; process evaluation, which monitors the processes to ensure they are being carried out as intended and to make necessary modifications; and product evaluation, which compares actual ends with intended ends and leads to decisions about recycling the program.
During each of these four stages, specific steps are taken:
• The kinds of decisions are identified.
• The kinds of data needed to make those decisions are identified.
• Those data are collected.
• The criteria for determining quality are established.
• The data are analyzed on the basis of those criteria.
• The needed information is provided to decision makers. (as cited in Glatthorn, 1987, pp. 273–274)
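The decision-data cycle these steps describe can be made concrete with a small data structure. The sketch below is a minimal illustration in Python; the four stage names come from the model, but the decisions, data, and criteria are hypothetical examples.

```python
from dataclasses import dataclass, field

# Minimal sketch of the CIPP decision-data cycle. The stage names come
# from Stufflebeam's model; the decisions, data, and criteria here are
# hypothetical examples.

@dataclass
class Stage:
    name: str
    decisions: list       # kinds of decisions to be made
    data_needed: list     # kinds of data required for those decisions
    criteria: dict        # quality criteria to apply to the data
    collected: dict = field(default_factory=dict)

    def report(self):
        # Analyze the collected data against the criteria and summarize
        # the result for decision makers.
        met = {k: self.collected.get(k, 0) >= v for k, v in self.criteria.items()}
        return {"stage": self.name, "criteria_met": met}

context = Stage(
    name="context",
    decisions=["Which goals should the program pursue?"],
    data_needed=["needs-assessment survey"],
    criteria={"survey_response_rate": 0.6},
)
context.collected["survey_response_rate"] = 0.72
print(context.report())  # criteria_met shows whether each criterion was satisfied
```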
Several appealing aspects of the context, input, process, and product (CIPP) model have made it popular among people interested in curriculum evaluation. Its focus on decision-making seems right for administrators who are trying to improve curricula. Its attention to the formative elements of evaluation corrects a significant flaw in the Tyler model. Lastly, the committee’s comprehensive forms and guidelines offer users step-by-step instructions. There are, however, a number of significant shortcomings in the CIPP model. Its primary flaw appears to be its failure to acknowledge the complexity of organizational decision-making processes: it implies a level of rationality that is not present in these circumstances and overlooks the political factors that heavily influence such choices.
Scriven’s Goal-Free Model
Michael Scriven (1972) was the first to question the notion that goals or objectives are essential to the evaluation process. After participating in multiple evaluation initiatives where the so-called side effects appeared more important than the original goals, he began to doubt the seemingly arbitrary distinction between intended and unintended outcomes. This discontent led him to develop his goal-free model. In a goal-free evaluation, the evaluator acts as an impartial observer who starts by creating a needs profile for the population that a particular program serves (although Scriven is not quite clear on how this needs profile is to be produced). The evaluator then assesses the program’s actual effects, using mostly qualitative methods.
Scriven’s primary contribution was, of course, to draw administrators’ and evaluators’ attention to the significance of unanticipated effects, an important lesson in education. A mathematics program cannot be declared fully successful if it meets its goal of enhancing computational skills but has the unexpected consequence of diminishing student interest in the subject. Scriven’s focus on qualitative approaches also seems to have been timely, given the growing discontent in the research community with the predominance of quantitative methodology. However, as Scriven points out, goal-free evaluation needs to be used in addition to goal-based assessment, not in place of it. Used alone, it cannot give the decision maker enough information.
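To make the contrast concrete, here is a minimal sketch of a goal-free tabulation in Python: every observed effect is recorded against the needs profile, whether or not it was a stated goal. The effects and magnitudes are entirely hypothetical, echoing the mathematics example above.

```python
# Minimal sketch of a goal-free tabulation: record every observed
# effect, intended or not, and weigh it against the needs profile.
# All effects and magnitudes are hypothetical.

needs_profile = {"computational skill", "sustained interest in mathematics"}

observed_effects = {
    "computational skill": +0.40,                # improved (happens to match a goal)
    "sustained interest in mathematics": -0.25,  # declined (a side effect)
}

for effect, magnitude in observed_effects.items():
    relevance = "relevant to needs" if effect in needs_profile else "outside profile"
    print(f"{effect}: {magnitude:+.2f} ({relevance})")
```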
Some have criticized the goal-free model for not giving clearer instructions for developing and applying it; as a result, it can probably be used only by specialists who do not need explicit guidance in determining needs and effects.
Stake’s Responsive Model
Robert Stake (1975) significantly advanced curriculum evaluation with his development of the responsive model, which relies explicitly on the premise that the concerns of the stakeholders, those for whom the evaluation is conducted, should be given top priority in identifying the evaluation’s issues. He put his argument this way:
I recommend the responsive evaluation approach to emphasize the evaluation issues that are important for each particular program. It is an approach that trades off some measurement precision in order to increase the usefulness of the findings for those in and around the program. An educational evaluation is responsive if it orients itself more toward program activities than program intents, responds to the audience’s information needs, and refers to the various value perspectives present in reporting the program’s success and failure.
Stake recommends an interactive and recursive evaluation process that embodies these steps:
• The evaluator meets with clients, staff, and audiences to gain a sense of their perspectives on and intentions regarding the evaluation.
• The evaluator draws on such discussions and the analysis of any documents to determine the scope of the evaluation project.
• The evaluator observes the program closely to get a sense of its operation and to note any unintended deviations from announced intents.
• The evaluator discovers the stated and real purposes of the project and the concerns that various audiences have about it and the evaluation.
• The evaluator identifies the issues and problems with which the evaluation should be concerned. For each issue and problem, the evaluator develops an evaluation design, specifying the kinds of data needed.
• The evaluator selects the means needed to acquire the data desired. Most often, the means will be human observers or judges.
• The evaluator implements the data-collection procedures.
• The evaluator organizes the information into themes and prepares “portrayals” that communicate in natural ways the thematic reports. The portrayals may involve videotapes, artifacts, case studies, or other “faithful representations.”
• By again being sensitive to the concerns of the stakeholders, the evaluator decides which audiences require which reports and chooses formats most appropriate for given audiences. (as cited in Glatthorn, 1987, pp. 275–276)
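Two of the middle steps, developing an evaluation design for each issue and matching report formats to audiences, can be sketched as simple mappings. The Python sketch below is illustrative only; the issues, means, and formats are hypothetical.

```python
# Minimal sketch of two responsive-model steps: an evaluation design
# per issue (kinds of data and means of collection), and a report
# format per audience. All entries are hypothetical examples.

designs = {
    "Are announced intents reflected in classroom practice?": {
        "data": ["classroom observations", "teacher interviews"],
        "means": "human observers",
    },
    "How do parents perceive the program?": {
        "data": ["parent focus groups"],
        "means": "human judges",
    },
}

report_formats = {
    "school board": "brief case study with portrayals",
    "teaching staff": "thematic report with artifacts",
}

for issue, design in designs.items():
    print(f"{issue} -> {design['means']}: {', '.join(design['data'])}")
for audience, fmt in report_formats.items():
    print(f"{audience} -> {fmt}")
```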
The responsive model’s primary benefit is undoubtedly its sensitivity to clients. By acknowledging their concerns, being mindful of their values, involving them closely throughout the evaluation, and customizing the report format to their requirements, the methodology, if applied properly, should produce assessments that are highly valuable to clients. Another benefit of the responsive approach is its flexibility: after identifying the client’s concerns, the evaluator can select from a number of different methodologies. Its primary flaw appears to be its vulnerability to manipulation by clients, who, in expressing their concerns, may try to deflect attention from weaknesses they would prefer not to have made public.
Eisner’s Connoisseurship Model
Drawing on his experience in aesthetics and art education, Elliot Eisner (1979) developed the “connoisseurship” model, an evaluation approach that emphasizes qualitative appreciation. The Eisner model is based on two fundamental and closely related concepts: connoisseurship and criticism. Connoisseurship, according to Eisner, is the art of appreciation: recognizing and valuing, through perceptual memory and experience, what is significant. It is the capacity both to perceive the specifics of the educational experience and to comprehend how those specifics fit into the overall design of the classroom. Criticism, for Eisner, is the art of disclosing the qualities of a thing that connoisseurship perceives. In such a disclosure, the educational critic is more likely to use language that Eisner calls “nondiscursive”: metaphorical, connotative, and symbolic.
According to Eisner, educational criticism has three aspects. The descriptive aspect attempts to characterize and portray the salient features of school life: its norms, its patterns, its fundamental structure. The interpretive aspect draws on concepts from the social sciences to explore meanings and develop alternative explanations, that is, to explicate social phenomena. The evaluative aspect makes judgments in order to improve the educational processes and states the grounds for the value choices made, so that others are better able to assess and dispute them.
The main contribution of the Eisner model is its sharp departure from the conventional scientific models and its delivery of an alternative conception of evaluation. By drawing on a rich heritage of artistic criticism, it expands the evaluator’s viewpoint and enriches his or her repertoire. Critics have faulted it for lacking methodological rigor, a claim Eisner has attempted to rebut. They have also pointed out that applying the model requires a high level of expertise, a demand the very term connoisseurship seems to suggest, along with a hint of elitism.