Institutional evaluations have been described as "processes which use concepts and methods from the social and behavioral sciences to assess organizations' current practices and find ways to increase their effectiveness and efficiency" (Universalia 1993).
The social science constructs used by IDRC to conceptualize the complex processes of institutional growth and development are "institutional capacity development," "institutional strengthening," and "institutional performance." As discussed in Chapter 1, it is essential for IDRC to learn which areas of an institution to invest in (institutional strengthening/capacity development) and what returns can be expected from these investments (institutional performance).
For IDRC's purposes, institutional assessments should be conducted as learning exercises for both donor and recipient institutions. They should be designed to diagnose areas of need so as to guide capacity building efforts. In the best sense, an evaluation serves as a reforming process, seeking ways to make the institution stronger and better.
A learning model of evaluation goes beyond the summative approach which measures the total impact of an organization's programs, products, and services. IDRC's approach ideally integrates these results with the techniques of formative evaluation, in which evaluators become involved with helping the organization become more effective in meeting its goals. Beyond merely observing and collecting data, IDRC would like to work alongside people in Southern partner institutions, learning with them how best to influence the development and performance of the organization.
To have meaning and credibility for the Southern organization, the process of developing an organizational profile should be conducted in partnership with individuals having intimate, day-to-day knowledge of the institution, particularly those in a position to act on the evaluation results. By evaluating in partnership, the means to understand and strengthen the institution can spring from practical realities and experience. Moreover, those working inside the institution stand to benefit from self-examination. Undergoing assessment can serve as an organizational stimulant.
Institutions are normative structures. They are grounded in societies and thus can hardly be understood outside of their contexts. For this reason there can be no specific blueprint for conducting institutional evaluations, nor any way of knowing ahead of time all of the issues that bear on institutional functioning. And since institutions are socially constructed, complex systems, neither the means nor the ends of the evaluation process can be fully known prior to implementation.
An evaluation methodology that relies on predetermined instrumentation assumes that the social reality of an institution functions independently of the various environments and stakeholder groups, and yet these forces undoubtedly have a formative influence on institutional performance.
Just as IDRC's personnel must go through considerable learning to know how to work with and relate to certain institutions, so IDRC must be supportive of the knowledge development process inherent in conducting each institutional evaluation, for the process as well as the outcomes will likely be in flux. Institutional assessments require experimentation and the continuous correction and adaptation of plans to keep pace with institutional complexity. IDRC's own organizational culture indeed supports such a learning process approach.
There are many good texts on project and program evaluation, not to mention research methodologies and ways to ensure the reliability and validity of data. We do not attempt to duplicate that work here, where we lack the space to do it justice; instead, we have annexed a short bibliography of useful sources. These are important subjects, however, and they form the foundation of sound institutional evaluations. Thus, while we have incorporated fundamental concepts in this text, we suggest that you look more carefully at the background sources.
There is a strong temptation, when engaging in institutional evaluations, to over-generalize the issues ("all organizations should...") or to apply, blanket-style, the latest prescriptions of the day (don't all institutions need programs in "Total Quality Management"?). But by nature, each institution is unique, grounded in a particular history and housing a distinctive culture. Each institution's mission is unlike that of any other institution and is designed to serve complex and unique stakeholder needs. Circumstances and needs evolve continuously; thus institutions are never static entities.
The uniqueness of an institution does not of itself defeat or invalidate generalization. It does, however, necessitate analytical groundwork so that a proper understanding of the mission, culture, and context becomes a lens through which performance is viewed. The ideas and concepts dealt with in each institutional evaluation should flow from and reflect the institution's own ideas and its approach to them: indeed, the institution's own way of knowing about itself.
The various conceptual frameworks in use for evaluating organizations suggest diverse issues to explore in the course of evaluations. While the names of categories or areas differ slightly, many models share similar content, with some more comprehensive than others. At the close of this section we will propose a framework developed specifically by IDRC's Evaluation Unit for profiling organizations. The framework notwithstanding, it is important to reiterate that the issues inherent in each institutional profile must be institution specific, and their examination must be negotiated with key insiders so as to meet the needs of end users. Also, choices of issues must be congruent with the limitations of the evaluators' resources and interests, i.e. examining the whole institution may not be feasible.
For example, measuring the performance of a research institution is a central issue, but little agreement exists as to the meaning of performance or how to measure it. Thus we need to develop the precise meaning of good performance for each institution. Fortunately, there are generally accepted constructs (such as effectiveness and efficiency) that can be used as a basis for determining institutional performance. However, specific criteria cannot be determined a priori but must be negotiated: for example, the relative importance of papers published in peer-reviewed journals, the number of research grants, per-unit costs, client satisfaction, the amount of contractual research conducted for clients, the number of patents produced, the amount of external support garnered, the success of those trained at the institution, and so on. Beyond performance issues, organizational capacity issues are similarly diverse and complex.
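As a purely illustrative aid, the sketch below shows one way such a negotiation might be recorded and applied: a set of agreed criteria, each with a negotiated weight, combined into a composite score. The criteria, weights, and figures are hypothetical and would have to be defined afresh, with the partner institution, in each case.

```python
# Hypothetical illustration: combining negotiated performance criteria.
# Criteria, weights, and values are placeholders agreed by the partners,
# not indicators prescribed by this guide.

# Negotiated relative importance of each criterion (weights sum to 1.0).
weights = {
    "peer_reviewed_papers": 0.30,
    "research_grants_won": 0.20,
    "client_satisfaction": 0.25,
    "contract_research_income": 0.15,
    "trainee_success_rate": 0.10,
}

# Observed values, each rescaled beforehand to a common 0-100 range
# (the rescaling rule is itself a matter for negotiation).
scores = {
    "peer_reviewed_papers": 72,
    "research_grants_won": 55,
    "client_satisfaction": 81,
    "contract_research_income": 40,
    "trainee_success_rate": 65,
}

composite = sum(weights[c] * scores[c] for c in weights)
print(f"Composite performance score: {composite:.1f} / 100")
```

The point of such a sketch is not the arithmetic but the negotiation it makes explicit: the weights encode whose priorities count, and they must be agreed before the data are judged.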
Finally, institutional issues to be explored are subject to shaping by the data that are available. The lack of valid data can be a constraint to evaluation, and making up data deficits can be an expensive process.
Because the concepts and issues under discussion are complex, and because researchers have an inherent interest in questions of research design, design is an important issue. Institutional evaluations lend themselves to many of the most recent advances in methodologies from the social sciences, management, and economics. They are less well served by experimental or quasi-experimental designs. The most useful designs are descriptive and analytic, incorporating elements of historical time-series analysis, case study methodology, and, frequently, comparative analysis. They attempt to foster in-depth understanding based on a solid foundation of descriptive data. The challenge often lies in data interpretation, which can be fruitful only when people believe in the data themselves.
The agents of data collection in the evaluation process are generally (1) peer review, (2) self-study, and (3) external experts. For evaluating research quality, peer review is widely considered the best method. Self-study is a methodology growing in popularity, particularly in the nongovernmental organization (NGO) community. Recent work in Canada using on-site analysis has provided both a method and methodology to support institutional self-study. When both these approaches are augmented by the evaluative expertise of outside consultants, the combination can provide a rigour of design and methodology that strengthens and adds objectivity to the exercise.
Evaluation on the basis of experts' assessments is currently the most common method used by higher education and research centres; however, it is often not the most effective method for assessing a whole institution in all its complexity. Experts are defined as independent and distinguished peers of the same profession, or administrators, who examine an institution or unit with the help of documents (and possibly a prior internal report) and undertake on-site visits. The faults of this approach are that it tends to be overly selective in the issues examined and often ignores what the science of institutional evaluation can contribute. In some fields, accreditation standards and procedures that rely on visiting panels of outside experts provide thorough and valid institutional analyses.
Both quantitative and qualitative data are normally utilized in institutional evaluations, depending on the issues being explored. Sources can be both internal and external to the institution. A combination of qualitative and quantitative data is important, for unless tempered by other measures, quantitative measures considered in isolation can erode confidence in the evaluation process. By weaving qualitative with quantitative information, a deeper understanding of the institution will be achieved.
Certain quantitative indicators currently in vogue are justifiably criticized because they merely skim the surface of performance and are subject to overinterpretation. One example is the practice of counting the number of research papers published as a means of judging output, without considering their influence (as revealed in citation indexes) or their timing or relevance (i.e. the point in the researcher's career or the developmental progress of a new research group).
Quantitative data are important, however. These take many forms, ranging from counts and other descriptive statistics to ratio variables such as measures of unit cost or productivity. All such data should conform to the best available standards of reliability and validity.
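The short sketch below illustrates, with invented numbers, how raw counts might be turned into the kind of ratio variables mentioned above (unit cost, productivity) and how a citation-based ratio could temper a bare publication tally. None of the figures is drawn from an actual institution, and the indicator names are examples only.

```python
# Hypothetical illustration: deriving ratio indicators from raw counts.
# All figures are invented for demonstration purposes.

papers_published = 48        # raw count of papers in the review period
total_citations = 312        # citations to those papers (from a citation index)
researchers = 20             # full-time-equivalent research staff
research_budget = 900_000    # total research expenditure in the period

papers_per_researcher = papers_published / researchers
cost_per_paper = research_budget / papers_published
citations_per_paper = total_citations / papers_published

print(f"Productivity: {papers_per_researcher:.1f} papers per researcher")
print(f"Unit cost:    {cost_per_paper:,.0f} per paper")
print(f"Influence:    {citations_per_paper:.1f} citations per paper")
```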
Qualitative data have many forms and diverse sources. These include observational records of the research setting and its ambience, data from interviews and group discussions, and written data ranging from clients' letters to formal questionnaires and inventories on the organizational culture. These forms of data can be gleaned from individuals inside the institution as well as from peers and clients external to it.
One of the most difficult aspects of an evaluation is making judgments about the data, i.e. whether performance is "good." In general, the organization must decide what types of performance should be measured and what standards are acceptable in its environment. Investors must ultimately decide whether or not the levels of performance that exist (or are potential) are worth the level of investment.
Since there are at least two main institutional interests involved in the institutional evaluation process (IDRC's and the organization's) and possibly others, the probability exists that many interpretations could arise from the same data. Therefore, it is important to take these potential differences of interpretation into account at the design stage.
In general, judgments about data are made by using four main decision-making tools: (1) benchmarking (using best practices to compare data), (2) reliance on experts' opinions, (3) criterion measures (deviation from specific, stated goals and objectives), and (4) measurement of statistical differences (often with the use of tests of statistical significance). Using one or more of these tools, evaluators North and South must interpret the evaluation data collected.
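By way of illustration only, the sketch below applies two of these tools, benchmarking against a comparator and measuring deviation from a stated goal, to a single hypothetical indicator. The indicator, benchmark, and target values are all invented; real judgments would rest on negotiated indicators and far richer data.

```python
# Hypothetical illustration of two decision-making tools:
# (1) benchmarking against a best-practice comparator, and
# (3) a criterion measure (deviation from a stated goal).
# Indicator, benchmark, and target values are all invented.

indicator = "papers per researcher per year"
observed = 2.4
benchmark = 3.0   # level achieved by a comparable, well-regarded institution
target = 2.0      # goal stated in the institution's own workplan

gap_to_benchmark = (observed - benchmark) / benchmark * 100
deviation_from_target = (observed - target) / target * 100

print(f"{indicator}: observed {observed}")
print(f"Gap to benchmark:      {gap_to_benchmark:+.0f}% relative to comparator")
print(f"Deviation from target: {deviation_from_target:+.0f}% relative to stated goal")
```

Either comparison on its own can mislead: the same observed value falls short of the external benchmark while exceeding the institution's own stated goal, which is why interpretation must be negotiated between evaluators North and South.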
It is ultimately the organization's responsibility to accept or reject the analysis and judgments and decide whether to commit to making organizational change. IDRC must interpret and react to the data and the institutional response to the data in light of its own institutional objectives.
Institutional assessments typically generate an array of complex information, all of which potentially contributes to understanding the performance and developmental progress of an organization. Clearly, the data must be contextualized and the limitations of both data and process acknowledged.
Data considered in isolation of context can be misleading. For proper interpretation, many results need to be placed into social, political, economic, and historical perspective and screened through the institutional lens. For instance, new institutions differ from more venerable ones in that their normative structures are not yet integrated into the national, regional, or local cultural systems. Some institutions are local in scope rather than international and should be assessed from this perspective. All institutions, whether local, regional, national, or international, will need to have their stage of development considered (as will subunits within the institution), for given the nature of the research endeavour, it undoubtedly takes time to generate positive results.
The expense of a full-blown institutional evaluation is a major issue. Collecting valid evaluation data entails a comprehensive process that can be difficult, time-consuming, and costly. Without such data, institutions must rely on the perceptions of experts, and the credibility of external people can become a focal issue. A large number of trade-off decisions need to be made by IDRC, the research institution, and other partners in the evaluation. Expectations need to match the scope of the exercise. Trade-off decisions need to be explained if they materially affect the validity or reliability of the data; limitations should be clearly identified.
IDRC's Evaluation Unit has constructed a framework to help IDRC personnel achieve greater understanding of organizations funded by the Centre. Following this approach will help clarify important issues and guide the collection of data that will inform decisions about enhancing institutional performance and capacity. In brief, the framework encompasses the following areas, each of which will be discussed in forthcoming chapters:
Key forces in the environment which have a bearing on the institution's performance must be understood. These could include the host country's science/technology policy, the level (or lack) of basic infrastructure services such as electricity and water, or pressing social problems in the country which shape action research. The strategic environment is dealt with in Chapter 3.
Donors are interested in seeing the clear-cut results of their investments. Thus, their natural tendency is to intersect an organization at the level of "performance," made visible through products, programs, and services. But before assessing an institution's outputs, it is first necessary to gain an understanding of institutional motivation: its mission and goals, and insofar as possible, its culture and organizational incentives. These drive performance from within, and a performance assessment must address how well the organization is fulfilling its mission. Institutional motivation is discussed in Chapter 4, in which key concepts and potential indicators for use by IDRC are suggested.
For those wishing to examine the key components of institutional capacity which underlie performance, the complex area of organizational capacity is covered in Chapter 5. Six main areas of institutional capacity are detailed (strategic leadership, human resources, other core resources, program management, process management, and interinstitutional linkages) and components within each of these areas are discussed.
Performance is seen in the visible outputs of the research institution, namely its research and training products and services. Our framework asserts that performance is a function of the interplay of an institution's unique motivation, its organizational capacity, and forces in the external environment.
Ways to approach performance are discussed in Chapter 6. Guides for conducting selected aspects of institutional evaluation have been described in a series of companion documents derived from this framework. They can help delineate approaches for organizational assessments lasting one to two days as well as for large-scale assessments.
Exhibit 2.1: Framework for assessing research institutions.
For the institutional profiling process to become a learning experience for all parties, it is necessary for the key players to create and agree upon an appropriate model at the outset. Components of the profiling process include creating partnerships, developing terms of reference, utilizing a workplan, participating in data collection and analysis, obtaining evaluation feedback, validating the results, and developing action plans. Each is discussed below.
Partners in an organizational assessment initiated by IDRC are, of course, the Centre and the particular research organization. Additional partners might include other interested donors or granting organizations; in fact, any legitimate participant with a stake in the process, including those who might help fund it.
Each organization is unique, with its own mission to fulfil and its own stakeholders to satisfy. The terms of reference (TORs) of each evaluation will vary according to the situation (including the interests of the partners, above) and should be negotiated at the outset between IDRC and those within the partner institution in a position to effect organizational change.
The TORs describe the broad areas upon which the partners intend to focus, and each evaluation will need to have defined information needs. For example, will the spotlight be solely on performance? What is the time span in which performance will be considered? Will underlying institutional capacity be considered as well? Which areas of capacity? Who is doing what in the course of gathering data, i.e. what tasks fall to external experts and what might be topics for self-study? Finally, what will the budget be for the evaluation effort?
A specific plan should be set in writing, detailing the steps of how the terms of reference will be carried out. The workplan is the point at which partners come to agreement and formalize a contract regarding their working relationship. In the workplan, specific questions are identified, methodologies are settled upon, and values are clarified.
Factors to be negotiated include the specific types of data to be collected within each area and appropriate indicators of performance (which are only suggested in this guide and need to be refined and further developed, as befits each situation). It is essential that all parties agree on fair and legitimate indicators; otherwise, the assessment process will have little credibility or positive potential for reform.
Value judgments will ultimately need to be imposed upon the performance indicators, and these, too, will need to be negotiated. For instance, how much published research constitutes an adequate output? What dollar figures attached to external funds garnered or research contracts are considered healthy?
Once the types of data to be collected are decided upon and delineated in the workplan, concerns typically arise about the complexity of the information and the time and expense it will take to amass and analyze it. Approaches to data collection and analysis are custom-tailored for each institution, based on the type of data available and the financial feasibility of the effort, in accordance with the budget. Much can be done internally, drawing on existing management and administrative practices.
After the profiling process, transmitting the results of the exercise to interested stakeholders (both within the organization and external to it) is an essential step. Employing multiple media to get the message out is generally more successful than relying on people to read the written report. The main issue is to ensure that those who need to learn the results actually hear the feedback. Effective methods to convey information include formal and informal talks and workshops, which can be ongoing during the profiling process.
Once the profiling process is complete, strategies to address the findings can be incorporated within the organization's strategic planning process. Indeed, they may help to inspire it.