The Board Assessment Tools Cheat Sheet

From BrainyCP
Revision as of 16:16, 18 December 2021 by EnidCronan (talk | contribs) (Created page with "If your organization doesn't have one, now will be the perfect period to introduce a Program Evaluation system.<br><br>Why is this the opportune time for your organization to...")


If your organization doesn't have one, now is the perfect time to introduce a Program Evaluation system.

Why is this the opportune time for your organization to implement an outcomes management (Program Evaluation) system?

Performance evaluation systems can be classified along a number of dimensions that capture variations in their structure, content, and process characteristics. Among the most critical dimensions are the following:

Who/what is evaluated? Do we evaluate the individual, the workgroup, or the division?

Who performs (and has input into) the evaluation? Is it produced by each individual's immediate supervisor? By peers, subordinates, or customers? How much input does the person being evaluated have into the evaluation and into appealing the results?

Time frame: short to long. Over what period are data collected (whether formally and objectively or informally) before evaluations are rendered?

Objective/formulaic versus subjective/impressionistic evaluations. In some cases, performance is measured very objectively, using unambiguous measures of different facets of performance. For instance, a salesperson may be scored on euro sales, new customers developed, and increases in orders from existing customers, with each measure placed on some standard scale (e.g., standard deviations from the mean performance of salespeople in the organization) and then weighted 40%, 40%, and 20%, respectively. In contrast, employees in a facility might be evaluated and rated based on the subjective overall impressions of their immediate superiors.
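The salesperson scoring just described can be sketched in a few lines. This is only an illustration: the names, figures, and the choice of population standard deviation are hypothetical, not part of any real system.

```python
from statistics import mean, pstdev

# Hypothetical raw figures for a three-person sales team.
sales = {"Alice": 120_000, "Bob": 95_000, "Cara": 150_000}   # euro sales
new_customers = {"Alice": 8, "Bob": 12, "Cara": 5}           # new customers developed
repeat_growth = {"Alice": 0.10, "Bob": 0.04, "Cara": 0.15}   # growth in repeat orders

def z_scores(values):
    """Express each value as standard deviations from the group mean."""
    mu, sigma = mean(values.values()), pstdev(values.values())
    return {name: (v - mu) / sigma for name, v in values.items()}

# Standardize each measure, then weight them 40% / 40% / 20% as in the example.
zs_sales = z_scores(sales)
zs_new = z_scores(new_customers)
zs_rep = z_scores(repeat_growth)
composite = {name: 0.4 * zs_sales[name] + 0.4 * zs_new[name] + 0.2 * zs_rep[name]
             for name in sales}
```

Because each standardized measure averages to zero across the team, the composite scores do too; the formula ranks people purely relative to the group mean.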

When objective or formulaic evaluations are used, there is the further issue of how closely the formula is tailored to the situation of each individual. At one extreme, every similarly situated individual in the firm (say, every salesperson) is evaluated using the same rigid formula. The middle ground includes cases in which people are evaluated against their own previous performance; improvements are noted, but the same categories are used for every individual. At the other extreme are systems in which each individual in each period has a specially tailored set of goals and objectives. A prime example is management by objectives, in which each individual takes part in designing his or her own set of objectives.

Relative versus absolute performance. In some instances, employees are evaluated on an absolute scale: for example, sales volume, units produced per week, touchdowns scored, or dollar value of hours billed to clients. In other instances, performance is evaluated on some relative basis, or measured on a mix of absolute and relative performance. Sometimes the benchmark is the performance of other individuals, inside or outside the organization, who are presumed to face the same productive environment and constraints and to possess similar capability levels. In other cases, performance is measured relative to the individual's own previous performance.

Forced distribution versus unspecified percentages. When summary categories are used, a forced distribution (so many percent in category 1, so many in category 2, etc.) may be employed, or the percentages may be left unspecified. Note that where forced distributions are used, some sort of relative performance evaluation must be going on, even if only implicitly.
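A forced distribution makes the implicit relative ranking explicit: to fill fixed category quotas, you must first rank everyone. The sketch below assumes a hypothetical 20/70/10 split and made-up scores; the cutoff percentages are an illustration, not a recommendation.

```python
# Hypothetical scores; forced distribution: top 20% -> category 1,
# next 70% -> category 2, bottom 10% -> category 3.
scores = {"Ann": 91, "Ben": 78, "Cy": 85, "Dee": 60, "Eli": 72,
          "Fay": 88, "Gus": 65, "Hal": 70, "Ivy": 95, "Jo": 82}

ranked = sorted(scores, key=scores.get, reverse=True)  # relative ranking is unavoidable
n = len(ranked)
cutoffs = [round(n * 0.20), round(n * 0.90)]  # rank boundaries after 20% and 90%

categories = {}
for i, name in enumerate(ranked):
    if i < cutoffs[0]:
        categories[name] = 1
    elif i < cutoffs[1]:
        categories[name] = 2
    else:
        categories[name] = 3
```

Note that the categories depend only on rank order, never on the absolute scores, which is exactly the point made above: a forced distribution is a relative evaluation in disguise.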

Multi-source versus single-source evaluation. In some systems, data are gathered entirely or largely from a single source, such as the individual's supervisor. Other systems gather performance appraisals from many sources (customers, peers, supervisors, and so on), where each source is asked to appraise those aspects of performance that it can reasonably be expected to know about.
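One simple way to combine multi-source input is to average, per aspect, whichever ratings happen to exist, since not every source sees every aspect. The sources, aspects, and 1-to-5 scale below are hypothetical, and plain averaging is just one of many possible aggregation rules.

```python
# Hypothetical multi-source appraisal: each source rates only what it observes.
ratings = {
    "supervisor": {"quality": 4, "timeliness": 5},
    "peer":       {"teamwork": 4, "quality": 3},
    "customer":   {"responsiveness": 5},
}

# Collect ratings per aspect across whichever sources supplied them.
by_aspect = {}
for source_ratings in ratings.values():
    for aspect, score in source_ratings.items():
        by_aspect.setdefault(aspect, []).append(score)

# Average each aspect over its available sources only.
averages = {aspect: sum(v) / len(v) for aspect, v in by_aspect.items()}
```

Here "quality" averages over two raters while "responsiveness" rests on one, which is the usual trade-off of multi-source systems: broader coverage, uneven evidence per aspect.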

Multi-criterion versus single summary statistic. In the majority of performance evaluation systems, all the data are ultimately massaged into a single summary statistic of overall performance. Many dimensions of performance may enter into this statistic, but the bottom line is a single rating. In other systems, there is no attempt to formulate a single statistic. In the middle are systems with a summary statistic that is very coarse (nearly everyone falls in the same category), accompanied by grades along many dimensions.