9 Super Useful Tips Concerning Company Evaluations

From BrainyCP

If your organization does not have one, now is the perfect time to introduce a program evaluation system.

Why is this an opportune time for your organization to implement an outcomes-management (program evaluation) system?

Performance evaluation systems can be classified along a range of dimensions that capture variations in their structure, content, and process. Among the most important dimensions are the following:

Who/what is evaluated? Do we evaluate the individual, the workgroup, or the division?

Who performs (and has input into) the evaluation? Is it done by each individual's immediate supervisor? By peers, subordinates, or customers? How much input does the individual being evaluated have into the evaluation and into appealing its outcome?

Time frame: short to long. Over what time period are data collected (whether formally and objectively or informally) before evaluations are rendered?

Objective/formulaic versus subjective/impressionistic evaluations. In some cases, performance is measured very objectively, using unambiguous measures of different aspects of performance. For example, a salesperson might be scored on euro sales, new customers developed, and increases in orders from old customers, with each measure placed on some standard scale (e.g., standard deviations from the mean performance of salespeople in the organization) and then weighted 40%, 40%, and 20%, respectively. Alternatively, employees in a facility might be evaluated and rated according to the subjective overall impressions of their immediate superiors.
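The formulaic scheme above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the 40/40/20 weights and the standardization come from the text, while the sample figures and variable names are hypothetical.

```python
# Sketch of the formulaic salesperson score described above.
# Weights (40/40/20) are from the text; all data below are made up.
from statistics import mean, stdev

def z_scores(values):
    """Express each raw value as standard deviations from the group mean."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Raw performance of three (hypothetical) salespeople on the three measures.
sales = [120_000, 95_000, 140_000]      # euro sales
new_customers = [8, 12, 5]              # new customers developed
repeat_growth = [0.10, 0.04, 0.15]      # growth in orders from old customers

weights = (0.40, 0.40, 0.20)

# Standardize each measure, then take the weighted sum per person.
composites = [
    sum(w * z for w, z in zip(weights, person))
    for person in zip(z_scores(sales),
                      z_scores(new_customers),
                      z_scores(repeat_growth))
]
```

Because each measure is standardized against the group, the composite is already a relative score: it says how a salesperson did compared with peers, not in absolute terms.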

When objective or formulaic evaluations are used, there is the further issue of how closely the formula should be tailored to the situation of each individual. At one extreme, every similarly situated individual in the firm (say, every salesperson) is evaluated using the exact same rigid formula. The middle ground includes cases in which individuals are evaluated against their own previous performance; improvements are noted, but the same categories are used for everyone. At the other extreme are systems in which each individual in each period has a specially tailored set of goals and objectives. A prime example is a management-by-objectives scheme, in which each individual takes part in designing his or her own set of objectives.

Relative versus absolute performance. In some instances, employees are evaluated on an absolute scale: for example, sales volume, units produced per week, touchdowns scored, or dollar value of hours billed to clients. In other instances, performance is evaluated on some relative basis, or on a mixture of absolute and relative measures. Typically, the benchmark is the performance of other individuals, either inside or outside the organization, who are presumed to face the same productive environment and constraints and to have similar levels of ability. In other cases, performance is measured relative to the individual's own previous performance.
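The three bases of comparison named above (absolute, relative to peers, relative to one's own past) can be contrasted with a small sketch; the figures are invented purely for illustration.

```python
# Contrast absolute vs. relative evaluation for one (hypothetical) employee.
quarterly_sales = {"Q1": 90_000, "Q2": 99_000}   # the employee's own results
peer_sales_q2 = [80_000, 99_000, 120_000, 105_000]  # comparable peers, Q2

# Absolute: the raw number itself.
absolute = quarterly_sales["Q2"]                 # 99_000

# Relative to peers: rank among similarly situated colleagues (1 = best).
rank = sorted(peer_sales_q2, reverse=True).index(quarterly_sales["Q2"]) + 1

# Relative to own past: proportional improvement over the prior period.
improvement = quarterly_sales["Q2"] / quarterly_sales["Q1"] - 1  # 0.10 = +10%
```

The same raw result reads quite differently on each basis: a 10% improvement over one's own past, but only third place among four peers.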

Forced distribution versus unspecified percentages. When summary categories are used, a forced distribution (so many percent in category 1, so many in category 2, etc.) may be employed, or the percentages may go unspecified. Note that where forced distributions are used, some sort of relative performance evaluation must be going on, even if only implicitly.

Multi-source versus single-source evaluation. In some systems, data are gathered entirely or largely from a single source, such as the person's supervisor. Other evaluation systems gather performance appraisals from many sources (customers, peers, supervisors, and so on), where each source is asked to appraise those facets of performance that the source can reasonably be expected to know about.

Multi-criterion versus single summary statistic. In probably the majority of performance evaluation systems, all the data are ultimately massaged into a single summary rating of overall performance. Many dimensions of performance may enter into this statistic, although the final result is a single number. In certain other systems, there is no attempt to formulate a single statistic. In the middle are systems with a very coarse summary statistic (nearly everyone falls into the same category), accompanied by ratings along many separate dimensions.