Capital Improvement Programming: Definitions


Capital improvements planning is commonly used by both city managers and city planners. As a result, it has accumulated more than the usual number of technical terms, drawn from both domains.

The mathematics of capital improvements planning is fairly simple. It is composed of "weighted ranking," to determine the relative priority of projects, and a simple system of "running totals," which can be used to decide where to cut off funding for each year.

The "running total" process is designed to add the cost of each additional program to the total cost of programs to be funded for each year. The user can try different combinations of programs to get the best use of available funds without going over the budget.

The "weighted ranking" process is equally simple to compute, but it raises serious conceptual issues. The process is based on the interaction of the project's score on each criterion and the weight given to each of the criteria. The ranking of each project is based on the sum of its weighted scores for each of the criteria. The project with the highest score is judged to have the highest priority. Because of the interaction between them, the weight and the score magnify each other. Small differences become larger.

This interaction effect is both the strength and the weakness of the weighted ranking process. It allows the decision maker to take account of criteria which are unequal in their importance. It also allows the decision maker to compare apples and bananas. If one is concerned with fruit, and has a preference for apples over bananas, the technique is quite effective. If one is concerned with color or shape, the technique makes no sense. In other words, the criteria must be qualitatively similar; quantitative differences (how much or how little) are significant only between things that are already basically similar (qualitatively alike). The mathematics of capital improvement programming cannot, however, make such a distinction. If you tell it that apples are worth "2" and bananas are worth "1," you could end up eating kumquats.

Most capital improvements projects are basically similar: they are concerned with allocating resources to build long-lived physical structures. If there is a problem of "noncomparability" (comparing apples and oranges), it is more likely to be at the level of the programs which the capital projects support. On what basis does one compare the need for housing street-people with the need for street improvements in a residential neighborhood? The prudent analyst will recognize that the formal criteria of a capital improvement planning model are secondary to the valuation which comes from the political process. The choice of criteria and the assignment of weights to the criteria only partially reflect this consideration.

Even when the choices are between basically similar projects, caution must still be exercised in using the model. There is still the possibility that projects which are essentially similar could be mis-ranked because of "measurement error." Any numerical value, when used as a measure, represents not a point but a range of values. The value "2," for instance, as the measure of a project's value on one of the capital improvement criteria, represents all values between "1.5" and "2.5." Consider this case: one project might have a true score of 2.5 on a criterion with a weight of 2, and a score of 1.5 on a criterion with a weight of 1. Another project might score 1.5 on the more important criterion and 2.5 on the other. The true total of the first should be 6.5 (2.5*2 + 1.5*1), and the true total of the second should be 5.5 (1.5*2 + 2.5*1); yet because every true value here lies at the edge of the range recorded as "2," the model assigns both projects the same measured total of 6 on these two criteria. This problem will occur no matter how many digits one uses for scoring--there will always be imprecision at the level of the next decimal place. The more criteria the model includes, the greater the likelihood of measurement error.
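
The case can be verified with a short Python sketch; the weights and scores are those of the example just given:

    # The measurement-error example worked out; weights and scores are
    # taken from the illustration above.
    weights = (2, 1)

    def weighted_total(scores):
        return sum(w * s for w, s in zip(weights, scores))

    true_first  = (2.5, 1.5)   # true total: 6.5
    true_second = (1.5, 2.5)   # true total: 5.5
    measured    = (2, 2)       # every true value above is recorded as "2"

    print(weighted_total(true_first))   # 6.5
    print(weighted_total(true_second))  # 5.5
    print(weighted_total(measured))     # 6 -- the model sees the projects as tied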

There are several strategies for dealing with measurement error. One may assume that, when there are many independent measurements involved, the measurement errors will balance each other out, and thus one may ignore the issue. In the physical sciences, the rule of thumb is to report results to the same level of precision as the least precise variable--if one is dealing with single-digit weights and scores, then the final total should be rounded to a single-digit number. This is the most conservative solution, and might result in many projects sharing a tied rank. A compromise solution might be to round off the last digit for the final ranking. Whichever strategy is employed, one should always bear in mind that there is some imprecision built into the model.
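
The conservative rounding rule is easy to apply in code. The sketch below (with hypothetical weighted totals) rounds each total to one significant figure, matching the single-digit precision of the weights and scores that produced it:

    import math

    def one_sig_fig(x):
        # Round x to one significant figure, the precision of the
        # single-digit weights and scores behind it.
        if x == 0:
            return 0
        return round(x, -int(math.floor(math.log10(abs(x)))))

    totals = {"Project A": 14, "Project B": 12, "Project C": 8}  # hypothetical

    for name, total in totals.items():
        print(name, total, "->", one_sig_fig(total))
    # Project A 14 -> 10
    # Project B 12 -> 10   (A and B now share a tied rank)
    # Project C  8 ->  8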



© 1996 A. J. Filipovitch
Revised 11 November 96