Decision Analysis: Calculations


Decision trees are not particularly difficult mathematically. They are primarily a visual presentation of the consequences (and probabilities) of choice. What mathematical sophistication they have comes from combining probabilities to estimate the final outcomes.

When choices are binary (i.e., only two choices at each node), and the same probabilities hold for each repetition of the choice, the final probabilities can be determined by the "binomial expansion" theorem. For example, suppose you are trying to decide how much land to allocate for industrial expansion, and you estimate that 10% of the requests will be for Heavy Industrial and 90% for Light Industrial, and you expect 3 requests for Industrial property. You can use the binomial theorem to estimate the mix of requests you could expect to receive.

The binomial theorem is (q + p)^n = Sum(W * p^x * q^(n-x)), summed over x = 0 to n. The exponent n is the number of repetitions ("trials"). p is the probability of one outcome (say, Light Industry, .90) and q is the probability of the other (Heavy Industry, .10). W is "n choose x," the number of ways x successes can occur in n trials, and is calculated as W = n!/(x!(n-x)!).
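As a minimal sketch of this calculation in Python, using the example's figures:

    from math import comb

    # Worked example: n = 3 requests, P(Light Industrial) = .90, P(Heavy) = .10
    n, p = 3, 0.90
    q = 1 - p

    for x in range(n + 1):            # x = number of Light Industrial requests
        w = comb(n, x)                # W = n!/(x!(n-x)!)
        prob = w * p**x * q**(n - x)  # W * p^x * q^(n-x)
        print(f"{x} Light / {n - x} Heavy: P = {prob:.3f}")

The four probabilities (.001, .027, .243, .729) sum to 1.0, as they must.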

While occasionally the binomial theorem may help derive the probabilities for a branch in a decision tree, generally a whole tree will not be a repetition of the same binary choice. As has already been explained, the joint probabilities are usually derived by the counting rule (the joint probability of two independent events is the product of their probabilities, n * m) if the probabilities are independent, or by determining the conditional probabilities if they are interdependent.
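When the probabilities along a branch are independent, the counting rule is a one-line computation; a minimal sketch, with illustrative figures:

    from math import prod

    # Joint probability of independent events along one branch of the tree:
    # multiply the probabilities encountered on the path.
    def joint_probability(path_probs):
        return prod(path_probs)

    # e.g., two independent chance events with P = .10 and P = .90
    print(joint_probability([0.10, 0.90]))  # 0.09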

In calculating the final outcomes, the analyst works backwards from the final payoffs and selects either the "expected" value of the node (if it is a chance node) or the desired value of the node (if it is a choice node).

The expected value of a chance node is the "average" of the probable outcomes it represents: E(x) = Sum(P[x] * O[x]), the sum of each outcome multiplied by its attendant probability. So, to use the example above, if a Light Industry was worth, say, $100,000 a year in property taxes and a Heavy Industry was worth $500,000, the Expected Value of the Industrial Sector would be:

E = 3 x [(.90 x $100,000) + (.10 x $500,000)] = 3 x $140,000 = $420,000

In other words, while the property tax increase could be as much as $1.5 million or as little as $300,000, the expected value is $420,000. Note, by the way, that in any given event this figure is impossible: the outcomes can only be $300,000, $700,000, $1,100,000, or $1,500,000. But if this experiment were run an indefinitely large number of times, the results would average out to $420,000.
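The arithmetic can be checked by enumerating the four possible totals and the binomial probabilities derived earlier; a minimal sketch:

    # Expected value of the chance node: E(x) = Sum(P[x] * O[x])
    outcomes = {
        300_000:   0.729,  # 3 Light Industrial
        700_000:   0.243,  # 2 Light, 1 Heavy
        1_100_000: 0.027,  # 1 Light, 2 Heavy
        1_500_000: 0.001,  # 3 Heavy
    }
    expected = sum(p * o for o, p in outcomes.items())
    print(expected)  # 420000.0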

While the expected value of a chance node is based on all the possible outcomes, the value at a decision node is limited to the preferred outcome (Stokey & Zeckhauser [1978] discuss this). Probabilities are used at chance nodes because the outcomes are not in the control of the decision-maker. But choice nodes are, and so their value is not the "average" value of the probable outcomes but the value of the option the decision-maker will choose; being rational, that will be the best choice among those available. That branch is assigned a probability of 1.0 and all other branches are assigned probabilities of 0.

At each node, the "best choice" is a selection from the outcomes of all the choices "upstream" or further along the branches of the tree. So, in analysis, one works backwards, starting with the final outcomes for every branch and evaluating the previous node for the outcomes it leads to; from those, one works back to the penultimate nodes and evaluates them; and so on back until one arrives at the root of the tree.
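This backwards pass (often called "rollback") can be sketched in a few lines of Python; the node structure and names here are illustrative assumptions, not taken from the text:

    # Roll a decision tree back from its payoffs to its root:
    # chance nodes take the probability-weighted average of their branches,
    # choice nodes take the best branch (probability 1.0 to it, 0 to the rest).
    def rollback(node):
        kind = node["kind"]
        if kind == "payoff":
            return node["value"]
        if kind == "chance":
            return sum(p * rollback(child) for p, child in node["branches"])
        if kind == "choice":
            return max(rollback(option) for option in node["options"])
        raise ValueError(f"unknown node kind: {kind}")

    # Hypothetical example: a sure $150,000 versus one industrial request.
    tree = {"kind": "choice", "options": [
        {"kind": "payoff", "value": 150_000},
        {"kind": "chance", "branches": [
            (0.90, {"kind": "payoff", "value": 100_000}),  # Light Industrial
            (0.10, {"kind": "payoff", "value": 500_000}),  # Heavy Industrial
        ]},
    ]}
    print(rollback(tree))  # 150000: the sure payoff beats the $140,000 expectation

The chance branch is evaluated first and only then compared at the choice node, which is exactly the back-to-the-root order described above.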

Figure 1

Figure 2

© 1996 A.J.Filipovitch
Revised 2 November 2005