Decision Analysis Definitions
A decision tree is a branching diagram of the logical structure of the decision(s) which must be made to resolve an issue.
It has four elements:
- Choice nodes: A choice node (also called a "decision node") is a branch in the tree which represents a specific decision under the control of the decision maker. Stemming from the decision node are all the possible choices of action which the decision maker may select.
- Chance nodes: A chance node is a branch in the tree which represents a family of possible outcomes which are not under the control (or are only partly under the control) of the decision maker. Some outcomes are assigned to chance nodes because they will be due to choices made by others, but those choices have not yet been made. For example, if you build a civic center, will anybody come? Other outcomes may be a direct result of choices made by the decision maker, but there is insufficient information to know definitively which of several possible outcomes will occur. For example, will the flood wall be high enough to contain runoff for the foreseeable future?
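In code, these two kinds of node might be represented as follows. This is a minimal sketch; the class names and fields are illustrative assumptions, not any standard:

    from dataclasses import dataclass, field

    @dataclass
    class ChoiceNode:
        """A decision under the control of the decision maker."""
        name: str
        # maps each available action to the subtree (or payoff) it leads to
        options: dict = field(default_factory=dict)

    @dataclass
    class ChanceNode:
        """Outcomes not (fully) under the decision maker's control."""
        name: str
        # maps each possible outcome to (probability, subtree or payoff)
        outcomes: dict = field(default_factory=dict)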
- Probabilities: Probabilities are assigned to each possible outcome of a chance node. They represent the uncertainty inherent in the event. Krueckeberg & Silvers (1974) have an excellent discussion of probability in terms of decision trees.
The probability of an event is the number of times it occurs divided by the number of times you looked for it. It is a fraction, and must lie somewhere between 0 and 1. Both 0 and 1 are "certainties"--0 is the certainty that an event won't happen, 1 is the certainty that it will. The sum of all the probabilities at any node must be 1.0 (in other words, it is certain that something has to happen).
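Those two constraints (each probability between 0 and 1, and the probabilities at a node summing to 1.0) can be checked mechanically. A minimal sketch; the function name is an assumption for this example:

    # Sketch: verify that the probabilities attached to a chance node are legal.
    def check_probabilities(probs):
        assert all(0.0 <= p <= 1.0 for p in probs), "each probability must lie between 0 and 1"
        assert abs(sum(probs) - 1.0) < 1e-9, "probabilities at a node must sum to 1.0"

    check_probabilities([0.30, 0.70])   # e.g., "deteriorated" vs. "not deteriorated"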
However, the probability of an event taken on its own is not necessarily its probability when it is conditional on a prior occurrence. The data may show that 30% of the housing in a particular
city is deteriorated (i.e., there is a .30 probability that a randomly
selected house in the city would be judged "deteriorated").
But the data may also show that 90% of the deteriorated housing in the city is in one particular neighborhood (i.e., there is a .90 probability that a house selected at random from neighborhood A would be judged "deteriorated" and--depending on the proportion of the city's housing stock in neighborhood A--perhaps a .10 probability that a house selected at random from any other neighborhood would be judged "deteriorated"). Knowing the conditional probability of an event (its probability given the prior occurrence) is an example of additional information which can change the probabilities of a chance node by adding a new choice node ("select on the basis of neighborhood").
When two events are independent (at least as far as your model has specified the relationships), their joint probability is determined by the "rule of counting"--the probability of m and n is Pm * Pn. For example, if the probability of alcohol abuse and the probability of deteriorated housing are not tied to each other, and if the probability of the first is .5 and the probability of the second is .2, then the probability of finding an alcoholic resident in deteriorated housing should be .1 (.5 * .2 = .1).
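As a sketch of that computation, using the numbers from the example:

    # Independent events: the joint probability is the product of the two.
    p_alcohol = 0.5          # P(resident abuses alcohol)
    p_deteriorated = 0.2     # P(housing is deteriorated)
    print(p_alcohol * p_deteriorated)   # 0.1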
When two events are interdependent (i.e., when one is "conditional" on the other), their joint probability is found by multiplying the probability of the first by "the conditional probability of the second, given the first." Going back to the neighborhood and housing example, the probability of a house being deteriorated, given that it is in neighborhood A, is .9. If the probability of a house being in neighborhood A is, say, .25, then the joint probability of randomly picking a deteriorated house from neighborhood A is .225 (.25 * .90). When the prior event is a certainty (a probability of 0 or 1), the joint probability simply follows the certain arm: the arm with probability 1 passes its conditional probabilities through unchanged, and the arm with probability 0 contributes nothing.
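A matching sketch for the conditional case:

    # Conditional events: joint probability = P(prior) * P(second, given prior).
    p_in_A = 0.25                  # P(house is in neighborhood A)
    p_det_given_A = 0.90           # P(deteriorated, given in neighborhood A)
    print(p_in_A * p_det_given_A)  # 0.225

    # When the prior is a certainty (P = 1.0), the joint probability
    # collapses to the conditional probability itself.
    print(1.0 * p_det_given_A)     # 0.9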
It would take us too far afield to explore here the implications of conditional probability, but there is an Appendix, based on pp. 74-75 in Krueckeberg & Silvers (1974), which describes various relationships between x & y, displayed in chart, tree, and scatterplot form. It is worth your study.
- Payoffs: Payoffs are the value, or the benefit, which results from a choice node. Ultimately, it is the final result of a chain of choices--the size of the jackpot, should you win, or the size of your first paycheck after you graduate from college. Note that "payoff" is not expressed in relative terms. It does not take into account the probability that you will achieve it (i.e., you may have a better chance of being struck by lightning than of winning the Lotto). Usually, it does not even take into account the size of other payoffs (i.e., it is the size of your first paycheck, not the difference between your paycheck and the check you'd have received had you gone to work after high school).
- More sophisticated measures (such as the marginal increase in earning power, or the expected value of a jackpot--the prize weighted by its probability) are the final results of an analysis, rather than an element of the analysis, and are called Outcomes. Outcomes are the weighted results of a chain of chance and choice nodes. They represent not only the payoffs, but also the likelihood that one will achieve those payoffs and the comparison between outcomes.
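Putting the elements together, an outcome is computed by "folding back" the tree: at a chance node, weight each branch's result by its probability; at a choice node, take the best option. A minimal sketch (the example tree, its probabilities and payoffs, and the function name are all invented for the illustration):

    # Sketch: fold back a small decision tree into expected-value outcomes.
    # A payoff is a bare number; a chance node is ("chance", [(p, subtree), ...]);
    # a choice node is ("choice", {action: subtree, ...}).

    def outcome(node):
        if isinstance(node, (int, float)):   # leaf: the payoff itself
            return node
        kind, branches = node
        if kind == "chance":                 # weight each result by its probability
            return sum(p * outcome(sub) for p, sub in branches)
        if kind == "choice":                 # the decision maker takes the best option
            return max(outcome(sub) for sub in branches.values())

    tree = ("choice", {
        "build flood wall": ("chance", [(0.8, 100), (0.2, -50)]),  # 0.8*100 + 0.2*(-50) = 70
        "do nothing":       ("chance", [(0.6,  40), (0.4, -20)]),  # 0.6*40  + 0.4*(-20) = 16
    })
    print(outcome(tree))   # 70.0

The point of the sketch is only that an outcome weights each payoff by its probability before the comparison between options is made.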
© 1996 A.J.Filipovitch
Revised 10 October 96