Avinash K. Dixit & Barry J. Nalebuff, The Art of Strategy
NY: W.W. Norton, 2008
i. Absence of uncertainty
ii. Situation is completely determined (no chance determinants)
iii. Other’s objectives are known (but do not assume they are the same as yours)
i. Need to balance “self-regarding” & “other-regarding” behavior
i. At best, ties its rival
ii. Always comes close to rival strategies
iii. At worst, ends up getting beaten by one defection
iv. “Flawed” strategy
1. mistake “echoes” back & forth
2. will never accept punishment without hitting back—creates “cycles” of reprisals
3. no way to say “enough is enough”
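The “echo” flaw above takes only a few lines to demonstrate: two tit-for-tat players, one accidental defection, and the mistake bounces back and forth indefinitely. A minimal sketch (the setup is invented for illustration, not from the book):

```python
# Tit-for-tat: cooperate first, then copy the opponent's previous move.
def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds, noise_round):
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = tit_for_tat(hist_b)
        b = tit_for_tat(hist_a)
        if r == noise_round:   # a single mistake by player A
            a = "D"
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b

hist_a, hist_b = play(8, noise_round=2)
print("".join(hist_a), "".join(hist_b))  # CCDCDCDC CCCDCDCD
# After round 2 the lone defection echoes: the players alternate
# punishing each other, with no way to say "enough is enough".
```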
i. People will punish “social cheaters”
ii. prospect of punishment increases contributions in the first place
i. detection of cheating (as immediate—and targeted—as possible)
ii. nature of punishment (external or intrinsic)
iii. clarity (boundaries and consequences should be clear from the outset)
iv. certainty (of both reward and punishment)
v. size (“principle of graduation”—let the punishment fit the crime)
vi. repetition (discounting of future benefits, likelihood of continuing relationship)
i. Contributes to clarity among the players
ii. Unnecessary punishment among the players is avoided
iii. But it harms the general public’s interest
Chapter 4 A Beautiful Equilibrium
i. Rule 3: Eliminate dominated strategies from consideration.
ii. Remove all dominated strategies and all “never best” strategies
iii. Look for “mutual best” response cells
iv. Rule 4: Look for an equilibrium, a pair of strategies in which each player’s action is the best response to the other’s.
v. If there is a unique equilibrium of this kind, there are good arguments why all players should choose it.
i. Requires not only choosing the best response based on beliefs about the other players’ actions, but also that those beliefs be correct!
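Rules 3 and 4 can be sketched as a brute-force search for “mutual best” cells in a two-player payoff table. The example matrix is hypothetical (a prisoners’-dilemma-style game), not taken from the book:

```python
# Find Nash equilibria in pure strategies: cells where each player's
# action is a best response to the other's.
# payoffs[row][col] = (row player's payoff, column player's payoff)
payoffs = [
    [(3, 3), (0, 5)],   # a prisoners'-dilemma-style example
    [(5, 0), (1, 1)],
]

def pure_equilibria(payoffs):
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    cells = []
    for r in range(n_rows):
        for c in range(n_cols):
            # Best response checks: no row (column) deviation does better.
            row_best = all(payoffs[r][c][0] >= payoffs[r2][c][0]
                           for r2 in range(n_rows))
            col_best = all(payoffs[r][c][1] >= payoffs[r][c2][1]
                           for c2 in range(n_cols))
            if row_best and col_best:
                cells.append((r, c))
    return cells

print(pure_equilibria(payoffs))  # [(1, 1)]: the mutual-defection cell
```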
i. Minimize the opponent’s maximum payoff (minimax), while
ii. The opponent attempts to maximize his own minimum payoff (maximin)
iii. If both do this, minimax equals maximin (von Neumann-Morgenstern theorem—which is a special case of the Nash equilibrium)
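For a zero-sum game the minimax/maximin claim can be checked directly when the payoff matrix has a saddle point. The matrix below is invented for illustration:

```python
# Payoffs to the row player in a zero-sum game (the column player
# pays these amounts, so it wants them small).
A = [
    [4, 2, 5],
    [3, 1, 6],
]

# Row player guarantees at least the maximin (best of the worst rows);
# column player concedes at most the minimax (worst of the best columns).
maximin = max(min(row) for row in A)
minimax = min(max(A[r][c] for r in range(len(A)))
              for c in range(len(A[0])))

print(maximin, minimax)  # 2 2: a saddle point, the two values coincide
```

Without a saddle point, the two values differ in pure strategies; equality is restored only once mixed strategies are allowed, which is the content of von Neumann's theorem.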
i. Cannot rely on opponent to randomize—you need to use your best mix to keep your opponent using his.
ii. In one-shot games, keep options open as long as possible, and choose at the last possible moment.
i. seize the first-mover status to declare a planned course of action
ii. be credible (when the time comes, you will actually choose it—which implies that it will be your best choice in that situation)
i. “Commitment”—unconditional strategic move that restricts one’s future actions
ii. “Threats & promises”—conditional strategic moves that fix in advance a response rule, even when they require one to act against one’s own interest.
1. “Threat”—response rule that punishes players who fail to act as desired
2. “Promise”—response rule that offers to reward others who act as desired
iii. Deterrence (stop others from doing something that otherwise they would do) or Compellence (cause others to do something they otherwise would not do).
iv. Response rules that do not promise a change in behavior may be
1. Warning—showing a self-interest to carry out a threat
2. Assurance—showing a self-interest to carry out a promise
i. In sequential games, if there is a second-mover advantage, one can benefit by arranging that the opponent moves first.
ii. Other times, goal will be to prevent opponent from making an unconditional commitment (Sun Tzu’s advice)
iii. It is never advantageous to allow others to threaten you
i. Distinction between threat & promise depends on what is called the status quo
1. a compellent threat is like a deterrent promise, with a change of status quo.
2. Cost: A threat can be less costly (especially if it is successful); a promise, if successful, must be fulfilled.
a. Deterrence has no deadline, and is achieved more simply and better by a threat.
b. Compellence has a deadline (and can be defeated by “salami tactics”). One’s purpose is better achieved by a promise.
i. Clarity--To make a threat or promise, one must communicate clearly to the other player what actions will bring what punishment (or reward)
ii. Certainty--The other player must believe the threat or promise
1. This need not deny the presence of risk
2. May be implemented in multiple small steps (to avoid salami tactics).
3. Keep threats at the smallest level needed to keep them effective (“principle of graduation,” remember?)
4. Brinksmanship—start small, but deliberately and gradually raise the size of the threat (risk)
a. Increase the risk of the (same) bad thing happening
b. Most threats carry a risk of error, and therefore an element of brinksmanship
c. Even with the best of care, brinksmanship may fail (the bad thing may actually happen)
i. Write contracts to back up your resolve
1. enforcing party must have some independent incentive to do so
ii. Establish and use reputation
i. Cut off communication (can make action truly irreversible)
ii. Burn bridges behind you
iii. Leave the outcome beyond your control (or even to chance)
iv. Move in small steps
1. lessen the risk from breaking the contract
2. use when large degree of commitment is infeasible
3. to avoid unraveling of trust, there should be no clear final step
i. Develop credibility through teamwork
ii. Employ mandated agents
1. especially useful if other bonds of friendship or social links exist that you are reluctant to break
ii. Reputation (you can neutralize the other’s reputation by keeping it secret)
iii. Communication (one can be unavailable to receive a communication)
iv. Burning bridges
v. Moving in steps (resist in small steps—salami tactics)
vi. Mandated agents (refuse to deal with the agent and demand to speak to the principal)
i. It must be unprofitable when the truth differs from what you want to convey
i. Works because the cost of using the device is less for those you want to attract than for those you want to avoid
ii. Action serves to discriminate between possible types
i. Bureaucratic delay may not be due to inefficiency, but a strategy for coping with information asymmetry.
ii. Benefits in kind can serve as a screening device
iii. “The dog that doesn’t bark”—not sending a signal also sends a signal. (E.g., an opportunity to invest could imply a requirement to coinvest)
iv. Countersignaling—sometimes the most powerful signal you send is that you don’t need to signal (nouveau riche versus old money)
i. Pooling equilibrium (of signaling game)—all types take the same action, rendering the signal uninformative
ii. Separating equilibrium—one type signals and others do not, thus distinguishing the types.
iii. Most actions convey only partial information (they are “semi-separating”). As a result, there is a probabilistic element to the information obtained.
1. You can know the likelihood of a distinction, but not the true character of a single case (probabilities, not payoffs)
2. Bayes’ Rule is used to calculate probabilities.
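The probabilistic updating in a semi-separating setting is one line of Bayes’ Rule. A sketch with invented numbers (30% of candidates are “strong” types; strong types send the costly signal 90% of the time, weak types 20%):

```python
# Bayes' Rule: P(type | signal) = P(signal | type) * P(type) / P(signal).
p_strong = 0.30
p_signal_given_strong = 0.90
p_signal_given_weak = 0.20

p_signal = (p_signal_given_strong * p_strong
            + p_signal_given_weak * (1 - p_strong))
p_strong_given_signal = p_signal_given_strong * p_strong / p_signal

# The signal is informative but not conclusive: a likelihood, not a type.
print(round(p_strong_given_signal, 3))  # 0.659
```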
i. Guarantees can be offered to offset social dynamics and stabilize a mid-range equilibrium
i. Sometimes preferable to consider reforms only as a package rather than a series of small steps
ii. Sometimes it is preferable to fail at a very difficult task—so it can pay to increase your chances of failing in order to reduce the consequences of failure.
i. Everyone makes the same choice, but it is the wrong one
ii. Some make one choice, but the proportion is wrong (not optimal) because spillovers on others were not taken into consideration
iii. Choice may lead to an extreme equilibrium, rather than a more desirable mid-range
i. Choices lead stepwise down a wrong path
ii. Excessive homogeneity (mutually reinforcing expectations)
iii. No equilibrium is found
i. History matters (bandwagon effect)—need a critical mass to change (“tip”) to a better equilibrium
ii. Much of what matters is outside the marketplace (unpriced goods have no invisible hand to guide selfish behavior)
i. Bid until the price exceeds your value, then drop out
ii. Problem is to determine “value”
1. private value—independent of others’ assessment
2. common value—item has same value for all, although each may have a different view as to what that value is.
i. All start as bidders, and drop out as value is passed.
ii. Winning bidder pays the price at which the penultimate bidder dropped out.
iii. Item is sold to person with highest valuation, seller receives payment equal to second highest valuation.
i. All bids are sealed. Winning bidder pays price of second highest bid.
ii. All bidders have a dominant strategy—bid their true valuation
iii. Implications of Vickrey Auctions
1. Even if you change the rules of the game, players will adapt their strategies and precisely offset those changes.
2. Online auctions (“proxy bidding”)
3. “Sniping”—wait until last minute to enter best proxy bid
4. Attempt to gain strategic advantage by withholding information about one’s true valuation
5. Sniping may be explained by people not knowing their own valuation.
iv. “Winner’s curse”—winning a bid and discovering it isn’t worth what one thought
1. Solved by “consequentialism”—look ahead to the consequences of your action, and treat winning as the relevant situation at the time of the initial bid (“Don’t ask a question if you don’t want to hear the answer.”)
2. As a result, your bids will often be rejected because you underestimated the value. But you don’t have to deal with the undervalued good, so it doesn’t matter.
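The dominant-strategy claim for the Vickrey auction can be checked by brute force: against any fixed profile of rival bids, no deviation from truthful bidding ever beats it. A sketch for one hypothetical profile (not a full proof):

```python
# In a second-price auction, the winner pays the highest losing bid.
def utility(value, bid, rival_bids):
    if bid > max(rival_bids):        # win (ties resolved against us, for simplicity)
        return value - max(rival_bids)
    return 0

v = 10
rivals = [3, 8]
# Try every integer deviation from truthful bidding: none does better.
for b in range(0, 21):
    assert utility(v, b, rivals) <= utility(v, v, rivals)
```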
i. Seal bid in envelope. Highest bid wins.
ii. Never bid your valuation (or, worse, more than your valuation)
1. It guarantees that you break even at best
2. Strategy is dominated by bidding something below your valuation (or, in a procurement auction, bidding something above your true cost).
i. Auction starts with high price that declines; first bidder to stop the decline pays that price.
ii. At optimal bid, savings from paying lower bid is no longer worth increased risk of losing the prize
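The shading logic can be made concrete. A standard result, stated here as an assumption rather than derived: with n bidders whose values are independent and uniform on [0, 1], the equilibrium first-price bid is v(n - 1)/n. A simulation sketch comparing honest bidding with equilibrium shading:

```python
import random

random.seed(0)

def expected_profit(my_value, my_bid, n_rivals, trials=20000):
    # Rivals draw uniform values and shade by the equilibrium factor.
    n = n_rivals + 1
    total = 0.0
    for _ in range(trials):
        rival_bids = [random.random() * (n - 1) / n for _ in range(n_rivals)]
        if my_bid > max(rival_bids):
            total += my_value - my_bid   # first price: winner pays own bid
    return total / trials

v = 0.8
honest = expected_profit(v, v, n_rivals=2)          # bid your value: zero profit
shaded = expected_profit(v, v * 2 / 3, n_rivals=2)  # equilibrium shading, n = 3
print(honest, shaded)  # shading wins less often but earns a positive profit
```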
i. Pre-emption game—First person to launch has a chance to own the market, provided they succeed.
1. wait until you are fully ready and miss the opportunity
2. jump in too soon and fail
ii. War of attrition—instead of who goes in first, game is who gives in first
iii. Case study of Spectrum Game
1. opportunity for tacit cooperation
2. combining two games into one creates the opportunity to employ strategies that cut across the two games
3. can employ punishment/cooperation that would otherwise be impossible without explicit collusion
4. Moral: If you don’t like the game, look for the larger game.
i. Talmudic principle of the divided cloth
ii. If BATNAs are not fixed, opens up strategy of influencing the BATNAs.
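The strategy of influencing BATNAs works through a simple split-the-surplus rule: each side gets its BATNA plus half of what agreement adds. A sketch with invented numbers (this is the split-the-difference logic, not the Talmudic rule itself):

```python
# Each party gets its BATNA (best alternative to a negotiated agreement)
# plus half the surplus that the agreement creates.
def split_surplus(pie, batna_a, batna_b):
    surplus = pie - batna_a - batna_b
    return batna_a + surplus / 2, batna_b + surplus / 2

share_a, share_b = split_surplus(pie=100, batna_a=30, batna_b=10)
print(share_a, share_b)  # 60.0 40.0: raising your BATNA shifts the split
```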
i. Different ideas about what constitutes success
ii. Strikes are an example of signaling
iii. Brinksmanship (like a strike) is a strategy for the stronger of two parties—the one that fears a breakdown less.
i. Put all issues into common bargaining pot, exploit differences in relative valuation to achieve outcomes that are better for everyone.
ii. Joining issues together opens possibility of using one bargaining game to generate threats in another
iii. “Virtual strike” can work while both sides are still talking.
iv. A multiround game allows the receiving side to show that the proposer’s theory of its values is incorrect.
v. Rubinstein Bargaining: stepwise negotiation, taking into account the value of time delay.
1. Person making proposal has an advantage, proportional to the degree of impatience of the other party.
2. The lower the cost to waiting, the less the advantage of going first.
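Rubinstein's model has a closed-form solution worth recording: with per-round discount factors d1 (proposer) and d2 (responder), the proposer's equilibrium share of the pie is (1 - d2)/(1 - d1*d2). A quick check of points 1 and 2:

```python
def proposer_share(delta_proposer, delta_responder):
    # Rubinstein alternating offers: proposer's equilibrium share.
    return (1 - delta_responder) / (1 - delta_proposer * delta_responder)

# A more impatient responder (lower delta) means a bigger first-mover advantage.
assert proposer_share(0.9, 0.5) > proposer_share(0.9, 0.9)

# As waiting becomes cheap for both sides, the advantage vanishes.
print(proposer_share(0.99, 0.99))  # just over 0.5: nearly an even split
```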
i. It’s okay to speak the truth when it doesn’t matter.
ii. Particularly problematic when there are many candidates—polls can become self-fulfilling prophecies (“front-runner effect”)
iii. Anyone’s vote affects the outcome only when it creates or breaks a tie.
i. However, only applicable when choice is one-dimensional.
i. Always an element of chance in behavior—if chance element is too large, reward will be only poorly related to effort.
ii. However, chance is highly correlated across workers. Incentives can be based on relative performance. (This also works for suppliers)
i. Quota is all-or-none, so does not capture all of effort; also has problem of setting quota high enough to stretch without being impossible to reach
ii. Linear payment is more robust, and less prone to manipulation, than a proportional scheme
i. Spread of payments (good vs. bad outcomes) determines incentive—wider the spread, more powerful the incentive
ii. Average payment must meet the participation constraint (how much could have been earned in other opportunities)
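In the simplest risk-neutral model, these two constraints pin the contract down exactly. A sketch with invented numbers (effort raises the success probability from 0.5 to 0.8 at a cost of 6, against an outside option of 20):

```python
# Simplest risk-neutral incentive contract (illustrative numbers only).
p_high, p_low = 0.8, 0.5     # success probability with / without effort
effort_cost = 6.0
outside_option = 20.0

# Incentive constraint: spread * (p_high - p_low) >= effort_cost,
# so the smallest effective spread is effort_cost / (p_high - p_low).
min_spread = effort_cost / (p_high - p_low)

# Participation constraint: expected pay minus effort cost equals the
# outside option, i.e. w_bad + p_high * spread - effort_cost = outside.
w_bad = outside_option + effort_cost - p_high * min_spread
w_good = w_bad + min_spread
print(round(w_bad, 2), round(w_good, 2))  # 10.0 30.0
```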
i. If tasks are substitutes for each other, strong incentive to one will hurt the effort in another.
ii. Therefore, both incentives have to be kept weaker than if each task were considered in isolation.
iii. If tasks are complements, a single incentive affects performance in both
i. Many workers—especially in nonprofits, some public sector, and innovative/creative tasks—are intrinsically motivated when performing tasks that improve their self-image and give them a sense of autonomy.
ii. Such tasks need fewer extrinsic motivators (material incentives); adding them can even diminish the intrinsic motivation.
iii. Simple existence of material penalties (lower pay or dismissal for failure) may undermine intrinsic motivation.
iv. Should offer significant financial rewards or none at all; small amounts might lead to worst of all outcomes
v. Overall strength of incentives is inversely proportional to the number of different supervisors
i. Keeping records of individual strings of success/failure
ii. Have several experts working on related projects
© 2009 A.J.Filipovitch
Revised 6 June 2009