In our earlier study of matching markets, we demonstrated that market-clearing prices exist and that the market can adjust its prices over time to reach them. We now turn to a “single-shot” version of this problem: We still have a set of buyers and a set of items for sale, but this time we consider a situation where the sale of the items is being administered by a central entity (“the auctioneer”) as opposed to it happening in a decentralized way in a market.
We are interested in studying mechanisms that such an auctioneer can use to assign items to buyers in a way that maximizes social value (and thus, by Corollary 8.1, also social welfare). (As an aside, in many situations the auctioneer may also be the seller of the items [e.g., in the context of Google selling advertising slots]; in such situations, the auctioneer may instead want to choose a mechanism that maximizes their own revenue rather than social welfare. This leads to a separate question that can be considered in the same framework. We here focus on “ideal” mechanisms adopted by a “benevolent” auctioneer and thus only consider social value maximization.)
To enable the mechanism to find an assignment that maximizes social value, the buyers need to report their valuations to the mechanism. An issue that arises here is that buyers may try to “game the system” and misreport their valuations in order to get a more advantageous assignment. The mechanism must therefore set prices appropriately so as to incentivize the buyers to truthfully report their valuations.
To consider this question, and generalizations of it, let us begin by introducing a general framework for mechanism design.
10.1 The Mechanism Design Model
Consider a scenario where we have:
- a set of $n$ players, denoted $[n] = \{1, \dots, n\}$;
- a set $\Omega$ of possible states (or outcomes);
- for each player $i$, a set $T_i$ of possible types; we let $T = T_1 \times \cdots \times T_n$ denote the set of type profiles; and
- for each player $i$, a valuation function $v_i : T_i \times \Omega \to \mathbb{R}$, where $v_i(t_i, \omega)$ specifies how much player $i$ with type $t_i$ values the state $\omega$.1
We refer to such a tuple $(n, \Omega, T, v)$ as a mechanism design context, or simply a context.
Let us now consider the following process induced by a mechanism $M$ for a context $(n, \Omega, T, v)$:
- Each player $i$, who has some type $t_i \in T_i$, submits a report (i.e., a claimed type) $t'_i \in T_i$ to the mechanism.
- The mechanism $M$, on input the report profile $t' = (t'_1, \dots, t'_n)$, outputs an outcome $(\omega, p)$ consisting of a state $\omega \in \Omega$ and a profile of prices $p = (p_1, \dots, p_n)$.
- Each player $i$ receives utility $u_i = v_i(t_i, \omega) - p_i$.
In other words, player $i$ receives as utility its value of the outcome minus the payment it is charged by the mechanism. This type of utility function is referred to as a quasi-linear utility function; note that it generalizes the utility function used in our study of matching markets.2
To better understand the notion of a context and the above-described process, let us consider some basic examples, starting with the concept of an auction.
First- and Second-price auctions Consider the following setting:
- A single item is for sale, and there are $n$ players (the bidders); the set of states is $\Omega = [n]$, where the state $\omega = i$ denotes that player $i$ receives the item.
- Each player $i$’s type $t_i \in T_i \subseteq \mathbb{R}_{\geq 0}$ is the value they attach to the item.
- The valuation functions are given by $v_i(t_i, \omega) = t_i$ if $\omega = i$, and $v_i(t_i, \omega) = 0$ otherwise.3
We refer to a context as above as a single-item auction context.
Now, consider the following first-price auction mechanism for any such single-item auction context: The mechanism $M$—run by the auctioneer, upon receiving reports (i.e., bids) $t'_1, \dots, t'_n$ from each player—returns $(\omega, p)$, where:
- $\omega$ is the highest bidder (i.e., $\omega \in \arg\max_i t'_i$, with ties broken, say, in favor of the lowest-indexed player); and
- the winner pays their bid and everyone else pays nothing: $p_\omega = t'_\omega$ and $p_j = 0$ for all $j \neq \omega$.
We can similarly define a second-price auction, where the winner is still the highest bidder, but they instead need only pay the second-highest bid; that is,
$$p_\omega = \max_{j \neq \omega} t'_j, \qquad p_j = 0 \text{ for all } j \neq \omega.$$
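To make these two mechanisms concrete, here is a minimal sketch in Python (the function names and the tie-breaking rule, lowest-indexed player wins, are our own illustrative choices):

```python
def first_price_auction(bids):
    """Single-item first-price auction: the highest bidder wins and pays their own bid.
    Ties are broken in favor of the lowest-indexed player."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    prices = [0] * len(bids)
    prices[winner] = bids[winner]
    return winner, prices

def second_price_auction(bids):
    """Single-item second-price auction: the highest bidder wins,
    but pays only the highest bid among the *other* players."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    prices = [0] * len(bids)
    prices[winner] = max(b for i, b in enumerate(bids) if i != winner)
    return winner, prices

bids = [100, 80, 60]
print(first_price_auction(bids))   # (0, [100, 0, 0]): winner pays their own bid
print(second_price_auction(bids))  # (0, [80, 0, 0]): winner pays the second-highest bid
```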
10.2 Goals of Mechanism Design
The goal of mechanism design is, given some context $\Gamma = (n, \Omega, T, v)$, to design a mechanism $M$ for this context that ensures that “rational play” leads to some “desirable” outcome, no matter what types the players have.
Desirability of outcomes As mentioned above, in our study of mechanism design, the notion of desirability will be to maximize social value (but as mentioned, other notions of desirability are also useful—for instance, in the setting of an auction, we may consider the notion of maximizing the seller’s revenue). Given a context $\Gamma = (n, \Omega, T, v)$, a type profile $t \in T$, and a state $\omega \in \Omega$, let the social value be given by
$$SV(t, \omega) = \sum_{i=1}^{n} v_i(t_i, \omega).$$
Given a context $\Gamma$ and a type profile $t$, we say that a state $\omega$ maximizes social value with respect to $\Gamma$ and $t$ if
$$SV(t, \omega) = \max_{\omega' \in \Omega} SV(t, \omega').$$
We additionally say that an outcome $(\omega, p)$ maximizes social value if $\omega$ maximizes social value.
We can also define social welfare as we did in chapter 8 by additionally incorporating the prices paid by the players as well as the profit made by the mechanism operator (e.g., the “seller” in the setting of an auction); by the same argument as in Claim 8.1, these prices will cancel out with the mechanism operator’s profit, and so social welfare will be equal to social value.
Rational play and truthful implementation Let us now turn to defining what we mean by “rational play.” Several interpretations of this are possible. Ideally, we would like to have a mechanism where “rational” players will truthfully report their types. For instance, in an auction, this would mean that players truthfully submit their actual valuations to the auctioneer (and do not attempt to manipulate the mechanism). To formalize this using notions from game theory, we must first view the process as a game: Given a context $\Gamma = (n, \Omega, T, v)$, a type profile $t \in T$, and a mechanism $M$, let $G^{M,\Gamma,t}$ denote the game induced by $M$, $\Gamma$, and $t$, where each player $i$ chooses some report $t'_i \in T_i$ (as their action) and their utility is defined as above, based on the state and prices ultimately chosen by $M$. (More precisely, the action space for each player $i$ is $T_i$, and the utility for player $i$ is defined as $u_i(t') = v_i(t_i, \omega) - p_i$, where $(\omega, p)$ is the output of $M(t')$.)
We now have the following natural notion of what it means for a mechanism to be truthful.
Definition 10.1. A mechanism $M$ is dominant-strategy truthful (DST) for the context $\Gamma$ if, for every type profile $t \in T$ and every player $i$, truthfully reporting $t_i$ is a dominant strategy for player $i$ in $G^{M,\Gamma,t}$.
So, if $M$ is DST, then players are incentivized to report their true type (e.g., for auctions, their true valuation of the object) regardless of what other players choose to do! This is a relatively strong notion of truthful implementation; we may also consider a seemingly weaker notion, which simply requires that the action profile where all players truthfully report their types is a Nash equilibrium:
Definition 10.2. A mechanism $M$ is Nash truthful (NT) for the context $\Gamma$ if, for every type profile $t \in T$, the report profile $t$ (i.e., everyone truthfully reporting their type) is a Nash equilibrium in $G^{M,\Gamma,t}$.
As it turns out, these notions are equivalent:
Claim 10.1. A mechanism $M$ is DST if and only if it is NT.
Proof. If $M$ is DST, then it is clearly NT. Conversely, let us assume for the sake of contradiction that $M$ is NT but not DST; that is, there exist some types $t$, some player $i$, and reports $t'_{-i}$ for the other players, such that $t_i$ is not a best response to $t'_{-i}$ for player $i$ assuming players have the types $t$. We then claim that $t_i$ is not a best response to $t'_{-i}$ for player $i$ assuming players have the types $(t_i, t'_{-i})$—this directly follows from the fact that the utility function of player $i$ only depends on $i$’s valuation and payment and is independent of the other players’ types. It follows that $M$ is not NT, since $i$ wants to deviate from truthful reporting given the types $(t_i, t'_{-i})$, which is a contradiction.
■
Given the notion of DST, we can now define what it means for a mechanism to implement social value maximization.
Definition 10.3. A mechanism $M$ is said to DST-implement social value maximization for the context $\Gamma$ if $M$ is DST for $\Gamma$ and, for every type profile $t \in T$, the outcome $M(t)$ maximizes social value with respect to $\Gamma$ and $t$.
Nash implementation We may also consider a weaker notion of implementation, which we refer to as Nash implementation (in contrast to Nash-truthful implementation). This notion only requires the existence of some social value maximizing PNE; this PNE need not, however, be truthful (as in the notion of Nash-truthful implementation).
Definition 10.4. A mechanism $M$ is said to Nash-implement social value maximization for the context $\Gamma$ if, for every type profile $t \in T$, there exists a PNE $t'$ of $G^{M,\Gamma,t}$ such that $M(t')$ maximizes social value with respect to $\Gamma$ and $t$.
Revisiting the first- and second-price auctions Let us now examine whether the examples of first-price and second-price auctions satisfy our desiderata. Clearly, the simple first-price auction is not truthful; if everyone else values an object at 0, and you value it at 100, you would clearly be better off bidding something less than your actual value (say, 1), thereby saving a lot of money buying the item! Generally, bidding your true value always provides a utility of 0 (since you pay what you bid), so underbidding can never be worse. However, for the second-price auction, we have the following beautiful result.
Theorem 10.1. The second-price auction mechanism DST-implements social value maximization.
Proof. We start by showing that the second-price auction is DST. By Claim 10.1, it actually suffices to show that it is NT, which we now show.
Consider some type (i.e., valuation) profile $t$. We need to show that $t$ is a PNE. Consider some player $i$ with valuation $t_i$, and let $t^*$ be the highest valuation of all the players, excluding $i$; that is,
$$t^* = \max_{j \neq i} t_j.$$
First, note that by bidding $t_i$ (i.e., by bidding truthfully), player $i$ can never end up with negative utility—either the bid is losing (and he gets 0 utility), or he gets the item for the second-highest bid, which is at most $t_i$. We next argue that $i$ cannot improve their utility by either overbidding (bidding above $t_i$) or underbidding (bidding below $t_i$), by considering the following cases:
Case 1: $t_i < t^*$. As long as $i$ bids below $t^*$ (or at $t^*$ while losing the tie-break), his utility remains 0. In contrast, if he bids above $t^*$ (or at $t^*$ while winning the tie-break), his utility becomes negative—he needs to pay $t^*$ for an item that he values at only $t_i < t^*$.
Case 2: $t_i > t^*$. As long as $i$ bids above $t^*$ (or at $t^*$ while winning the tie-break), his utility remains unchanged—he still wins the item and needs to pay $t^*$. In contrast, if $i$ bids below $t^*$ (or at $t^*$ while losing the tie-break), his utility becomes 0, which is worse than the utility $t_i - t^* > 0$ he gets by bidding truthfully.
Case 3: $t_i = t^*$. If $i$ loses the auction his utility is 0, and if he wins, he needs to pay $t^* = t_i$, so again his utility is 0.
Thus, $t$ is a PNE, and so the second-price auction is NT and thus also DST. Finally, by construction, if everyone bids truthfully, then the player (or one of the players, in the case of ties) who values the item the most will win the auction; thus, the auction chooses an outcome that maximizes social value.
■
First-price auctions, while not truthful, still turn out to Nash-implement social value maximization.
Theorem 10.2. The first-price auction mechanism Nash-implements social value maximization.
Proof. Given any type (valuation) profile $t$, let $i^*$ be the player (or, in the case of ties, the player with the smallest identity) with the highest valuation; in other words, $i^*$ is the player that would have won the auction if everyone were to bid truthfully. Let $t^*$ be the “second-highest” valuation; that is,
$$t^* = \max_{j \neq i^*} t_j,$$
and let $S$ be the set of players having valuation $t^*$. We consider two cases:
So, in either case, the resulting bid profile is a PNE. Additionally, in both cases, since the object is assigned to the player who values it the most, the auction also implements social value maximization. (Perhaps surprisingly, notice that in this Nash equilibrium the pricing is essentially identical to that in a second-price auction: the player with the highest valuation either bids the second-highest valuation or just slightly above it.)
■
Nash-implementation vs. DST implementation Notice that in the case of a first-price auction, in order for the players to figure out what the Nash equilibrium strategy actually is, they need to know each other’s valuations! While this may be reasonable to assume in a world where the auction is run repeatedly (among the same set of players), it is hard to believe that this equilibrium would arise in a single-shot auction. Instead, it seems more likely that players would “shade” their bids (i.e., underbid) based on their beliefs about the other players’ valuations. (Formalizing this requires assumptions about the distributions of players’ valuations; we briefly discuss this setting in the notes at the end of the chapter.)
In contrast, if we have a DST mechanism, everyone knows how to bid, independent of their beliefs about the other players’ valuations (and strategies); indeed, this is why DST implementations are so desirable.
10.3 The VCG Mechanism
A natural question following the above discussion is whether we can find a mechanism that implements social value maximization in any context (and not just for single-good auctions). In fact, the Vickrey–Clarke–Groves (VCG) mechanism shows that this is possible; in particular, as we shall later see, we can use this mechanism to design an auction mechanism for matching markets.
Theorem 10.3. For every context $\Gamma$, there exists a mechanism $M$ that DST-implements social value maximization for $\Gamma$.
Proof. Given a context $\Gamma = (n, \Omega, T, v)$, the mechanism $M$—which we refer to as the plain VCG mechanism—on input the reports $t' = (t'_1, \dots, t'_n)$, outputs an outcome $(\omega^*, p)$ such that:
- $\omega^* \in \arg\max_{\omega \in \Omega} \sum_{j} v_j(t'_j, \omega)$ (i.e., $\omega^*$ maximizes social value with respect to the reported types); and
- $p_i = -\sum_{j \neq i} v_j(t'_j, \omega^*)$ (i.e., each player $i$ is paid the total reported value that the other players derive from $\omega^*$).
If every player truthfully reports their type, then by the definition of $\omega^*$, this mechanism will select an outcome that maximizes social value. Let us now argue that $M$ is DST. Consider some player $i$ with type $t_i$; assume the other players submit the reports $t'_{-i}$. Then, if the mechanism selects the outcome $(\omega, p)$, player $i$ obtains utility
$$v_i(t_i, \omega) - p_i = v_i(t_i, \omega) + \sum_{j \neq i} v_j(t'_j, \omega).$$
So, player $i$ should submit a report that makes $M$ choose a state $\omega$ maximizing this expression. But, by submitting the truthful report $t'_i = t_i$, $M$ will, by construction, pick exactly such a state. Thus, truthfully reporting $t_i$ is a dominant strategy for player $i$; hence, $M$ is DST and DST-implements social value maximization.
■
Note that if all players truthfully report their types, then all players get exactly the same utility, namely $\sum_{j} v_j(t_j, \omega^*) = SV(t, \omega^*)$, where $\omega^*$ is the state selected by the mechanism $M$.
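The following sketch (our own illustration; reports are represented simply as dictionaries mapping each state to a reported value) implements the plain VCG mechanism over a finite set of states. The example reports match the airport/train-station example discussed below, with player 1's value for the airport assumed to be 10:

```python
def plain_vcg(states, reports):
    """Plain VCG mechanism over a finite set of states.

    reports[i][w] is player i's reported value for state w.
    Returns the chosen state and the price vector; negative prices
    mean the operator pays the player."""
    n = len(reports)
    # Choose a state maximizing the reported social value.
    best = max(states, key=lambda w: sum(r[w] for r in reports))
    # Each player i is *paid* the total reported value the others derive from the chosen state.
    prices = [-sum(reports[j][best] for j in range(n) if j != i) for i in range(n)]
    return best, prices

# Airport (A) / train station (B) example below; player 1's value for A is assumed to be 10.
reports = [{"A": 10, "B": 0}, {"A": 4, "B": 6}]
print(plain_vcg(["A", "B"], reports))  # ('A', [-4, -10])
```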
Computational efficiency of the plain VCG mechanism Note that if we can efficiently find the socially optimal outcome, then the plain VCG mechanism becomes computationally efficient.
The Clarke pivot rule: paying your “externality” In the plain VCG mechanism described above, the operator has to pay the players, which often is not desirable. We can modify the mechanism so that the players instead need to pay the operator. Specifically, if we shift the price of player $i$ in a manner that is independent of $i$’s report, the resulting mechanism will remain DST, since the price adjustment does not affect $i$’s preference between actions; that is, we now let
$$p_i = h_i(t'_{-i}) - \sum_{j \neq i} v_j(t'_j, \omega^*)$$
for any function $h_i$ that depends only on the other players’ reports $t'_{-i}$. We refer to any such mechanism (obtained by shifting the prices in the plain VCG mechanism by $h_i$) as a VCG mechanism. A particularly useful instance—referred to as the Clarke pivot rule—is specified by letting
$$h_i(t'_{-i}) = \max_{\omega \in \Omega} \sum_{j \neq i} v_j(t'_j, \omega).$$
That is, we are adjusting the prices by the “maximum social value if we were to exclude player $i$ from the game.” (As such, just as for the “plain VCG mechanism,” the VCG mechanism with the Clarke pivot rule is computationally efficient if we can efficiently find outcomes maximizing social value.)
Thus, player $i$ now needs to pay
$$p_i = \max_{\omega \in \Omega} \sum_{j \neq i} v_j(t'_j, \omega) - \sum_{j \neq i} v_j(t'_j, \omega^*).$$
Note that this is always a non-negative amount, since the social value excluding $i$ at the state $\omega^*$ can clearly never be higher than the maximum social value excluding $i$.
A natural interpretation of the VCG mechanism with the Clarke pivot rule is that each player pays for the “harm” in value they cause the other players by being present in the game; this is called the player’s externality. With $i$ present, the other players get a total of $\sum_{j \neq i} v_j(t'_j, \omega^*)$; without $i$ present, they would get $\max_{\omega \in \Omega} \sum_{j \neq i} v_j(t'_j, \omega)$. In other words,
$$p_i = \max_{\omega \in \Omega} SV_{-i}(t', \omega) - SV_{-i}(t', \omega^*),$$
where $SV_{-i}(t', \omega) = \sum_{j \neq i} v_j(t'_j, \omega)$ denotes the social value of all players except $i$, assuming that the player types are $t'$, and where $\omega^*$ is the outcome that maximizes social value for everyone including $i$ (given the player types $t'$).
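Analogously, here is a sketch of the VCG mechanism with the Clarke pivot rule, in the same style as the plain-VCG sketch above (again, representing reports as state-to-value dictionaries is just a convenient assumption):

```python
def vcg_clarke(states, reports):
    """VCG mechanism with the Clarke pivot rule: each player pays their externality.

    reports[i][w] is player i's reported value for state w."""
    n = len(reports)

    def others_value(w, excluded):
        # Reported social value of state w for everyone except `excluded`.
        return sum(reports[j][w] for j in range(n) if j != excluded)

    # State maximizing the reported social value of *all* players.
    best = max(states, key=lambda w: sum(reports[j][w] for j in range(n)))
    prices = []
    for i in range(n):
        best_without_i = max(others_value(w, i) for w in states)   # h_i(t'_{-i})
        prices.append(best_without_i - others_value(best, i))      # i's externality
    return best, prices

# Airport / train station example discussed next: player 1 pays 2, player 2 pays nothing.
reports = [{"A": 10, "B": 0}, {"A": 4, "B": 6}]
print(vcg_clarke(["A", "B"], reports))  # ('A', [2, 0])
```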
Let us look at two brief examples of a VCG mechanism.
Example: Airport or train station A small town needs to decide whether to build an airport or a train station; let $\Omega = \{A, B\}$ ($A$ corresponds to building the airport and $B$ to building the train station). For every inhabitant of the town (i.e., each player) $i$, let the type space $T_i$ be a finite set of functions from $\Omega$ to $\mathbb{N}$, representing the monetary value they have for each outcome, and let the valuation function for each player be $v_i(t_i, \omega) = t_i(\omega)$.
For instance, consider a situation where we only have 2 players, and let $t_1(A) = 10$ (say) and $t_1(B) = 0$ (i.e., player 1 strongly needs an airport but has no need for a train station), and let $t_2(A) = 4$, $t_2(B) = 6$ (i.e., player 2 weakly prefers a train station). Consider applying the VCG mechanism with the Clarke pivot rule to this context, assuming the players have types $(t_1, t_2)$ and truthfully report their types to the mechanism (as proven above, doing so is a dominant strategy). Clearly, the outcome which maximizes social value is $A$ (with a social value of $10 + 4 = 14$ as opposed to $0 + 6 = 6$ for the outcome $B$); thus, the VCG mechanism will pick this outcome assuming players truthfully report their types.
Now, consider the externality of each player. Without player 1 present, $B$ would be the best outcome, and player 2 would receive a value of 6. However, in the outcome $A$, player 2 receives only 4; thus, player 1’s externality is $6 - 4 = 2$. Player 2’s externality is 0, since the outcome is the same regardless of whether player 2 is present or not. Thus, player 1 would be charged a price of 2, whereas player 2 would not be charged anything.
Example: Single-item auctions (recovering the second-price auction) Consider applying the VCG mechanism with the Clarke pivot rule for single-item auction contexts. Clearly, the item will be assigned to the player with the highest bid (as this maximizes social value).
The externality of the winning player $i$ is equal to the second-highest bid: if $i$ were not present, the second-highest bidder would win the item and receive a value equal to their bid, whereas with $i$ present, that bidder gets nothing in value. None of the other players’ value is affected by $i$ showing up or not (they still do not get the item), so the externality of player $i$ is exactly the second-highest bid. (Note that it is important that we consider the value and not the utility of the players when computing the externality!)
No other player has an externality, since their presence or absence does not affect the outcome ($i$ will win either way); so they pay nothing. Hence, the VCG mechanism with the Clarke pivot rule gives us exactly a second-price auction when applied to single-item auction contexts!
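We can sanity-check this reduction with a tiny computation, reusing the `vcg_clarke` sketch from above and identifying each state with the index of the winning player (the bid values are illustrative):

```python
# Single-item auction cast as a context: state w = index of the winning player.
bids = [100, 80, 60]                      # illustrative bids (= reported valuations)
states = list(range(len(bids)))
# reports[i][w] = bids[i] if player i wins in state w, and 0 otherwise.
reports = [{w: (bids[i] if w == i else 0) for w in states} for i in range(len(bids))]

print(vcg_clarke(states, reports))        # (0, [80, 0, 0]): highest bidder wins, pays the second-highest bid
```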
10.4 VCG and Matching Markets
Let us now apply the VCG mechanism to the problem of designing a matching market auction. Recall that a matching market specifies a set of players, a set of objects, and, for each player $i$, a valuation function specifying $i$’s value for each object. To view this as a mechanism design problem, we define a matching market context as follows:
- the states $\Omega$ are the possible matchings (i.e., assignments of objects to players in which each player gets at most one object and each object is assigned to at most one player);
- each player $i$’s type $t_i$ is their valuation function over the objects; and
- $v_i(t_i, \omega)$ is the value that $t_i$ assigns to the object player $i$ receives in the matching $\omega$ (and 0 if $i$ receives no object).
We can now directly use VCG mechanisms on this matching market context to get a matching market auction that DST-implements social value maximization (and hence social welfare maximization). See Figures 10.1 and 10.2 for an illustration of both the plain VCG and VCG with the Clarke pivot rule in the context of matching markets. For concreteness, the VCG mechanism with the Clarke pivot rule proceeds as follows:
- Each player $i$ reports a valuation function (i.e., a value for each object).
- The mechanism selects a matching $\omega^*$ that maximizes social value with respect to the reports.
- Each player $i$ is charged their externality: the maximum social value the other players could obtain if $i$ were excluded, minus the value the other players actually receive under $\omega^*$.
Computational efficiency Notice that to efficiently implement this mechanism, we need a way to efficiently find the matching (i.e., allocation) that maximizes social value. If we think of the reported value of player $i$ for object $y$ as the weight of the edge $(i, y)$ in the bipartite graph over players and objects, this amounts to finding a maximum-weight bipartite matching of this graph. There are various direct ways of doing this; here, we simply remark that a combination of the market-clearing theorems we have already demonstrated shows how to efficiently find a matching that maximizes social value: Theorem 8.2 gives us a polynomial-time algorithm for finding a market equilibrium, which by Theorem 8.1 maximizes social welfare, and thus, by Corollary 8.1, also maximizes social value.
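There are also off-the-shelf routines for this step; for illustration, SciPy's `linear_sum_assignment` (a standard solver for the assignment problem) computes a maximum-weight matching directly. The valuation matrix below is made up for the example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# values[i][j] = player i's (reported) value for object j; the numbers are illustrative.
values = np.array([
    [12, 4, 2],
    [ 8, 7, 6],
    [ 7, 5, 2],
])

# maximize=True asks for the assignment maximizing total value, i.e., social value.
rows, cols = linear_sum_assignment(values, maximize=True)
print([(int(i), int(j)) for i, j in zip(rows, cols)])  # [(0, 0), (1, 2), (2, 1)]
print(int(values[rows, cols].sum()))                   # 23, the maximum social value
```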
Revisiting bundles of identical goods Let us return to the case where we have a matching market in which the items are bundles of identical goods; recall that bundle $j$ contains $m_j$ goods, and we assume without loss of generality that $m_1 \geq m_2 \geq \cdots \geq m_n$. We can again specify this as a mechanism design context:
- the states $\Omega$ are the possible assignments of bundles to players;
- each player $i$’s type $t_i$ is their value for a single good; and
- $v_i(t_i, \omega) = m_j \cdot t_i$ if player $i$ is assigned bundle $j$ in the state $\omega$ (and 0 if $i$ is assigned no bundle).
We will refer to such a context as a market-for-bundles context. Let us again consider the VCG mechanism with the Clarke pivot rule for this context. The mechanism then proceeds as follows: it ranks the players by their reported per-good values (breaking ties, say, in favor of lower-indexed players) and assigns the $j$th-largest bundle to the $j$th-ranked player—this maximizes social value with respect to the reports. To determine the price charged to the $j$th-ranked player, note that only the players ranked below $j$ are affected by $j$’s presence: with player $j$ present, the player ranked $\ell > j$ gets bundle $\ell$, so these players receive a total value of
$$\sum_{\ell > j} m_\ell \, t_{(\ell)},$$
where $t_{(\ell)}$ denotes the $\ell$th-highest reported value. But if the $j$th-ranked player is absent, each of these players gets the next-larger bundle, for a total value of
$$\sum_{\ell > j} m_{\ell - 1} \, t_{(\ell)}.$$
So, the $j$th-ranked player’s externality (and thus the price charged to them) is the difference between these values:
$$p_j = \sum_{\ell > j} \left( m_{\ell - 1} - m_\ell \right) t_{(\ell)}.$$
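Here is a short sketch of this price computation (the bundle sizes and per-good values are illustrative, and players are assumed to be already sorted by their reported values):

```python
def bundle_vcg_prices(bundle_sizes, values):
    """VCG (Clarke pivot) prices in a market for bundles of identical goods.

    bundle_sizes: m_1 >= m_2 >= ... >= m_n; values: t_1 >= t_2 >= ... >= t_n (per good).
    The j-th ranked player gets bundle j and pays their externality:
    the sum over lower-ranked players l of (m_{l-1} - m_l) * t_l."""
    n = len(values)
    return [
        sum((bundle_sizes[l - 1] - bundle_sizes[l]) * values[l] for l in range(j + 1, n))
        for j in range(n)
    ]

sizes = [10, 4, 2]   # bundle sizes (illustrative)
vals = [7, 6, 1]     # per-good values, already sorted in decreasing order (illustrative)
print(bundle_vcg_prices(sizes, vals))  # [38, 2, 0] -- the lowest-ranked player pays nothing
```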
Note that a seemingly curious property of this auction is that the lowest-ranked player gets the smallest bundle for free. This may seem unrealistic, but it turns out not to be a real issue: if the seller also has some value for the items, then, as already discussed in chapter 8, we can add an “extra buyer” representing the seller’s interests (i.e., the extra buyer’s value would be the seller’s value for an item). If the seller’s value is the lowest one, this value will serve as a reserve price, and the lowest-ranked “actual” buyer will always have to pay at least this reserve price.
10.5 Generalized Second-price (GSP) Auctions
While the VCG pricing for matching markets for bundles is relatively simple to compute, the mechanism is still a lot more complicated than the “simple” first- and second-price auctions for the case of single-item auctions. As we have seen, in the case of a single-item auction, VCG (with the Clarke pivot rule) actually reduces to a second-price auction. A natural question is whether we can get a similarly simple mechanism also for multi-item auctions. We restrict our attention to market-for-bundles contexts.
Consider, now, the following natural generalization of the second-price mechanism: assign the $j$th-largest bundle to the $j$th-ranked player (i.e., the player with the $j$th-highest bid), and let them pay the $(j+1)$st-ranked bid per item—that is, the $j$th-ranked player pays
$$p_j = m_j \cdot b_{j+1}$$
if $j < n$, and $p_n = 0$, where $b_\ell$ denotes the $\ell$th-highest bid. This mechanism, which was introduced by Google, is referred to as the generalized second-price (GSP) mechanism.
GSP is not DST Despite the fact that GSP seems like a natural generalization of the second-price auction (which we know is DST for the case of single-item auctions), GSP is not DST when considering multiple bundles! To see this, consider the following example: say we have three bundles of items with $m_1 = 10$, $m_2 = 4$, $m_3 = 2$, and three players with unit values $t_1 = 7$, $t_2 = 6$, $t_3 = 1$. If the players bid truthfully, player 1 gets the largest bundle ($m_1 = 10$) for the second-highest price ($6$ per item), thus earning utility $(7 - 6) \cdot 10 = 10$. But if player 1 were to deviate and report their valuation as, say, $5$, they would get the second-largest bundle ($m_2 = 4$) at the third-highest price ($1$ per item), earning utility $(7 - 1) \cdot 4 = 24$. So in this case, bidding truthfully is not a PNE!
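The following sketch implements GSP and replays this deviation (the numbers match the example above and are, as noted, purely illustrative):

```python
def gsp(bundle_sizes, bids):
    """Generalized second-price auction: rank players by bid; the j-th ranked player
    gets the j-th largest bundle and pays the (j+1)-st ranked bid per item."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    ranked_bids = [bids[i] for i in order]
    assignment, prices = {}, {}
    for rank, player in enumerate(order):
        next_bid = ranked_bids[rank + 1] if rank + 1 < len(order) else 0
        assignment[player] = bundle_sizes[rank]
        prices[player] = bundle_sizes[rank] * next_bid
    return assignment, prices

sizes, values = [10, 4, 2], [7, 6, 1]

def utility_of_player_1(bid):
    # Player 1 (index 0) bids `bid`; the others bid truthfully.
    assignment, prices = gsp(sizes, [bid, values[1], values[2]])
    return values[0] * assignment[0] - prices[0]

print(utility_of_player_1(7))  # truthful bid: (7 - 6) * 10 = 10
print(utility_of_player_1(5))  # deviation:    (7 - 1) * 4  = 24 > 10, so truth-telling is not a PNE
```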
GSP Nash-implements social value maximization However, despite the fact that GSP is not truthful, it is actually the mechanism that Google and other sellers of advertising slots use for sponsored search advertisements! Among other factors, this may be because the simplicity of the auction is appealing—bidders can easily understand how much they will have to pay. In addition, as we now show, GSP does Nash-implement social value maximization:
Theorem 10.4. The GSP mechanism Nash-implements social value maximization in any market-for-bundles context.
Proof. It suffices to show that there is some Nash equilibrium of the induced game that maximizes social value. Let us start by considering the case where player $j$ has the $j$th-ranked valuation for the items being sold (i.e., the players are ordered according to their valuations).
By Theorem 8.2, there exists a market equilibrium in the matching market corresponding to this social choice context. By Theorem 8.1, this equilibrium maximizes social value, and so we can assume, without loss of generality, that in this equilibrium player $j$ is assigned bundle $j$ (since we ordered players by decreasing valuation and bundles by decreasing size; if a tie causes players to be assigned bundles out of order, we can reorder them accordingly while still maintaining the equilibrium property). Additionally, by the construction in Theorem 8.1, there exists some market equilibrium where the price of the cheapest bundle is 0.
Now, let $q_j$ be the price per item in bundle $j$ in this market equilibrium; by Theorem 8.3, we have that $q_1 \geq q_2 \geq \cdots \geq q_n$. Consider the bidding strategy $t'$ where $t'_j = q_{j-1}$ if $j > 1$ and $t'_1 = t_1$. Since $t'_j \geq t'_{j+1}$ for any $j$, and because we have assumed ties to be broken lexicographically, the GSP mechanism will assign player $j$ to bundle $j$, and so $t'$ maximizes social value by the above. It remains to show that $t'$ is a PNE.
Notice that, by construction of $t'$, each player $j$ will receive the same bundle as in the market equilibrium (i.e., bundle $j$) and at the market-clearing per-item price $q_j$, since the GSP mechanism charges player $j$ the $(j+1)$st-ranked bid $t'_{j+1} = q_j$ per item. (Note that player $n$ receives its bundle at price 0, which, by construction of the market equilibrium, is equal to $q_n$.)
Let us now argue that player $j$ has no incentive to deviate. Consider some deviation by player $j$. If the deviation leads to the same assignment for $j$, their utility remains the same, so the deviation is not profitable. Let us thus consider some deviation where $j$ gets assigned a different bundle $k \neq j$. We distinguish two cases:
Thus, in both cases, $j$ gets assigned bundle $k$ and has to pay at least $q_k$ per item for it. By the market-clearing property of the equilibrium, player $j$ cannot prefer getting bundle $k$ at per-item price $q_k$ to getting its current bundle $j$ at per-item price $q_j$, so in neither case can the deviation be profitable for $j$. This concludes the proof that $t'$ is a PNE.
So far, however, we have only considered the case when the players are ordered according to their valuations. To deal with the general case, it suffices to note that the above argument applies unchanged if we replace “player $j$” with “the $j$th-ranked player” (where the ranking is defined in terms of players’ valuations).
■
10.6 Applications to Sponsored Search
An important real-life example of a matching market auction is the market for sponsored search. Here, a search engine (e.g., Google) sells advertising “slots” for search terms. For instance, when a user searches for “learn programming,” Google has four advertisements that appear before the actual search results; see Figure 10.3. To determine which ads to show, Google uses an auction for bundles of identical goods. The items being sold are “clicks” (i.e., instances of a user clicking on the ad), and the “size” of each of the bundles (slots) is the number of clicks an advertiser would expect to get from it—the clickthrough rate. (For now, we are assuming—incorrectly—that clickthrough rates are the same no matter who the advertiser is; this will be discussed shortly.) Each advertiser $i$’s type, $t_i$, is its value for a click.
In such a scenario, we can use either the VCG mechanism or the GSP mechanism to sell these advertising slots, and charge the buyers accordingly. Our previous analysis applies directly if we charge the advertiser for each search taking place—that is, we charge per impression of the ad—since the (expected) value for advertiser $i$ of a slot with clickthrough rate $m_j$ is $m_j \cdot t_i$, just as in a market-for-bundles context.
Charging per click Search engines typically no longer charge per impression of an ad, but instead charge by the click. The auctions can easily be modified to capture this setting by instead charging a player placed on slot $j$ a price of $p_j / m_j$ per click; in the case of GSP, this simplifies the auction even further, as we can now simply charge the $j$th-ranked player the $(j+1)$st-ranked bid per click!
Dealing with ad quality So far in this discussion, we have assumed that the clickthrough rate per slot is independent of the advertiser (or ad) being placed there. This is clearly unrealistic—a relevant ad, for instance, is far more likely to get clicks than an irrelevant “spam” advertisement. To model this, we assume that each advertiser $i$ has a “quality” or discount parameter $\beta_i \in [0, 1]$—an “optimal advertiser” has $\beta_i = 1$, but a spammer may have a significantly lower value of $\beta_i$—and an advertisement by player $i$ placed on slot $j$ is assumed to get $\beta_i \cdot m_j$ clicks. We can again model this as a market for bundles; each bundle of $m_j$ items is now a bundle of $m_j$ “fictitious clicks,” where each fictitious click leads to (in expectation) $\beta_i$ actual clicks for advertiser $i$. In other words, the (expected) value for player $i$ of a fictitious click is $\beta_i \cdot t_i$ (assuming that $\beta_i$ is correct).
The quality $\beta_i$ of an advertiser is estimated by the search engine, and we thus assume that the mechanism is aware of it. If we assume the search engine’s estimates are actually correct, the following mechanism can be used in this setting: run the market-for-bundles auction (VCG or GSP) treating each player $i$’s value per fictitious click as $\beta_i$ times their reported value per actual click, compute the resulting assignment of slots and per-impression prices $p_j$, and output this assignment, charging a player $i$ assigned to slot $j$ a price of $p_j / (m_j \beta_i)$ per actual click. (Recall that $p_j$ is the price for the impression; thus, $p_j / m_j$ is the price per fictitious click, and $p_j / (m_j \beta_i)$ becomes the price per actual click assuming the quality estimate is correct.)
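To make this concrete, here is a sketch of such a quality-weighted GSP under the assumptions just stated (quality estimates are correct and known to the mechanism); the function and the numbers are our own illustration, not Google's actual implementation:

```python
def quality_weighted_gsp(ctrs, bids, qualities):
    """Quality-weighted GSP sketch: rank advertisers by quality-weighted bid (their value
    per 'fictitious click'), assign slots in that order, and charge per *actual* click.

    ctrs: slot clickthrough rates m_1 >= m_2 >= ... (sizes of the bundles of fictitious clicks).
    bids: bid per actual click; qualities: estimated quality beta_i of each advertiser."""
    weighted = [b * q for b, q in zip(bids, qualities)]
    order = sorted(range(len(bids)), key=lambda i: -weighted[i])
    result = {}
    for rank, adv in enumerate(order):
        if rank >= len(ctrs):
            break  # no slot left for lower-ranked advertisers
        # GSP price per fictitious click = next-ranked weighted bid; dividing by this
        # advertiser's own quality converts it into a price per actual click.
        next_weighted = weighted[order[rank + 1]] if rank + 1 < len(order) else 0
        result[adv] = {"slot": rank, "price_per_click": next_weighted / qualities[adv]}
    return result

# Illustrative numbers: advertiser 1 bids highest per click but has low quality.
print(quality_weighted_gsp(ctrs=[10, 4], bids=[5, 8, 4], qualities=[1.0, 0.5, 0.9]))
```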
Notes
The development of mechanism design theory began with the work of Hurwicz [Hur60]. The VCG mechanism was introduced and analyzed in the works of Vickrey [Vic61], Clarke [Cla71], and Groves [Gro73]. The generalized second-price auction was introduced and analyzed in [EOS07].
Let us also mention that the literature on mechanism design often also considers a setting, referred to as Bayesian, where players’ types come from a probability distribution (typically, we assume that player types are independent). In such a setting, we may then consider a weaker notion of a “Bayes–Nash truthful” mechanism, where every player’s expected utility (where the expectation is over the distribution of the other players’ types) is maximized by truthfully reporting their type; this is formalized by defining a generalized version of a normal-form game, called a Bayesian game, where players first receive a type sampled from the type distribution and then need to choose their action (without knowing the actual types that were sampled for the other players). The notion of a Bayes–Nash truthful implementation is no longer equivalent to DST; optimal Bayes–Nash truthful auctions were studied in the work of Myerson [Mye81]. Leonid Hurwicz, Eric S. Maskin, and Roger B. Myerson received the Nobel Prize in Economic Sciences (for laying the foundation of mechanism design theory) in 2007.
As mentioned above, the VCG mechanism is computationally efficient as long as we have a computationally efficient method for finding socially optimal outcomes. For many optimization problems, however, it is computationally hard to find the socially optimal state (even if people truthfully report their preferences); consequently, the computer science literature has focused on studying so-called approximation algorithms, where we content ourselves with finding a “close-to-optimal” outcome. Can we simply plug such an approximation algorithm into the VCG mechanism? It turns out that the resulting mechanism is, in general, no longer DST. In fact (assuming the existence of “secure encryption schemes”), no computationally efficient “general-purpose” mechanism (like VCG)—which simply uses some underlying approximation algorithm as a “black box”—can be DST [PS14, CIL12]. Consequently, the computer science literature has focused on directly designing DST mechanisms that guarantee “close-to-optimal” outcomes: this study—referred to as algorithmic mechanism design—was initiated by Nisan and Ronen [NR99]. See [NRTV07] for a detailed study of this area.
1It may seem redundant for players to be associated with both a type as well as an individualized valuation function (which depends on the player identity $i$). Indeed, formally, it suffices to consider a single valuation function $v$ and assume that the type contains information about who the player is (e.g., by letting the type include the player’s identity $i$), but this makes notation more cumbersome and makes the notion of a type less intuitive.
2Let us point out that, while this way of defining utility is seemingly the most natural and simple way to capture the preferences of a player, there are some situations where the use of quasi-linear utilities is not appropriate—for instance, the fact that utility is linear in the payment does not capture the phenomenon that people treat a price gap of $10 very differently depending on whether they are buying, say, laundry detergent or a car. Nevertheless, in situations where the payments are “small” compared to the wealth of the players, this “quasi-linear” model seems reasonable.
3Note that here we are making use of the fact that $v_i$ is parametrized by the player identity $i$ to determine whether $i$ is actually receiving the object.