Research Vignette: Simplicity, Optimality and Efficiency Tradeoffs in Economics

by Constantinos Daskalakis

In the beauty pageant of algorithms, contestants are judged based on their efficiency in the use of computational resources. Equally important is the quality of the solutions they compute, so algorithm design is typically thought to target an optimality-versus-efficiency tradeoff.

A feature of algorithms that is hard to quantify and trade against efficiency and optimality is their simplicity. Yet this feature is quite important in several practical situations. A domain where simplicity crops up as an important consideration, as we will explain next, is mechanism design. This subfield of economics and game theory takes an engineering approach to the design of incentive structures, called mechanisms. The goal of a mechanism is to motivate a group of strategic agents, each interested in optimizing his own utility, to choose actions that ultimately effectuate the designer’s objectives. For example, when designing a road network, a planner may want to minimize traffic by providing the right incentives (via tolls, traffic lights, and other traffic regulations) for individual drivers (who care only about minimizing their own driving times) to choose routes that benefit the overall network traffic. The practical applications of mechanism design are abundant, ranging from economics to politics, and include the design of markets, auctions and online advertising platforms, voting rules, and road and information networks.

Everything else being equal, it is always desirable to adopt simple mechanisms. Complex mechanisms may lead to frustration for participants, who, unable to understand how to optimize their actions, may behave erratically and unpredictably, ultimately destroying system performance. At the same time, mechanism design often depends on data from market analysis, the process of gathering information about the strategic preferences of the agents that will participate in the designed mechanism. For instance, when designing auctions, it is useful to have information about how much the bidders value the items that are being sold. Whatever information we do have, we would not want to run the risk of overfitting it, so opting for simple mechanisms is often a good idea for achieving good generalization.

Ultimately, mechanism design targets a tradeoff among simplicity, optimality and computational efficiency. The first workshop of the Simons Institute’s Fall 2015 program on Economics and Computation was devoted to the study of this tradeoff in the context of mechanism design, and beyond. There was a series of interesting talks, by computer scientists and economists, capturing different angles on the tradeoff. In the course of the semester, several groups of program participants worked in this space, including Cai, Devanur and Weinberg, who provided a principled approach based on duality theory for designing simple multi-item auctions that extract a good fraction of the optimal revenue [1]. This is an important task, as revenue-optimal mechanisms are known to be extremely complex [2,3].

In this Research Vignette, I outline one of my own projects in this space, which I find represents an exciting direction for future investigation. It is joint work with Vasilis Syrgkanis, who was a speaker in the aforementioned workshop, and it pertains to the well-studied problem of combinatorial auctions. This problem involves a seller with $m$ indivisible items, which she wishes to sell to a group of $n$ buyers. Every buyer $i$ is characterized by a valuation function $v_i$ mapping each bundle $S$ of items to the buyer’s value $v_i(S)$ for this bundle. The seller’s goal is to find a partition $S_1, \ldots, S_n$ of the items together with prices $p_1, \ldots, p_n$ so as to maximize the total welfare resulting from allocating bundle $S_i$ to each buyer $i$ and charging him $p_i$. This allocation would induce total buyer utility $\sum_i (v_i(S_i) - p_i)$ and seller revenue $\sum_i p_i$, so the total welfare would simply be $\sum_i v_i(S_i)$.

If we knew all buyers’ valuations, our problem would be a purely algorithmic one. For example, if all the $v_i$’s were additive functions, a welfare-optimizing allocation could be computed easily, by allocating each item to the bidder who values it the most. If the $v_i$’s are submodular functions, the problem is more challenging, but it is known that a $(1-1/e)$-approximation of the optimum welfare can be achieved in time polynomial in $n$ and $m$, even if we do not know the $v_i$’s but can make polynomially many (in $n$ and $m$) value queries to these functions [4].
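
The additive case is simple enough to spell out in code. Here is a minimal Python sketch of the item-by-item rule described above; the function name, variable names and toy numbers are mine, for illustration only.

```python
def optimal_additive_allocation(values):
    """values[i][j] = bidder i's value for item j (additive valuations).

    Returns an allocation mapping each item to a bidder who values it most,
    together with the total welfare this allocation achieves.
    """
    n_bidders, n_items = len(values), len(values[0])
    allocation = {}   # item -> winning bidder
    welfare = 0.0
    for j in range(n_items):
        winner = max(range(n_bidders), key=lambda i: values[i][j])
        allocation[j] = winner
        welfare += values[winner][j]
    return allocation, welfare

# Toy example: two bidders, three items.
values = [[5.0, 1.0, 3.0],
          [2.0, 4.0, 3.0]]
print(optimal_additive_allocation(values))   # ({0: 0, 1: 1, 2: 0}, 12.0)
```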

When we do not know the buyer valuations and cannot query them directly, we need to interact with the buyers in some way to find a good allocation. However, the buyers are strategic, aiming to optimize their own utility. In particular, they will misstate their preferences if doing so increases their utility. So we need to design our allocation and price computation algorithms carefully, so that a good allocation is found despite the agents’ strategizing against our algorithms. How much of the optimal welfare can we hope to guarantee?

A remarkable result from Economics is that welfare can be exactly optimized, as long as we have unbounded computational and communication resources, via the celebrated Vickrey-Clarke-Groves (VCG) mechanism. However, this mechanism is overly demanding in terms of both computation and communication. It requires the agents to report their valuation functions to the mechanism, which is too expensive for most interesting types of valuations. It also requires the computational capability to optimize welfare exactly with respect to the reported valuations. Unfortunately, if we are only able to do the latter approximately, as is commonly the case for interesting types of valuations, the incentive properties of the VCG mechanism disappear, and no welfare guarantees can be made.
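
For concreteness, in the notation of the combinatorial auction above, the standard VCG rule selects an allocation maximizing the reported welfare and charges each bidder the externality he imposes on the others, where $\hat v_i$ denotes bidder $i$’s reported valuation:

$$(S_1^{*},\ldots,S_n^{*}) \in \arg\max_{(S_1,\ldots,S_n)} \sum_{i} \hat v_i(S_i), \qquad p_i = \max_{(S_1,\ldots,S_n)} \sum_{j \neq i} \hat v_j(S_j) \;-\; \sum_{j \neq i} \hat v_j(S_j^{*}).$$

Both the allocation and every payment call for an exact welfare maximization over partitions, which is precisely the step that is intractable for most interesting valuation classes and whose approximation destroys the mechanism’s incentive properties.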

A problem that has bedeviled computer scientists since the birth of Algorithmic Game Theory has thus been whether there is an alternative to VCG that works even when our computational and communication resources are polynomially bounded. While there are special cases where the answer is yes, in general the answer is a disappointing no. As it turns out, the combination of strategic, computational and communication constraints can take a big toll on our ability to optimize welfare [5,6,7].

Yet many practical scenarios involve the allocation of items through simple mechanisms that are often not even centrally designed. Take eBay, for example, where several different items are sold simultaneously and sequentially via ascending price auctions; or sponsored search where several keywords are auctioned simultaneously and sequentially using generalized second price auctions. What welfare should we expect from such simple decentralized mechanisms?

To fix ideas, for the remainder of this essay let us study a simple auction format, called the Simultaneous Second Price Auction (SiSPA). This auction asks every bidder to submit one bid for each item, and it allocates each item to whoever bids the most on it, charging him the second highest bid. This auction requires little communication, is trivial for the auctioneer to run, and can be described very easily. So from this point of view it is rather simple. At the same time, its simplicity may be a source of frustration for the bidders! A bidder who has a complex valuation over bundles of items is not able to communicate his complex preferences to the auction, as he may only bid on each item separately. At the same time, without knowledge of the other bidders’ valuations, it is unclear to him what the maximum bid on each item will be, so how much should he bid on the items? If he bids too aggressively, he risks overpaying for the bundle he ends up getting; if he is too cautious, he may get very few items. Without information about each other’s valuations, it is impossible for the bidders to form beliefs about each other’s bids in order to decide how to bid. A fortiori, it is impossible for the auctioneer to predict how much welfare this auction will achieve.
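
As a quick illustration of the mechanics (not of how one should bid), here is a minimal Python sketch of a single SiSPA execution; the function name, variable names and toy numbers are mine, for illustration only.

```python
def sispa_outcome(bids):
    """bids[i][j] = bidder i's bid on item j.

    For each item, returns the winning bidder and the second-price payment.
    """
    n_bidders, n_items = len(bids), len(bids[0])
    outcome = {}   # item -> (winner, payment)
    for j in range(n_items):
        ranked = sorted(range(n_bidders), key=lambda i: bids[i][j], reverse=True)
        winner = ranked[0]
        payment = bids[ranked[1]][j] if n_bidders > 1 else 0.0
        outcome[j] = (winner, payment)
    return outcome

# Toy example: two bidders bidding on two items.
bids = [[3.0, 1.0],
        [2.0, 5.0]]
print(sispa_outcome(bids))   # {0: (0, 2.0), 1: (1, 1.0)}
```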

One way out of the conundrum is to consider settings where a SiSPA is executed repeatedly with a fresh copy of each item in every iteration. Indeed, repeated participation in the same auction with the same competitors may help the bidders learn good strategies, even if they have zero information about each other in the beginning. Moreover, simple auctions that are executed repeatedly are quite common in practice as the examples of eBay and sponsored search illustrate.

So what can be said about the welfare achieved by repeated executions of SiSPAs when the bidders engage in learning?

The answer depends, of course, on the type of learning dynamics that the bidders use to update their bids over the auction’s repetitions. A natural and widely studied type of dynamics, explained shortly, is the family of so-called no-regret learning dynamics, which includes, among other prevalent examples, the multiplicative-weights update method. Assuming bidders use no-regret learning, a recent line of work has established that with enough repetitions of a SiSPA, the average welfare becomes at least a quarter of the optimum, even when the bidders’ valuations are subadditive [8]. Moreover, this holds for a host of other simple repeated auctions [9]. These results are remarkable given that subadditive valuations are quite permissive. In particular, the combination of simple repeated mechanisms with simple learning dynamics seems to provide a way out of the known intractability barriers, which suggest that no good tradeoffs between optimality and computational efficiency are possible, even when bidder valuations are submodular, i.e. much simpler than subadditive [5,6,7].
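
To make the notion of no-regret learning concrete, here is a minimal sketch of the multiplicative-weights (Hedge) update for a single bidder who, purely for illustration, mixes over a small explicit menu of candidate bid vectors; in an actual SiSPA the natural action space is exponentially large, which is precisely where the difficulty discussed next arises. All names and toy numbers below are assumptions of the sketch, not taken from the original text.

```python
import math
import random

def hedge(utility_per_round, eta):
    """Multiplicative-weights (Hedge) updates over a finite menu of candidate bid vectors.

    utility_per_round[t][k] -- utility (scaled to [0, 1]) that candidate bid vector k
                               would have earned in round t against the other bids.
    Returns the mixed strategy (probabilities over candidates) used in each round.
    """
    n_actions = len(utility_per_round[0])
    weights = [1.0] * n_actions
    strategies = []
    for round_utils in utility_per_round:
        total = sum(weights)
        strategies.append([w / total for w in weights])
        # Exponentially reward candidates that would have done well this round.
        weights = [w * math.exp(eta * u) for w, u in zip(weights, round_utils)]
    return strategies

# Toy example: 3 candidate bid vectors over T = 100 rounds; candidate 2 is best on average.
random.seed(0)
T = 100
utils = [[random.uniform(0.0, 0.4), random.uniform(0.0, 0.6), random.uniform(0.4, 1.0)]
         for _ in range(T)]
strategies = hedge(utils, eta=math.sqrt(math.log(3) / T))
print([round(p, 2) for p in strategies[-1]])   # nearly all weight ends up on candidate 2
```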

My work with Vasilis shows that, unfortunately, there is a catch [10]. Improving on earlier work by Cai and Papadimitriou [11], we show that no-regret learning may actually be computationally infeasible for the bidders to implement, even when their valuations are very simple functions. Intuitively, the reason is that, even though a repeated SiSPA is “simple,” the bidder still faces a hard combinatorial problem at every step of the learning dynamics: he must decide, based on the empirical distribution of his opponents’ bids, on a bid for each of the items. Our result shows that, in a sense, running a simple auction pushes the computational complexity from the auctioneer to the bidders.
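
Schematically, and with notation introduced here only for illustration, the per-step problem such a learner must (at least implicitly) solve has the form

$$\max_{b \in \mathbb{R}_{\geq 0}^{m}} \ \mathbb{E}_{\beta \sim \hat D}\left[\, v\big(\{\, j : b_j > \beta_j \,\}\big) \;-\; \sum_{j :\, b_j > \beta_j} \beta_j \,\right],$$

where $v$ is the bidder’s valuation and $\hat D$ is the empirical distribution of the highest competing bid $\beta_j$ on each item $j$. Even for simple valuations, this is a stochastic combinatorial optimization over bid vectors, and it is this kind of optimal-bidding problem that the hardness results concern.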

While this is bad news, we complement our hardness result with a positive result, obtained by proposing a variant of no-regret learning, called no-envy learning. To illustrate the difference between the two concepts: no-regret learning requires the bidder’s average utility, after participating in $T$ executions of a SiSPA, to be within an additive error $\epsilon(T)$ (which goes to zero with $T$) of the optimal average utility he would have achieved had he used the optimal-in-hindsight fixed bid vector across all executions. No-envy learning is inspired instead by Walrasian equilibrium. Having participated in $T$ iterations of the mechanism, suppose that the bidder looks back in time and computes the average price of each bundle of items (as determined by the bids placed by the other bidders in each iteration). Viewing himself as a price-taker, the bidder would want to achieve utility at least as large as what he would have obtained by purchasing the best such bundle at its average price. Our no-envy learning requirement is exactly that: after $T$ iterations of the mechanism, the bidder’s average utility is within $\epsilon(T)$ of the utility he would have achieved had he won the optimal-in-hindsight fixed bundle of items across all executions.
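
Putting the two benchmarks side by side, and again with notation introduced only for illustration, the two requirements can be written as

$$\text{no-regret:}\quad \frac{1}{T}\sum_{t=1}^{T} u\big(b^{t}, b_{-}^{t}\big) \;\geq\; \max_{b}\ \frac{1}{T}\sum_{t=1}^{T} u\big(b, b_{-}^{t}\big) \;-\; \epsilon(T),$$

$$\text{no-envy:}\quad \frac{1}{T}\sum_{t=1}^{T} u\big(b^{t}, b_{-}^{t}\big) \;\geq\; \max_{S}\ \Big( v(S) \;-\; \frac{1}{T}\sum_{t=1}^{T} p^{t}(S) \Big) \;-\; \epsilon(T),$$

where $b^{t}$ and $b_{-}^{t}$ are the bidder’s and his opponents’ bids at iteration $t$, $u$ is the bidder’s utility in a single execution, and $p^{t}(S)$ is the price of bundle $S$ at iteration $t$ as determined by the opponents’ bids (for a SiSPA, the sum over the items in $S$ of the highest competing bid).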

We show that no-envy learning is a relaxation of no-regret learning, allowing for a broader set of outcomes. While it is broader, we also show that when bidder valuations are fractionally subadditive (a family of valuations that lies between submodular and subadditive), it still guarantees at least half of the optimal welfare. Importantly, no-envy learning is efficiently implementable. In particular, our notion of no-envy learning broadens the set of outcomes reachable through no-regret learning, but maintains their approximate optimality and endows them with computational tractability. In other words, simple auctions and no-envy learning offer a point on the tradeoff of simplicity, optimality and computational efficiency that breaks well-established barriers in combinatorial auctions!

In conclusion, targeting the tradeoff among simplicity, optimality, and computational efficiency is an important endeavor in mechanism design. It will also be a tricky one, as intractability results lurk left and right. In this landscape, the combination of simple repeated auctions and no-envy learning provides a new way forward, and I am excited about the developments in mechanism design that this combination may enable.

References:

  1. Y. Cai, N. Devanur and M. Weinberg. “A Duality Based Unified Approach to Bayesian Mechanism Design.” STOC 2016.
  2. C. Daskalakis, A. Deckelbaum and C. Tzamos. “Strong Duality for a Multiple-Good Monopolist.” Econometrica, to appear.
  3. C. Daskalakis. “Multi-Item Auctions Defying Intuition?” Newsletter of ACM SIGECOM, July 2015.
  4. J. Vondrak. “Symmetry and Approximability of Submodular Maximization Problems.” FOCS 2009.
  5. S. Dobzinski and J. Vondrak. “From Query Complexity to Computational Complexity.” STOC 2012.
  6. S. Dobzinski. “An Impossibility Result for Truthful Combinatorial Auctions with Submodular Valuations.” STOC 2011.
  7. S. Dughmi and J. Vondrak. “Limitations of Randomized Mechanisms for Combinatorial Auctions.” FOCS 2011.
  8. M. Feldman, H. Fu, N. Gravin, and B. Lucier. “Simultaneous Auctions are (Almost) Efficient.” STOC 2013.
  9. V. Syrgkanis and E. Tardos. “Composable and Efficient Mechanisms.” STOC 2013.
  10. C. Daskalakis and V. Syrgkanis. “Learning in Auctions: Regret is Hard, Envy is Easy.” arXiv preprint, 2015.
  11. Y. Cai and C. Papadimitriou. “Simultaneous Bayesian Auctions and Computational Complexity.” EC 2014.
