0:00

Hi folks, it's Matt again.

And we're going to talk now about dominant strategy implementation.

So the idea that we're looking at is we have a society, and we have to make a decision.

And we're trying to design a mechanism that is going to take people's preferences and give us outcomes.

And in particular, let's start by looking at a situation where we've got a full set of alternatives, and any possible ranking over those things.

So for instance we have a set of candidates that we need to pick from, say four, five, six candidates, okay.

So we think about the voting rules that we've looked at earlier.

Now we've got to make a choice over these candidates.

And in particular, what we'd like is that when we ask people, they have no difficulty in deciding what they should announce.

They should just be truthful.

Okay, so we want dominant strategy implementation.

We want a mechanism under which they want to tell us truthfully exactly what their preference rankings are over the alternatives, and they don't gain at all by trying to mix up the alternatives and manipulate the system.

So that's the idea.

Â 1:04

And in particular, let's think of our society with N individuals, a finite set of outcomes, O, and begin to think about designing mechanisms in this world.

And we want one more thing: every agent has a dominant strategy for each preference.

We can invoke the revelation principle.

And the revelation principle tells us that if we do have an indirect mechanism that has dominant strategies, we can just collapse that into a social choice function, a direct mechanism where people just tell us directly their preferences, and then we give them the outcome that they would have gotten through the original mechanism for each announcement of their preferences, if they had followed the dominant strategies they had in the original one.

So this makes truth a dominant strategy, so the revelation principle means that we can, without loss of generality for this exercise, look at social choice functions directly, okay.
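The collapsing step the revelation principle describes can be written out directly. Here's a minimal sketch in Python, with an entirely hypothetical toy mechanism and made-up names, just to show the construction: given an indirect mechanism (strategy profiles to outcomes) and each agent's dominant strategy as a function of their ranking, the direct mechanism asks for rankings and plays the dominant strategies on the agents' behalf.

```python
def make_direct(indirect, dom_strat):
    """Collapse an indirect mechanism with known dominant strategies
    into a direct mechanism (a social choice function over rankings)."""
    def c(profile):
        # Play each agent's dominant strategy for their announced ranking.
        played = tuple(dom_strat[i][ranking] for i, ranking in enumerate(profile))
        return indirect[played]
    return c

# Toy indirect mechanism (hypothetical): two agents each press 'L' or 'R';
# outcome 'a' results only if both press 'L', otherwise 'b'.
indirect = {('L', 'L'): 'a', ('L', 'R'): 'b', ('R', 'L'): 'b', ('R', 'R'): 'b'}

# Suppose pressing 'L' is dominant when you rank a above b, 'R' otherwise.
dom_strat = [{('a', 'b'): 'L', ('b', 'a'): 'R'} for _ in range(2)]

c = make_direct(indirect, dom_strat)
print(c((('a', 'b'), ('a', 'b'))))  # 'a'
print(c((('a', 'b'), ('b', 'a'))))  # 'b'
```

The direct mechanism `c` then maps announced preference profiles straight to outcomes, which is exactly the object the rest of the lecture studies.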

And so now what we want to do is think about which social choice functions a society can have which are going to be dominant strategy truthful and make sense.

Okay, well, these things are also known as non-manipulable, or strategy-proof; sometimes they're called straightforward mechanisms or social choice functions.

And the important result in this area, due to Allan Gibbard and Mark Satterthwaite in the early 1970s, is going to say we're going to have a really hard time doing this in a setting where people can have any possible ranking over the alternatives.

So let's have a look at that.

This is what's known as the Gibbard-Satterthwaite Theorem.

And it's another form of an impossibility theorem, similar to what we saw in terms of Arrow's theorem.

So situations where we have a set of conditions we'd like to have, and the theorem says it's impossible to have this desirable set of conditions all at once.

So what's the setting here?

We've got a social choice function; we'd like to have one that's mapping all possible preferences.

So people have linear orders: they can have any strict ranking of our candidates.

And we're going to look at situations with at least three outcomes, so we have at least three candidates to choose from.

And we're going to also look at a social choice function which is onto.

That means for every possible outcome, there is a profile of preferences which gives you that outcome.

And that condition can be satisfied quite easily.

For instance, if you just required that your social choice function be unanimous: if all individuals prefer the same alternative, we all say we love candidate a, then the function should pick candidate a.

If you put that minimal condition in, then indeed, on this domain of preferences, c is going to be onto.

So if we put those conditions in, then what does the theorem say?

The theorem says that truthful reporting of preferences is a dominant strategy for every agent at every preference profile if and only if c is dictatorial, okay.

So again, that means there is some particular individual i for whom the choice function always picks their favorite alternative, the thing that maximizes their preferences, regardless of what anybody else says.

So we just pick one individual.

We just listen to that person, okay.

Now in terms of the proof of this, it's clear that if we assign somebody to be a dictator and don't listen to anybody else, that's going to be strategy-proof.

Right, nobody else can make any difference, it doesn't matter what they say, and the person who is the dictator always wants to be truthful because they're getting their favorite alternative.
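The easy direction of the theorem can even be checked mechanically. Here's a small sketch (my own illustration, not from the lecture) that exhaustively verifies, for two agents and three alternatives, that a dictatorial rule is strategy-proof: no agent at any profile gains from any misreport.

```python
from itertools import permutations, product

alts = ('a', 'b', 'c')
rankings = list(permutations(alts))  # all 6 strict rankings

def dictator(profile):
    # Always pick agent 0's top-ranked alternative, ignoring everyone else.
    return profile[0][0]

def is_strategy_proof(scf, n=2):
    """Exhaustive check: no agent, at any profile, gains by misreporting."""
    for profile in product(rankings, repeat=n):
        honest = scf(profile)
        for i in range(n):
            truth = profile[i]
            for lie in rankings:
                out = scf(profile[:i] + (lie,) + profile[i + 1:])
                # A smaller index in the truthful ranking means a better outcome.
                if truth.index(out) < truth.index(honest):
                    return False
    return True

print(is_strategy_proof(dictator))  # True
```

The dictator already gets their top choice, so no lie can help them, and nobody else's report matters, which is exactly the argument in the lecture.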

The converse of this theorem is much more difficult: the part saying that if it's strategy-proof then it has to be dictatorial.

And this can be proven by various means; there are proofs that relate this back to Arrow's theorem and show that conclusions similar to the basic dilemmas proved there can be obtained.

There's a very elegant proof by Salvador Barberà

Â 5:27

which works off of individuals being pivotal, in terms of changing their preferences and changing the outcome, and showing that you can't have too many individuals being pivotal at once if you are going to satisfy dominant strategies.

So there is a basic conflict between allowing multiple people to make decisions at the same time and making sure that nobody can manipulate things by announcing things falsely.

So we're not going to explicitly offer that proof, but you can find various versions of it in the literature.

And we'll leave you to look at that directly.

What this means is that any social choice function that we write down that we're interested in, assuming we don't want dictatorial functions, is going to be manipulable in some situations.

So in a voting setting, where we have a set of candidates, we have to pick among those candidates, and any possible ordering of those candidates is possible in people's preferences, then regardless of what rule we use, we're going to end up having people manipulate that rule in certain situations.
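You can see this concretely by brute force. The following sketch (my own illustration; the rule and names are chosen just for the example) takes the Borda count with alphabetical tie-breaking, a non-dictatorial, onto rule, and searches every two-voter, three-candidate profile for a profitable misreport. As the theorem predicts, one exists.

```python
from itertools import permutations, product

alts = ('a', 'b', 'c')
rankings = list(permutations(alts))  # all 6 strict rankings

def borda(profile):
    """Borda count, ties broken alphabetically."""
    score = {x: 0 for x in alts}
    for r in profile:
        for points, x in zip((2, 1, 0), r):
            score[x] += points
    best = max(score.values())
    return min(x for x in alts if score[x] == best)

def find_manipulation(scf, n=2):
    """Search every profile and every misreport for a profitable lie."""
    for profile in product(rankings, repeat=n):
        honest = scf(profile)
        for i in range(n):
            truth = profile[i]
            for lie in rankings:
                out = scf(profile[:i] + (lie,) + profile[i + 1:])
                if truth.index(out) < truth.index(honest):
                    return profile, i, lie, honest, out
    return None

print(find_manipulation(borda) is not None)  # True: Borda is manipulable
```

For instance, if voter 1 truthfully ranks (a, b, c) and voter 2 ranks (b, c, a), then b wins; but voter 1 can report (a, c, b) instead, forcing a three-way tie that the tie-break resolves in favor of a, which voter 1 prefers.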

So that's a very negative result in some sense, it's a damaging result, and it's different from Arrow's theorem, because what this is doing is really looking at the incentives that people have, and saying it's going to be difficult to get people to be truthful.

When we were looking at Arrow's theorem, nothing was said about whether people are truthful or not.

The kinds of conclusions that were being reached in Arrow's theorem were just saying that some basic independence conditions and Pareto conditions couldn't be satisfied by a social welfare ordering at the same time, presuming that you could see what people's preferences were as the input.

And this is saying that if you're going to make a choice, it's going to be difficult to get people to reveal their preferences to you in a truthful manner.

One more thing about this, and this is something you can verify yourself.

Â 7:53

Then you're going to have some more limited conclusions here: things are going to be dictatorial, but only on the range of the outcome function, if it still has at least three alternatives in it.

So there are some limitations on this.

But in terms of really getting around this kind of negative result, there are various ways we can think of doing this.

One is to use a weaker form of implementation.

So here we were asking for dominant strategies for all individuals at all preferences.

We could work instead with something like Bayes-Nash implementation, Bayesian implementation, where we ask people just to have it be a best response to announce truthfully, given that everyone else is.

And that'll open some doors for us.

That's going to be much more demanding in terms of an equilibrium concept; the beauty of dominant strategies is people don't have to worry about what other people are doing.

And they don't have to know or have beliefs about what's going on.

Bayes-Nash structure, Bayesian Nash structure, is much more demanding in that sense.

Â 8:55

We could relax the assumption that people have arbitrary preferences and look at more structured settings.

And indeed, when we do that, we're going to find much more interesting and positive results in terms of rules that we can apply, more interesting rules I should say.

We've already seen one in fact: voting on a single-peaked domain.

So as we mentioned there, if you're asked for your preferences, you have no reason to try and distort your preferences: the median peak wins, and the only way to change that is to flip over and announce something on the other side, which will be worse for that individual.

You can also take the max of the peaks.

You could, instead of median voting, have maximum voting: so whoever has the maximum peak, or the minimum peak, or any order statistic.

So single-peaked domains are going to be settings where what we've done is we've limited the set of preferences.

So not just any ordering of candidates is allowed any longer.

And once you do that, then it's easier to design strategy-proof rules.
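The strategy-proofness of median voting can be verified by brute force on a small example. The sketch below (my own illustration, assuming symmetric single-peaked preferences where closer to your peak is better) checks every three-voter profile of peaks on a five-point line and confirms that no voter can ever move the median closer to their own peak by misreporting.

```python
import statistics
from itertools import product

positions = range(5)  # possible peaks/outcomes on a line: 0..4

def median_rule(peaks):
    # With an odd number of voters this is just the middle reported peak.
    return statistics.median_low(peaks)

def prefers(peak, x, y):
    # Symmetric single-peaked preferences: closer to your peak is better.
    return abs(x - peak) < abs(y - peak)

def median_is_strategy_proof(n=3):
    for true_peaks in product(positions, repeat=n):
        honest = median_rule(true_peaks)
        for i in range(n):
            for lie in positions:
                misreport = list(true_peaks)
                misreport[i] = lie
                if prefers(true_peaks[i], median_rule(misreport), honest):
                    return False  # found a profitable misreport
    return True

print(median_is_strategy_proof())  # True
```

Intuitively, if your peak is below the median, lowering your report changes nothing, and raising it can only push the median further away, which is exactly the "flip over to the other side" argument from the lecture.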

Â 10:21

One mechanism that works in that situation is just to fix a price, so say you can trade at a price of 10.

So an individual will say: at a price of 10, am I willing to trade or not?

And now there is a simple decision: you can't influence the price, all you can do is influence whether you actually trade or not.

So you have an incentive to say truthfully whether you really want to trade or not.

And then trade occurs if both of the agents want to trade.

So there you can design a dominant strategy mechanism.

There are going to be some inefficiencies associated with that: you're not going to have trade occurring in all the circumstances that you'd like, but at least you've got dominant strategies.
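That fixed-price mechanism is simple enough to write down in a few lines. Here's a minimal sketch (my own illustration, with hypothetical valuations): each side reports only yes or no at the posted price, neither report can move the price, so truthful reporting is a dominant strategy, and the inefficiency shows up when the posted price blocks a mutually beneficial trade.

```python
def fixed_price_trade(price, seller_value, buyer_value):
    """Trade happens iff both sides are willing at the posted price."""
    seller_yes = seller_value <= price  # selling at or above your value is a gain
    buyer_yes = buyer_value >= price    # buying at or below your value is a gain
    return seller_yes and buyer_yes

print(fixed_price_trade(10, seller_value=4, buyer_value=12))  # True: trade occurs
print(fixed_price_trade(10, seller_value=4, buyer_value=7))   # False: no trade,
# even though any price between 4 and 7 would make both sides better off.
# That lost trade is the inefficiency mentioned above.
```

Misreporting can only either forgo a profitable trade or force an unprofitable one, so "answer honestly" dominates for both agents.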

And we'll take a closer look at these things.

We're going to see a whole series of other types of rules.

We'll look at Groves schemes and other kinds of schemes.

And when we narrow the kinds of settings we're looking at, with more structure on preferences, we'll have a whole series of interesting dominant strategy compatible mechanisms.

And taking a look at particular kinds of environments where that's possible will be next up.