Algorithmic Game Theory and Applications: Homework 1

1. Consider the following 2-player strategic game, G:

(3,6) (4,4) (4,0) (4,3)
(3,7) (7,8) (4,6) (7,5)
(5,4) (8,5) (1,2) (4,3)
(6,7) (7,9) (5,4) (1,3)

This is a “bimatrix”, to be read as follows: Player 1 is the row player and Player 2 is the column player. If the content of the bimatrix at row i and column j is the pair (a, b), then u1(i, j) = a and u2(i, j) = b.

(a) (36 points)

Compute all of the Nash equilibria (NEs) of this game G, together with the expected payoff to each player in each NE.

Explain why any profile x that you claim is an NE of G is indeed an NE of G.

Furthermore, explain why there are no other (pure or mixed) NEs of G, other than the profile(s) you claim are NE(s) of G.
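Part (a) can be sanity-checked numerically by support enumeration. The sketch below is our own (numpy-based, not part of the question) and assumes the game is nondegenerate, so that every NE has equal-size supports; the matrices A and B are transcribed from the bimatrix above.

```python
import itertools
import numpy as np

# Payoff matrices transcribed from the bimatrix above:
# A[i][j] is the row player's payoff, B[i][j] the column player's.
A = np.array([[3, 4, 4, 4],
              [3, 7, 4, 7],
              [5, 8, 1, 4],
              [6, 7, 5, 1]], dtype=float)
B = np.array([[6, 4, 0, 3],
              [7, 8, 6, 5],
              [4, 5, 2, 3],
              [7, 9, 4, 3]], dtype=float)

def indifferent_strategy(P, S, T):
    """Mixed strategy y on support T that makes every pure strategy in S
    (a row index of P) yield the same payoff u.  Returns (y, u), or None
    if the indifference system is singular."""
    k = len(S)
    M = np.zeros((k + 1, k + 1))
    rhs = np.zeros(k + 1)
    for a, i in enumerate(S):
        M[a, :k] = P[i, list(T)]
        M[a, k] = -1.0              # payoff of row i minus u equals 0
    M[k, :k] = 1.0                  # probabilities sum to 1
    rhs[k] = 1.0
    try:
        sol = np.linalg.solve(M, rhs)
    except np.linalg.LinAlgError:
        return None
    y = np.zeros(P.shape[1])
    y[list(T)] = sol[:k]
    return y, sol[k]

def support_enumeration(A, B, tol=1e-8):
    """All NEs with equal-size supports (all NEs, if the game is nondegenerate)."""
    m, n = A.shape
    found = []
    for k in range(1, min(m, n) + 1):
        for S in itertools.combinations(range(m), k):
            for T in itertools.combinations(range(n), k):
                ry = indifferent_strategy(A, S, T)      # column player's y
                rx = indifferent_strategy(B.T, T, S)    # row player's x
                if ry is None or rx is None:
                    continue
                y, u = ry
                x, v = rx
                if y.min() < -tol or x.min() < -tol:    # must be probabilities
                    continue
                # neither player may have a profitable pure deviation
                if (A @ y).max() > u + tol or (x @ B).max() > v + tol:
                    continue
                found.append((x, y, u, v))
    return found

for x, y, u, v in support_enumeration(A, B):
    print("x =", np.round(x, 4), " y =", np.round(y, 4),
          " payoffs:", round(u, 4), round(v, 4))
```

Note that a profile may be reported more than once when its support is strictly contained in the enumerated one; for a homework cross-check that is harmless. The written answer must still argue *why* each profile is an NE and why no others exist.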

(b) (14 points) For a game H, let NE(H) denote the set of all (pure or mixed) NEs of the game H. For a mixed strategy x1 ∈ X1 for player 1, define:

Recall that πi,j denotes the j’th pure strategy of player i. Consider the 2-player normal form game G studied in part (a) of this question. Show that there exists a 3-player finite normal form game, G′, with pure strategy sets S1 = S2 = {1, 2, 3, 4} and S3 = {1, 2} for the three players, such that:

NE(G′) = {(x1, x2, π3,g1(x1)) | (x1, x2) ∈ NE(G)}.

2. (a) Consider the 2-player zero-sum game given by the following payoff matrix for player 1 (the row player):

3 3 9 6 2
7 8 4 5 3
1 2 5 6 4
1 4 4 5 9
4 7 7 8 3

Compute both the minimax value for this game, as well as a minimax profile (NE), i.e., a pair of minmaximizer and maxminimizer strategies for players 1 and 2, respectively.

(You can, for example, use the linear programming solver package linprog in MATLAB, available on DICE machines, to do this. To run MATLAB, type “matlab” at the shell command prompt. For guidance on using the linprog package, see: http://uk.mathworks.com/help/optim/ug/linprog.html.)
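If MATLAB is not at hand, the same computation can be set up in Python with scipy.optimize.linprog. The encoding below is one possible sketch (variable names are ours): each player's LP maximizes (resp. minimizes) a bound v on the expected payoff over all probability distributions.

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for player 1 (the row player) from question 2(a).
A = np.array([[3, 3, 9, 6, 2],
              [7, 8, 4, 5, 3],
              [1, 2, 5, 6, 4],
              [1, 4, 4, 5, 9],
              [4, 7, 7, 8, 3]], dtype=float)
m, n = A.shape

# Player 1 (maximizer): variables (x_1..x_m, v); maximize v subject to
# (x^T A)_j >= v for every column j, x a probability distribution.
# linprog minimizes, so the objective is -v.
c1 = np.zeros(m + 1); c1[-1] = -1.0
A_ub1 = np.hstack([-A.T, np.ones((n, 1))])   # v - (x^T A)_j <= 0
A_eq1 = np.zeros((1, m + 1)); A_eq1[0, :m] = 1.0
res1 = linprog(c1, A_ub=A_ub1, b_ub=np.zeros(n),
               A_eq=A_eq1, b_eq=[1.0],
               bounds=[(0, None)] * m + [(None, None)])
x, v = res1.x[:m], res1.x[-1]

# Player 2 (minimizer): variables (y_1..y_n, w); minimize w subject to
# (A y)_i <= w for every row i, y a probability distribution.
c2 = np.zeros(n + 1); c2[-1] = 1.0
A_ub2 = np.hstack([A, -np.ones((m, 1))])     # (A y)_i - w <= 0
A_eq2 = np.zeros((1, n + 1)); A_eq2[0, :n] = 1.0
res2 = linprog(c2, A_ub=A_ub2, b_ub=np.zeros(m),
               A_eq=A_eq2, b_eq=[1.0],
               bounds=[(0, None)] * n + [(None, None)])
y, w = res2.x[:n], res2.x[-1]

print("value (maxmin) =", v, " value (minmax) =", w)
print("x* =", np.round(x, 4), " y* =", np.round(y, 4))
```

By the minimax theorem the two optimal values coincide; seeing them agree is a useful check that the LPs were encoded correctly.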

(b) (30 points) Recall from Lecture 7 on LP duality the symmetric 2-player zero-sum game, G, for which the (skew-symmetric) payoff matrix (in block form) for player 1 is:

B = [  0     A    -b ]
    [ -A^T   0     c ]
    [  b^T  -c^T   0 ]

Suppose that there exist vectors x′ ∈ R^n and y′ ∈ R^m such that Ax′ < b, x′ ≥ 0, A^T y′ > c, and y′ ≥ 0. (Note the two strict inequalities.) Prove that for the game G, every minmaximizer strategy w = (y∗, x∗, z) for player 1 (and hence also every maxminimizer strategy for player 2, since the game is symmetric) has the property that z > 0, i.e., the last pure strategy is played with positive probability. (Recall that this was one of the missing steps in our sketch proof in the lecture that the minimax theorem implies the LP duality theorem.)

(Hint: Let w = (y∗, x∗, z) be a maxminimizer strategy for player 2 in the game G. Note that the value of any symmetric 2-player zero-sum game must be equal to zero. This implies, by the minimax theorem, that Bw ≤ 0. Suppose, for contradiction, that z = 0. What does this imply about Ax∗, A^T y∗, and b^T y∗ − c^T x∗? Then if y∗ ≠ 0, show that this implies (y∗)^T (Ax′ − b) < 0. In turn, show that it also implies (x∗)^T (A^T y′ − c) > 0. Use these and related facts to conclude a contradiction.)
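When writing up the proof it is easy to get the block signs wrong. The snippet below is our own sanity check (not part of the question); it assumes the lecture's block matrix B = [[0, A, −b], [−A^T, 0, c], [b^T, −c^T, 0]] with coordinates ordered (y, x, z), verifies that B is skew-symmetric, and confirms that when z = 0 the three blocks of Bw are exactly the quantities the hint asks about.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4                     # arbitrary small dimensions for the check
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# Block matrix (as we recall it from the lecture), coordinates ordered (y, x, z).
B = np.block([
    [np.zeros((m, m)), A,                -b.reshape(m, 1)],
    [-A.T,             np.zeros((n, n)),  c.reshape(n, 1)],
    [b.reshape(1, m),  -c.reshape(1, n),  np.zeros((1, 1))],
])

assert np.allclose(B, -B.T)     # skew-symmetric, hence the game value is 0

# With w = (y, x, z) and z = 0, the blocks of Bw are exactly
# A x, -A^T y, and b^T y - c^T x -- the quantities in the hint.
y = rng.random(m); x = rng.random(n); z = 0.0
w = np.concatenate([y, x, [z]])
Bw = B @ w
assert np.allclose(Bw[:m], A @ x)
assert np.allclose(Bw[m:m + n], -A.T @ y)
assert np.allclose(Bw[-1], b @ y - c @ x)
print("block identities verified")
```

So Bw ≤ 0 together with z = 0 gives Ax∗ ≤ 0, A^T y∗ ≥ 0, and b^T y∗ ≤ c^T x∗, which is the starting point of the contradiction the hint outlines.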

3. Consider the following simple 2-player zero-sum game, where the payoff table for Player 1 (the row player) is given by:

We can view this as a game where each player chooses “heads” (H) or “tails” (T), where the first strategy for each player is denoted H and the second strategy is denoted T.

(a) (10 points) First, what is the unique Nash equilibrium, or equivalently the unique minimax profile of mixed strategies for the two players, in this game? And what is the minimax value of that game?

(b) (40 points) Now, suppose that the two players play the game from part (a) against each other, over and over again, forever, and suppose that both of them use the following method in order to update their own strategy after each round of the game.

i. At the beginning, in the first round, each player chooses either of the pure strategies, H or T, arbitrarily, and plays that.

ii. After each round, each player i accumulates statistics on how its opponent has played until now, meaning how many Heads and how many Tails have been played by the opponent, over all rounds of the game played thus far. Suppose these numbers are N Heads and M Tails.

Then player i uses these statistics to “guess” its opponent’s “statistical mixed strategy” as follows. It assumes that its opponent will next play a mixed strategy σ, which plays Heads with probability N/(N + M) and plays Tails with probability M/(N + M).

Under the assumption that its opponent is playing the “statistical mixed strategy” σ, in the next round player i plays a pure strategy (H or T) that is a pure best response to σ.

If both H and T are best responses at any round, then player i can use any tie-breaking rule it wishes in order to determine the pure strategy it plays in the next round.

iii. They repeat playing like this forever.

Prove that, regardless of how the two players start playing the game in the first round, the “statistical mixed strategies” of both players in this method of repeatedly playing the game will, in the long run, as the number of rounds goes to infinity, converge to their mixed strategies in the unique Nash equilibrium of the game. You are allowed to show that this holds using any specific tie-breaking rule that you want. Please specify the precise tie-breaking rule you have used. (It turns out that it holds true for any tie-breaking rule. But some tie-breaking rules make the proof easier than others.)
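The process described above is fictitious play, and before attempting the proof it can help to watch it run. The simulation below is our own sketch; since the payoff table appears to be missing from the question text, it ASSUMES standard matching pennies (row player gets +1 on a match, −1 on a mismatch), and breaks ties in favour of H.

```python
# Fictitious play on matching pennies.  ASSUMPTION: the payoff table is
# missing above, so we use the standard matching-pennies payoffs: the row
# player gets +1 on a match (HH or TT) and -1 on a mismatch; zero-sum.
# Tie-breaking rule: on a tie, play H (index 0).
H, T = 0, 1
ROUNDS = 50000

row_pay = [[1, -1], [-1, 1]]                                   # row_pay[row][col]
col_pay = [[-row_pay[r][c] for r in (H, T)] for c in (H, T)]   # col_pay[col][row]

def best_response(pay, opp_counts):
    """Pure best response to the opponent's empirical frequencies.
    Ties go to H because H's expected payoff is compared with >=."""
    n_h, n_t = opp_counts
    e = [pay[s][H] * n_h + pay[s][T] * n_t for s in (H, T)]
    return H if e[H] >= e[T] else T

row_sees = [0, 0]   # counts of the column player's past H / T plays
col_sees = [0, 0]   # counts of the row player's past H / T plays

r, c = H, H         # first round: both (arbitrarily) play H
for _ in range(ROUNDS):
    col_sees[r] += 1
    row_sees[c] += 1
    r = best_response(row_pay, row_sees)
    c = best_response(col_pay, col_sees)

freq_row_H = col_sees[H] / ROUNDS
freq_col_H = row_sees[H] / ROUNDS
print(f"empirical P(H): row {freq_row_H:.3f}, col {freq_col_H:.3f}")
```

Both empirical frequencies drift toward 1/2, the unique NE strategy, which is what the question asks you to prove rigorously (with your tie-breaking rule stated explicitly).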

4. (a)

(40 points) One variant of the Farkas Lemma says the following:

Farkas Lemma: A linear system of inequalities Ax ≤ b has a solution x if and only if there is no vector y satisfying y ≥ 0 and y^T A = 0 (i.e., 0 in every coordinate) and such that y^T b < 0.

Prove this Farkas Lemma with the aid of Fourier-Motzkin elimination. (Hint: One direction of the “if and only if” is easy. For the other direction, use induction on the number of columns of A, using the fact that Fourier-Motzkin elimination “works”. Note basically that each round of Fourier-Motzkin elimination can “eliminate one variable” by pre-multiplying a given system of linear inequalities by a non-negative matrix.)
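To make the "easy" direction concrete, here is a toy example of our own (not part of the question): the system x ≤ −1 and −x ≤ 0 (i.e., x ≥ 0) is clearly infeasible, and y = (1, 1) is a Farkas certificate of that fact.

```python
import numpy as np

# Infeasible system A x <= b encoding:  x <= -1  and  -x <= 0 (i.e. x >= 0).
A = np.array([[1.0], [-1.0]])
b = np.array([-1.0, 0.0])

y = np.array([1.0, 1.0])        # candidate Farkas certificate

# y >= 0, y^T A = 0, and y^T b < 0: taking this nonnegative combination of
# the two inequalities yields the contradiction 0 <= -1, so no x exists.
assert (y >= 0).all()
assert np.allclose(y @ A, 0)
assert y @ b < 0
print("certificate verified: y^T b =", y @ b)
```

The "hard" direction of the proof must show that whenever Ax ≤ b is infeasible, Fourier-Motzkin elimination produces exactly such a y, since each elimination round pre-multiplies the system by a non-negative matrix.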

(b) (10 points) Recall that in the Strong Duality Theorem one possible case (case 4, in the theorem as stated on our lecture slides) is that both the primal LP and its dual LP are infeasible. Give an example of a primal LP and its dual LP, for which both are infeasible (and argue why they are both infeasible).