ECS7002P - Artificial Intelligence in Games - Assignment 2: Reinforcement Learning of Frozen Lake

In this assignment, you will implement a variety of reinforcement learning algorithms to find policies for the frozen lake environment. Please read this entire document before you start working on the assignment.



1 Environment

The code presented in this section uses the NumPy library. If you are not familiar with NumPy, please read the NumPy quickstart tutorial and the NumPy broadcasting tutorial.

The frozen lake environment has two main variants: the small frozen lake (Fig. 1) and the big frozen lake (Fig. 2). In both cases, each tile in a square grid corresponds to a state. There is also an additional absorbing state, which will be introduced soon. There are four types of tiles: start (grey), frozen lake (light blue), hole (dark blue), and goal (white). The agent has four actions, which correspond to moving one tile up, left, down, or right. However, with probability 0.1, the environment ignores the desired direction and the agent slips (moves one tile in a random direction, which may be the desired direction). An action that would cause the agent to move outside the grid leaves the state unchanged.


Figure 1: Small frozen lake

Figure 2: Big frozen lake

The agent receives reward 1 upon taking an action at the goal. In every other case, the agent receives zero reward. Note that the agent does not receive a reward upon moving into the goal (nor a negative reward upon moving into a hole). Upon taking an action at the goal or in a hole, the agent moves into the absorbing state. Every action taken at the absorbing state leads to the absorbing state, which also does not provide rewards. Assume a discount factor of γ = 0.9.

For the purposes of model-free reinforcement learning (or interactive testing), the agent is able to interact with the frozen lake for a number of time steps that is equal to the number of tiles.

Your first task is to implement the frozen lake environment. Using Python, try to mimic the interface presented in Listing 1.

The class EnvironmentModel represents a model of an environment. The constructor of this class receives a number of states, a number of actions, and a seed that controls the pseudorandom number generator. Its subclasses must implement two methods: p and r. The method p returns the probability of transitioning from state to next state given action. The method r returns the expected reward in having transitioned from state to next state given action. The method draw receives a pair of state and action and returns a state drawn according to p together with the corresponding expected reward. Note that states and actions are represented by integers starting at zero. We highly recommend that you follow the same convention, since this will facilitate immensely the implementation of reinforcement learning algorithms. You can use a Python dictionary (or equivalent data structure) to map (from and to) integers to a more convenient representation when necessary. Note that, in general, agents may receive rewards drawn probabilistically by an environment, which is not supported in this simplified implementation.
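Since Listing 1 is not reproduced here, the following is only a minimal sketch of the interface described above; the attribute names n_states, n_actions, and random_state are assumptions.

import numpy as np

class EnvironmentModel:
    def __init__(self, n_states, n_actions, seed=None):
        self.n_states = n_states
        self.n_actions = n_actions
        self.random_state = np.random.RandomState(seed)

    def p(self, next_state, state, action):
        # Probability of transitioning from state to next_state given action.
        raise NotImplementedError()

    def r(self, next_state, state, action):
        # Expected reward for having transitioned from state to next_state given action.
        raise NotImplementedError()

    def draw(self, state, action):
        # Sample a next state according to p and return it with the expected reward.
        p = [self.p(ns, state, action) for ns in range(self.n_states)]
        next_state = self.random_state.choice(self.n_states, p=p)
        reward = self.r(next_state, state, action)
        return next_state, reward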



2 Tabular model-based reinforcement learning

Your next task is to implement policy evaluation, policy improvement, policy iteration, and value iteration. You may follow the interface suggested in Listing 2.

Listing 2: Tabular model-based algorithms.

def policy_evaluation(env, policy, gamma, theta, max_iterations):
    value = np.zeros(env.n_states, dtype=float)

The function policy_evaluation receives an environment model, a deterministic policy, a discount factor, a tolerance parameter, and a maximum number of iterations. A deterministic policy may be represented by an array that contains the action prescribed for each state.
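A minimal sketch of iterative policy evaluation consistent with the signature in Listing 2, assuming the environment model exposes n_states, p, and r as described in Section 1:

import numpy as np

def policy_evaluation(env, policy, gamma, theta, max_iterations):
    # Iterative policy evaluation: sweep over states until the largest update
    # falls below the tolerance theta or max_iterations sweeps are reached.
    value = np.zeros(env.n_states, dtype=float)
    for _ in range(max_iterations):
        delta = 0.0
        for s in range(env.n_states):
            v = value[s]
            a = policy[s]
            value[s] = sum(env.p(ns, s, a) * (env.r(ns, s, a) + gamma * value[ns])
                           for ns in range(env.n_states))
            delta = max(delta, abs(v - value[s]))
        if delta < theta:
            break
    return value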

The function policy_improvement receives an environment model, the value function for a policy to be improved, and a discount factor.
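A corresponding sketch of greedy policy improvement, under the same assumptions about the environment model:

import numpy as np

def policy_improvement(env, value, gamma):
    # Greedy policy improvement with respect to the given value function.
    policy = np.zeros(env.n_states, dtype=int)
    for s in range(env.n_states):
        q = [sum(env.p(ns, s, a) * (env.r(ns, s, a) + gamma * value[ns])
                 for ns in range(env.n_states))
             for a in range(env.n_actions)]
        policy[s] = int(np.argmax(q))
    return policy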

The function policy_iteration receives an environment model, a discount factor, a tolerance parameter, a maximum number of iterations, and (optionally) the initial policy.
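Policy iteration can then alternate the two routines above until the policy stops changing; this sketch relies on the policy_evaluation and policy_improvement sketches given earlier, and returning both the policy and its value is an assumption about the intended interface:

import numpy as np

def policy_iteration(env, gamma, theta, max_iterations, policy=None):
    # Alternate policy evaluation and greedy improvement until the policy is stable.
    if policy is None:
        policy = np.zeros(env.n_states, dtype=int)
    else:
        policy = np.array(policy, dtype=int)
    value = np.zeros(env.n_states, dtype=float)
    for _ in range(max_iterations):
        value = policy_evaluation(env, policy, gamma, theta, max_iterations)
        improved = policy_improvement(env, value, gamma)
        if np.array_equal(improved, policy):
            break
        policy = improved
    return policy, value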

The function value_iteration receives an environment model, a discount factor, a tolerance parameter, a maximum number of iterations, and (optionally) the initial value function.
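A matching sketch of value iteration, which repeatedly applies the Bellman optimality backup and extracts a greedy policy at the end (again assuming the model interface from Section 1):

import numpy as np

def value_iteration(env, gamma, theta, max_iterations, value=None):
    # Bellman optimality backups until the largest update falls below theta.
    if value is None:
        value = np.zeros(env.n_states, dtype=float)
    else:
        value = np.array(value, dtype=float)
    for _ in range(max_iterations):
        delta = 0.0
        for s in range(env.n_states):
            v = value[s]
            value[s] = max(sum(env.p(ns, s, a) * (env.r(ns, s, a) + gamma * value[ns])
                               for ns in range(env.n_states))
                           for a in range(env.n_actions))
            delta = max(delta, abs(v - value[s]))
        if delta < theta:
            break
    policy = policy_improvement(env, value, gamma)
    return policy, value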



3 Tabular model-free reinforcement learning

Your next task is to implement Sarsa control and Q-learning control. You may follow the interface suggested in Listing 3. We recommend that you use the small frozen lake to test your implementation, since these algorithms may need many episodes to find an optimal policy for the big frozen lake.

The function sarsa receives an environment, a maximum number of episodes, an initial learning rate, a discount factor, an initial exploration factor, and an (optional) seed that controls the pseudorandom number generator. Note that the learning rate and exploration factor decrease linearly as the number of episodes increases (for instance, eta[i] contains the learning rate for episode i).

The function q_learning receives an environment, a maximum number of episodes, an initial learning rate, a discount factor, an initial exploration factor, and an (optional) seed that controls the pseudorandom number generator. Note that the learning rate and exploration factor decrease linearly as the number of episodes increases (for instance, eta[i] contains the learning rate for episode i).
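As a rough guide, the following Sarsa sketch uses np.linspace to build the linearly decaying schedules and an ε-greedy policy that breaks ties randomly. The environment methods reset and step, and the (state, reward, done) return convention, are assumptions, since Listings 1 and 3 are not reproduced here; Q-learning differs only in the bootstrap target, as noted in the comment.

import numpy as np

def sarsa(env, max_episodes, eta, gamma, epsilon, seed=None):
    random_state = np.random.RandomState(seed)
    # Linearly decaying learning rate and exploration factor, one value per episode.
    eta = np.linspace(eta, 0, max_episodes)
    epsilon = np.linspace(epsilon, 0, max_episodes)
    q = np.zeros((env.n_states, env.n_actions))

    def e_greedy(s, eps):
        # Explore with probability eps; otherwise break ties randomly among maximisers.
        if random_state.rand() < eps:
            return random_state.choice(env.n_actions)
        best = np.flatnonzero(q[s] == q[s].max())
        return random_state.choice(best)

    for i in range(max_episodes):
        s = env.reset()
        a = e_greedy(s, epsilon[i])
        done = False
        while not done:
            next_s, r, done = env.step(a)
            next_a = e_greedy(next_s, epsilon[i])
            # Q-learning would instead use q[next_s].max() as the bootstrap target.
            q[s, a] += eta[i] * (r + gamma * q[next_s, next_a] - q[s, a])
            s, a = next_s, next_a
    policy = q.argmax(axis=1)
    value = q.max(axis=1)
    return policy, value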

Important: The ε-greedy policy based on Q should break ties randomly between actions that maximize Q for a given state. This plays a large role in encouraging exploration.
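For instance, the greedy action can be drawn uniformly from all maximisers rather than taken with np.argmax (which always returns the first maximiser); here q, s, and random_state are assumed to be defined as in the sketch above:

best = np.flatnonzero(q[s] == np.max(q[s]))   # indices of all maximising actions
a = random_state.choice(best)                  # uniform choice among them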



4 Non-tabular model-free reinforcement learning

In this task, you will treat the frozen lake environment as if it required linear action-value function approximation. Your task is to implement Sarsa control and Q-learning control using linear function approximation. In the process, you will learn that tabular model-free reinforcement learning is a special case of non-tabular model-free reinforcement learning. You may follow the interface suggested in Listing 4.

The class LinearWrapper implements a wrapper that behaves similarly to an environment that is given to its constructor. However, the methods reset and step return a feature matrix when they would typically return a state s. The a-th row of this feature matrix contains the feature vector φ(s, a) that represents the pair of action and state (s, a). The method encode_state is responsible for representing a state by such a feature matrix. More concretely, each possible pair of state and action is represented by a different vector where all elements except one are zero. Therefore, the feature matrix has |S||A| columns. The method decode_policy receives a parameter vector θ obtained by a non-tabular reinforcement learning algorithm and returns the corresponding greedy policy together with its value function estimate.
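A sketch of the wrapper along the lines described above; the reset and step methods of the wrapped environment and their (state, reward, done) return convention are assumptions:

import numpy as np

class LinearWrapper:
    def __init__(self, env):
        self.env = env
        self.n_actions = env.n_actions
        self.n_states = env.n_states
        self.n_features = env.n_states * env.n_actions

    def encode_state(self, s):
        # One row per action; each row is a one-hot vector of length |S||A|
        # with a single 1 at the position of the pair (s, a).
        features = np.zeros((self.n_actions, self.n_features))
        for a in range(self.n_actions):
            features[a, s * self.n_actions + a] = 1.0
        return features

    def decode_policy(self, theta):
        # Greedy policy and value estimate implied by the parameter vector theta.
        policy = np.zeros(self.n_states, dtype=int)
        value = np.zeros(self.n_states)
        for s in range(self.n_states):
            q = self.encode_state(s).dot(theta)
            policy[s] = int(np.argmax(q))
            value[s] = float(np.max(q))
        return policy, value

    def reset(self):
        return self.encode_state(self.env.reset())

    def step(self, action):
        state, reward, done = self.env.step(action)
        return self.encode_state(state), reward, done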

The function linear_sarsa receives an environment (wrapped by LinearWrapper), a maximum number of episodes, an initial learning rate, a discount factor, an initial exploration factor, and an (optional) seed that controls the pseudorandom number generator. Note that the learning rate and exploration factor decay linearly as the number of episodes grows (for instance, eta[i] contains the learning rate for episode i).

The function linear_q_learning receives an environment (wrapped by LinearWrapper), a maximum number of episodes, an initial learning rate, a discount factor, an initial exploration factor, and an (optional) seed that controls the pseudorandom number generator. Note that the learning rate and exploration factor decay linearly as the number of episodes grows (for instance, eta[i] contains the learning rate for episode i).

The Q-learning control algorithm for linear function approximation is presented in Algorithm 1. Note that this algorithm uses a slightly different convention for naming variables and omits some details for the sake of simplicity (such as learning rate/exploration factor decay).

Algorithm 1 Q-learning control algorithm for linear function approximation
Input: feature vector φ(s, a) for all state-action pairs (s, a), number of episodes N, learning rate α, exploration factor ε, discount factor γ
Output: parameter vector θ

 1:  θ ← 0
 2:  for each i in {1, ..., N} do
 3:      s ← initial state for episode i
 4:      for each action a do
 5:          Q(a) ← Σ_i θ_i φ(s, a)_i
 6:      end for
 7:      while state s is not terminal do
 8:          if with probability 1 − ε then
 9:              a ← argmax_a′ Q(a′)
10:          else
11:              a ← random action
12:          end if
13:          r ← observed reward for action a at state s
14:          s′ ← observed next state for action a at state s
15:          δ ← r − Q(a)
16:          for each action a′ do
17:              Q(a′) ← Σ_i θ_i φ(s′, a′)_i
18:          end for
19:          δ ← δ + γ max_a′ Q(a′)   {Note: δ is the temporal difference}
20:          θ ← θ + α δ φ(s, a)
21:          s ← s′
22:      end while
23:  end for

Important: The ε-greedy policy based on Q should break ties randomly between actions that maximize Q (Algorithm 1, Line 9). This plays a large role in encouraging exploration.
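A possible Python translation of Algorithm 1, including the linearly decaying schedules that the pseudocode omits; the wrapper interface (n_features, n_actions, reset, step) follows the LinearWrapper sketch above and is otherwise an assumption:

import numpy as np

def linear_q_learning(env, max_episodes, eta, gamma, epsilon, seed=None):
    # Q-learning with linear action-value function approximation (Algorithm 1).
    random_state = np.random.RandomState(seed)
    eta = np.linspace(eta, 0, max_episodes)
    epsilon = np.linspace(epsilon, 0, max_episodes)
    theta = np.zeros(env.n_features)

    for i in range(max_episodes):
        features = env.reset()      # feature matrix for the initial state
        q = features.dot(theta)     # Q(a) for every action at this state
        done = False
        while not done:
            if random_state.rand() < epsilon[i]:
                a = random_state.choice(env.n_actions)
            else:
                best = np.flatnonzero(q == q.max())
                a = random_state.choice(best)   # break ties randomly (Line 9)
            next_features, r, done = env.step(a)
            delta = r - q[a]
            next_q = next_features.dot(theta)
            delta += gamma * next_q.max()       # temporal difference
            theta += eta[i] * delta * features[a]
            features, q = next_features, next_q
    return theta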



5 Deep reinforcement learning

The code presented in this section uses the PyTorch library. If you are not familiar with PyTorch, please read the Learn the Basics tutorial.

In this task, you will implement the deep Q-network learning algorithm [Mnih et al., 2015] and treat the frozen lake environment as if it required non-linear action-value function approximation. In the process, you will learn how to train a reinforcement learning agent that receives images as inputs. You may follow the interface suggested in Listing 5.

The class FrozenLakeImageWrapper implements a wrapper that behaves similarly to a frozen lake environment that must be given to its constructor. However, the methods reset and step return an image when they would typically return a state. This image is composed of four channels and is represented by a numpy.array of shape (4, h, w), where h is the number of rows and w is the number of columns of the lake grid. The first channel of this image is a h × w matrix whose elements are all zero except for the element that corresponds to the position of the agent, which has value one. The second channel of this image is a h × w matrix whose elements are all zero except for the element that corresponds to the start tile, which has value one. The third channel of this image is a h × w matrix whose elements are all zero except for the elements that correspond to hole tiles, which have value one. The fourth channel of this image is a h × w matrix whose elements are all zero except for the element that corresponds to the goal tile, which has value one. The method decode_policy receives a neural network obtained by the deep Q-network learning algorithm and returns the corresponding greedy policy together with its value function estimate.
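As an illustration of the channel encoding, a helper along these lines could build the image for a given agent position; the tile characters ('&' start, '#' hole, '$' goal) and the row-major state indexing are assumptions, not part of the specification above:

import numpy as np

def state_image(lake, state):
    # Build the four-channel image described above for a given agent position.
    lake = np.array(lake)
    h, w = lake.shape
    image = np.zeros((4, h, w), dtype=np.float32)
    if state < h * w:                     # the absorbing state has no grid position
        image[0, state // w, state % w] = 1.0
    image[1][lake == '&'] = 1.0           # start tile
    image[2][lake == '#'] = 1.0           # hole tiles
    image[3][lake == '$'] = 1.0           # goal tile
    return image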

Finally, the function deep_q_network_learning combines the previous classes to implement the deep Q-network learning algorithm.
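The text does not prescribe a network architecture, so the following PyTorch module is only an illustrative sketch of a small convolutional Q-network that accepts the (4, h, w) images produced by the wrapper; all layer sizes are assumptions.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, h, w, n_actions, conv_out_channels=4, fc_out_features=8):
        super().__init__()
        # Padding keeps the spatial size at h x w after the convolution.
        self.conv = nn.Conv2d(4, conv_out_channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(conv_out_channels * h * w, fc_out_features)
        self.out = nn.Linear(fc_out_features, n_actions)

    def forward(self, x):
        # x has shape (batch, 4, h, w); the output has one Q-value per action.
        x = torch.relu(self.conv(x))
        x = torch.relu(self.fc(torch.flatten(x, start_dim=1)))
        return self.out(x)

For example, it could be applied to a single image with QNetwork(h=4, w=4, n_actions=4)(torch.from_numpy(image).float().unsqueeze(0)).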



6 Main function

Your final implementation task is to write a program that uses all the algorithms that you have implemented for this assignment. Your main function should behave analogously to the function presented in Listing 6. Using the small frozen lake as a benchmark, find and render optimal policies and values using policy iteration, value iteration, Sarsa control, Q-learning control, linear Sarsa control, linear Q-learning, and deep Q-network learning. For marking purposes, if your main function does not call one of these algorithms, we will assume that it is not implemented correctly.
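A skeleton along these lines could serve as a starting point; since Listing 6 is not reproduced here, the FrozenLake constructor, the render method, the lake layout characters, and every parameter value below are assumptions rather than required settings, and the called functions are the ones you implement in the earlier sections.

def main():
    seed = 0
    # Small frozen lake; '&' start, '.' frozen, '#' hole, '$' goal (assumed convention).
    lake = [['&', '.', '.', '.'],
            ['.', '#', '.', '#'],
            ['.', '.', '.', '#'],
            ['#', '.', '.', '$']]
    env = FrozenLake(lake, slip=0.1, max_steps=16, seed=seed)
    gamma = 0.9

    # Tabular model-based algorithms.
    policy, value = policy_iteration(env, gamma, theta=0.001, max_iterations=128)
    env.render(policy, value)
    policy, value = value_iteration(env, gamma, theta=0.001, max_iterations=128)
    env.render(policy, value)

    # Tabular model-free algorithms.
    policy, value = sarsa(env, max_episodes=4000, eta=0.5, gamma=gamma, epsilon=0.5, seed=seed)
    env.render(policy, value)
    policy, value = q_learning(env, max_episodes=4000, eta=0.5, gamma=gamma, epsilon=0.5, seed=seed)
    env.render(policy, value)

    # Non-tabular model-free algorithms.
    linear_env = LinearWrapper(env)
    for algorithm in (linear_sarsa, linear_q_learning):
        theta = algorithm(linear_env, max_episodes=4000, eta=0.5, gamma=gamma, epsilon=0.5, seed=seed)
        policy, value = linear_env.decode_policy(theta)
        env.render(policy, value)

    # Deep reinforcement learning.
    image_env = FrozenLakeImageWrapper(env)
    dqn = deep_q_network_learning(image_env, max_episodes=4000, learning_rate=0.001,
                                  gamma=gamma, epsilon=0.2, batch_size=32, seed=seed)
    policy, value = image_env.decode_policy(dqn)
    env.render(policy, value)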
