
# COMP90054 AI Planning for Autonomy 2023 - Assignment 1 - Search


You must read fully and carefully the assignment specification and instructions detailed in this file. You are NOT to modify this file in any way.

The aim of this assignment is to get you acquainted with AI search techniques and how to derive heuristics in Pacman, as well as to understand how to model a problem with python.

In COMP90054, we will use ungrading for the coursework component of the subject. Even though we are required to provide a grade out of 100, throughout the assignments, subject staff will not assign grades to anyone. Instead, we will use a combination of techniques that uses each student’s reflection of their own learning to provide grades.

Who is better at assessing how much a student has learnt: the student themselves, or a subject tutor/coordinator? I don’t imagine anyone would disagree that the student is. So why don’t we give students a say? For the coursework component of COMP90054 (assignments 1-3), the model that we employ cedes the power and responsibility for monitoring and assessing progress to the students themselves.

Research shows that grades and rubrics have three reliable effects on students in a class:

1. they tend to think less deeply;
2. they avoid taking risks; and
3. they lose interest in the learning itself.

We want to avoid all three of these effects.

How will a student know if they are doing well? They will receive feedback on assessment throughout the semester. Feedback will be qualitative, with broad ratings (needs further work, good, or outstanding), but these ratings will NOT be used for grading. Our detailed feedback will be focused on your self-evaluation to facilitate and maximise your learning.

#### Collaboration

Assignment 1 can be completed individually or in pairs. We encourage pairs to work together to learn from each other, not simply to split the tasks for efficiency. But we will not monitor this; it is your responsibility. You can submit different solutions; we just want to encourage collaboration.

If you work in a pair, each student must submit an individual short self-evaluation SELFEV.md. In addition, you can either submit the same coding solution (each in your own repo, using a tag) or submit different coding solutions.

We encourage students to derive their own tests and share them with others to help with learning.

Since the purpose of assignments is to help you learn more, marks should not be assigned only on whether your code passes a few test cases but also on how much you have learnt.

Who knows better about whether you have learnt from this assignment other than yourself? We ask you to evaluate your work.

Each submission will contain a short self-reflection on what the student learnt, how they approached the tasks (including writing new tests) and will give the student a chance to argue that, even though they didn’t complete a task, they tried and learnt from it.

Your task contains programming exercises of increasing difficulty. This is where we give students control over how much assessment they want to complete or have the time to complete.

You must build and submit your solution using the sample code we provide you in this repository, which is different from the original UC Berkeley code base.

• Please remember to complete the SELFEV.md file with your individual submission details (so we can identify you when it comes time to submit).

• You should only work on and modify the files search.py and searchAgents.py in developing your solution. Do not change the other Python files in this distribution.

• Your code must run error-free on Python 3.8. Staff will not debug/fix any code. Using a different version will risk your program not running with the Pacman infrastructure or autograder.

• Your code must not have any personal information, like your student number or your name. That info should go in the SELFEV.md file, as per instructions above. If you use an IDE that inserts your name, student number, or username, you should disable that.

• An Assignment 1 FAQ is available on Ed to answer common questions you might have about Assignment 1.

• Getting started on GitHub - the video below explains how to clone, git add, commit and push while developing your solution for this assignment:

#### Setting up the environment

• You can set up your local environment:

• You can install Python 3.8 from the official site, or set up a Conda environment or an environment with PIP+virtualenv.
• You need to install an additional package (func_timeout) using: pip3 install func_timeout
• Alternatively, you can use docker:

#### Practice

To familiarize yourself with basic search algorithms and the Pacman environment, a good starting point is to implement the tasks at https://inst.eecs.berkeley.edu/~cs188/fa18/project1.html, especially the first four; however, there is no requirement to do so.

You should write your implementations only at the locations in the template code indicated by ***YOUR CODE HERE*** in the files search.py and searchAgents.py. Please do not change code at any other location or in any other file.

#### Part 0 (0 mark, but critical)

This is a great way to test that you understand the submission instructions correctly, and how to get feedback from our hidden test-cases as many times as you want. Here are the steps:

• Please tag your solution with test-submission. If you are not familiar with tags, please check the tag hints.
• We are going to run your code on our server. You can check your results after a few minutes. This tests your code for Parts 1, 2 and 3.

#### Part 1 (3 marks)

Implement the Enforced Hill Climbing (EHC) algorithm discussed in lectures, using Manhattan Distance as the heuristic, by inserting your code into the template indicated by comment ***YOUR CODE HERE FOR TASK 1***. You can see the code location following this link: search.py

Note: you don't have to implement Manhattan Distance, as this has already been implemented for you in the codebase. You will need to call the heuristic from your implementation of EHC. You should be able to test the algorithm using the following command:

python pacman.py -l mediumMaze -p SearchAgent -a fn=ehc,heuristic=manhattanHeuristic

Other layouts are available in the layouts directory, and you can easily create your own. When you use the autograder (see the section on checking your submission), it will try to validate your solution by looking for an exact match with your output. The successor list is expected to be visited in the original order given by the API.
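To make the shape of EHC concrete, here is a minimal sketch. It assumes the SearchProblem interface from the codebase (getStartState, isGoalState, getSuccessors returning (successor, action, cost) tuples); the function name and structure are illustrative, not the required template.

```python
from collections import deque

def enforced_hill_climbing(problem, heuristic):
    """Sketch of EHC: from the current state, breadth-first search for
    any state with a strictly better heuristic value, commit to it,
    then restart the BFS from there."""
    state = problem.getStartState()
    plan = []
    best_h = heuristic(state, problem)
    while not problem.isGoalState(state):
        frontier = deque([(state, [])])   # BFS from the current state
        visited = {state}
        improved = False
        while frontier and not improved:
            s, path = frontier.popleft()
            for succ, action, _ in problem.getSuccessors(s):
                if succ in visited:
                    continue
                visited.add(succ)
                h = heuristic(succ, problem)
                if h < best_h or problem.isGoalState(succ):
                    # Strictly better state found: commit and restart
                    state, plan = succ, plan + path + [action]
                    best_h = h
                    improved = True
                    break
                frontier.append((succ, path + [action]))
        if not improved:
            return None  # EHC is incomplete: no improving state reachable
    return plan
```

Note that `visited` is reset on every restart; each BFS only avoids revisiting states within its own search for an improving state.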

#### Part 2 (3 marks)

In this part we will help you prove to yourself that you have all the ingredients to pick up many new search algorithms, tapping into knowledge you acquired in the lectures and tutorials.

Bidirectional A* Enhanced (BAE*) is a search algorithm that searches in both directions, from the initial state to the goal state and from the goal state towards the initial state, and keeps track of solutions found when the two directions meet. In Part 2, for simplicity, we assume there is only one goal state and no food; hence, progression and regression both search in the same state space, one using the transition function forward and the other backward. This algorithm has not been introduced in the lectures, but it relies on ingredients which you already know:

• A*,
• the evaluation function $f(n) = g(n) + h(n)$ used to guide the expansion order in A*, and
• the definition of the transition function $f(s,a)$ to generate a graph implicitly while searching.

BAE* is similar to A*, but:

1. It has two open lists (lines 1-2, Alg. 1) to alternate (line 22) expanding nodes (line 9) from the forward $Open_f$ and backward $Open_b$ lists.
2. While searching, it keeps a lower bound $L$ and an upper bound $U$ to know when to stop the search, i.e. when $L \geq U$ (line 15), the incumbent solution has been proved to be optimal. The lower bound is updated each time a node is selected for expansion (lines 5-9, Alg. 1), keeping track of the average $f(n)$ value in both directions (line 8). Note that if the heuristic is admissible, then the minimum $f(n)$ value in the open lists represents a lower bound on the optimal solution cost.
3. The upper bound is only updated if the current node $n$ exists in the open list of the opposite direction $Open_{\bar{x}}$, which means that the two directions meet through node $n$, and (the cost of the path from the initial state to $n$) + (the cost of the path from $n$ to the initial state of the opposite direction) is smaller than the best solution known so far, i.e. $U$ (line 11). If that's the case, keep the solution and update the upper bound (lines 12-13).
4. The priority used to order nodes in the open lists is slightly different from A*. Given that every time a node is chosen for expansion we know that its $g_x(n)$ value represents the optimal cost to node $n$ in direction $x$, we can characterize and correct the inaccuracy of the heuristic $h_{\bar{x}}$ used in the opposite direction as $d_x(n) = g_x(n) - h_{\bar{x}}(n)$. This value is added to $f_x(n')$ and used to set the priority of each generated node $n'$ (lines 18-20).
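As a small sketch of points 2 and 4, the bound and priority computations reduce to two one-line formulas. The function names below are hypothetical; they only illustrate the arithmetic, not the required code structure.

```python
def lower_bound(min_f_forward, min_f_backward):
    """Line 8 of Alg. 1: with an admissible heuristic, the minimum f
    value in each open list bounds the optimal cost from below, and so
    does their average."""
    return (min_f_forward + min_f_backward) / 2

def child_priority(g_parent, h_opp_parent, g_child, h_child):
    """Lines 18-20 of Alg. 1: when node n is expanded in direction x,
    g_x(n) is optimal, so the error of the opposite-direction heuristic
    at n is d_x(n) = g_x(n) - h_opp(n). Each child n' is queued with
    priority f_x(n') + d_x(n)."""
    d = g_parent - h_opp_parent   # heuristic inaccuracy at n
    f = g_child + h_child         # standard A* evaluation of n'
    return f + d
```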

This algorithm is taken from the paper presented at the Symposium on Combinatorial Search, on the 22nd of July, 2022. Bidirectional search is currently a hot research topic given recent theoretical results and simple practical algorithms such as BAE*. The paper that introduced these results received a prestigious best paper award in a top conference in AI. This context alone should motivate you: you are literally implementing a cutting-edge search algorithm.

• Checking membership of a node in the opposite open list (line 11) can be a computationally expensive operation (worst case linear in the size of the open list, which in turn grows exponentially with the depth of the search). This is especially relevant as this line is executed for every expanded state. Think back to your data structures course, and use an auxiliary data structure to implement the membership test efficiently for this assignment. Otherwise, BAE* will be much slower than A*, even if it expands fewer nodes.
• To understand what the search is doing, make sure that you understand the successor generators in both directions. In Backward search, given an action $a$ and state $s$, you need to generate the state $s'$ from which the application of $a$ resulted in state $s$.
• The function getBackwardsSuccessors already reverses the action for backwards search.
• The function getGoalStates returns a list of all possible goal states.
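One common way to make the membership test cheap is to keep a dictionary in sync with each heap, so "is this state in the opposite open list?" is an average O(1) lookup instead of a linear scan. This is only a sketch of that idea; the variable names and entry layout are assumptions, not part of the template.

```python
import heapq

open_f, open_b = [], []          # heaps of (priority, state, path)
in_open_f, in_open_b = {}, {}    # state -> g-value, mirrors the heaps

def push(heap, lookup, priority, state, g, path):
    heapq.heappush(heap, (priority, state, path))
    lookup[state] = g            # record g for the meeting-cost check

def pop(heap, lookup):
    priority, state, path = heapq.heappop(heap)
    lookup.pop(state, None)      # keep the dict in sync with the heap
    return priority, state, path

# Meeting condition (line 11, Alg. 1), in the forward direction:
# if state in in_open_b: candidate_cost = g_forward + in_open_b[state]
```

If you allow duplicate heap entries for the same state, the dictionary should track the best known g-value and stale heap entries should be skipped on pop.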

Implement the BAE* algorithm discussed above by inserting your code into the template indicated by the comment ***YOUR CODE HERE FOR TASK 2***. You can view the location at this link: search.py#L169. You should be able to test the algorithm using the following command:

python pacman.py -l mediumMaze -p BidirectionalSearchAgent -a fn=bae,heuristic=manhattanHeuristic,backwardsHeuristic=backwardsManhattanHeuristic

Other layouts are available in the layouts directory, and you can easily create your own. The autograder will check for an exact match of the solution and the number of node expansions. The successor list is expected to be visited in the original order given by the API. If your node expansion count is incorrect, try reversing the successor list. An example is given as follows:

succs = problem.getSuccessors(state)
succs.reverse()

#### Part 3 (4 marks)

This part involves solving a more complicated problem. You will model the problem, use the BAE* algorithm from Part 2, and design a heuristic function that can guide the search algorithm.

Just like in Q7 of the Berkeley Pacman framework, you will be required to create an agent that eats all the food (dots) in a maze.

In order to implement this, you should create a new problem called BidirectionalFoodSearchProblem. Some of the variables are listed in the comments and the initialization. You will need to:

1. Design a way to represent the states of the problem.
2. Return the initial state through getStartState function.
3. Have getGoalStates to return a list of all possible goal states.
4. Implement your transition function in getSuccessors, which should return a list of tuples that contains (next state, action, cost).
5. Implement the getBackwardsSuccessors function for backward search.
6. Implement two suitable heuristics, bidirectionalFoodProblemHeuristic and bidirectionalFoodProblemBackwardsHeuristic.

Make sure your heuristic is admissible and consistent, as we don't check for reopening, and the optimality guarantees would be lost because the assumption on the optimality of $g_x(n)$ would be wrong when the errors $d_x(n)$ are computed.

You may choose to implement other helper classes/functions.
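One possible modelling choice, offered only as a sketch, is to represent a state as (pacman position, frozenset of remaining food). A frozenset is hashable, so states can be stored in sets and dictionaries for duplicate detection and for the open-list membership test from Part 2. The function names here are illustrative, not the required template.

```python
def eat_move(state, new_pos):
    """Forward transition sketch: moving onto a cell eats any food there.
    Removing from a frozenset returns a new frozenset, so states stay
    immutable and hashable."""
    _, food = state
    return (new_pos, food - {new_pos})

def is_goal(state):
    """Goal test sketch: all food has been eaten."""
    _, food = state
    return not food
```

For the backward direction, the transition must be the exact reverse: moving away from a cell may need to add a food dot back, which is one reason getBackwardsSuccessors is not symmetric with getSuccessors in this part.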

You should insert your code into the template indicated by the comments ***YOUR CODE HERE FOR TASK 3***. You can view the locations at these links: 9 tags in searchAgents.py.

• It is important to make sure the transition from getBackwardsSuccessors is the correct reverse of getSuccessors. Forward and backward are not equivalent here, as they were in Part 2.
• As there is not a single way to model this problem, we will not check node expansion numbers. However, please make sure that your code's runtime is not too long. The autograder scripts allow a time budget for each test case of 3 times the runtime of our staff's code with the blind heuristic that assigns each state a value of 0.
• Although a heuristic function is not compulsory, a good heuristic function will improve your code's running time. We recommend trying different heuristics to compare running times and node expansion numbers. Start with the easiest one.
• Running time for the staff's code with the zero heuristic is 66 seconds on trickySearch and 30 seconds on mediumCorners. This gives you a good guide for the expected performance of your solution.
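As one simple starting point, assuming a state of the form (position, set of remaining food), the Manhattan distance to the farthest remaining dot is a sketch of an admissible heuristic: that dot must still be reached, so the value never overestimates the true remaining cost. The function name and state layout are assumptions for illustration.

```python
def max_manhattan_food_heuristic(state):
    """Sketch: Manhattan distance to the farthest remaining food dot.
    Admissible, since at least that much movement is still required;
    returns 0 when no food remains (a goal state)."""
    (x, y), food = state
    return max((abs(x - fx) + abs(y - fy) for fx, fy in food), default=0)
```

More informed alternatives (e.g. based on distances between dots) may expand fewer nodes at a higher per-node cost, which is exactly the trade-off discussed below.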

You should be able to test your program by running the following command (in one line):

python pacman.py -l smallCorners -p BidirectionalFoodSearchAgent -a fn=bae,heuristic=bidirectionalFoodProblemHeuristic,backwardsHeuristic=bidirectionalFoodProblemBackwardsHeuristic

The autograder checks for an optimal solution length within the time budget for each test case. In addition, please make sure your heuristic is admissible and consistent; otherwise you should not assign yourself full marks for this part, as the optimal plan may not be found.

You will see first-hand the balance between 1) how informed you make your heuristic (it should expand fewer nodes in general), and 2) the overall runtime. As you can see, sometimes it may be preferable to have a cheaper, less informed heuristic, even if you end up expanding more nodes.