
COMP2001 (AIM) Project 2023: SOLVING THE MINIMUM SET COVER PROBLEM USING A SELECTION HYPER-HEURISTIC



1. Description of the problem

The minimum set cover problem (MIN-SET-COVER) is a well-known and well-studied NP-hard combinatorial optimisation problem.

Problem description: Given a set X of n elements, let S1, S2, …, Sm be subsets of X, i.e., each Si ⊆ X, and assume that every element of X appears in some subset, i.e. S1 ∪ S2 ∪ … ∪ Sm = X. A set cover of X with S1, …, Sm is a set M ⊆ {1, 2, …, m} such that ⋃_{i∈M} Si = X. The solution of MIN-SET-COVER for a problem instance is a set cover M of minimum cardinality (size), that is, min |M|. So, formally, the MIN-SET-COVER problem can be described as

minimise |M|                                   (1)
such that ⋃_{i∈M} Si = S1 ∪ S2 ∪ … ∪ Sm        (2)

where X consists of n elements, Si ⊆ X, and M ⊆ {1, 2, …, m}.

Example: Assume that you have a set of software developers: Leia, Rey, Luke, and Yoda. Each programmer knows a different set of programming languages. Leia knows C++ and Java (S1 = {2, 3}). Rey knows Java and Perl (S2 = {3, 4}). Luke knows C, Java, and Python (S3 = {1, 3, 5}). Yoda knows C++ and Perl (S4 = {2, 4}). Your job is to hire a team of programmers. You are given two requirements: (i) for each language (i.e., C, C++, Java, Perl, and Python) there has to be at least one person on the team who knows it, and (ii) your team should be as small as possible (maybe your team is running on a tight budget). The elements in X are the 5 programming languages C, C++, Java, Perl, and Python (X = {1, 2, 3, 4, 5}, where n = 5). Each programmer represents a subset of X (that is, Si ⊆ X, and there are m = 4 such subsets). Your job is to find the minimum number of programmers (i.e., the minimum number of subsets) such that every language is covered. For this problem instance, Leia, Rey, and Luke (<1110>) form a set cover of size 3. However, Luke and Yoda (<0011>) form a set cover of size 2, which is optimal, i.e. the solution to this MIN-SET-COVER problem instance.

Representing a MIN-SET-COVER Instance

A bipartite graph is a graph whose vertices are decomposed into two disjoint sets such that no two vertices within the same set are adjacent. A set cover problem can be represented as a (directed) bipartite graph, with the subsets Si on one side, the base elements (in X) on the other side, and edges going only from subsets to base elements. The following bipartite graph represents the problem instance in the example above.

[Figure: bipartite graph for the example instance – subsets S1–S4 (programmers Leia, Rey, Luke, Yoda) on one side, elements 1–5 (languages C, C++, Java, Perl, Python) on the other, with an edge from each Si to every element it contains.]
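To make the feasibility condition of Equation (2) concrete, here is a minimal Java sketch (illustrative only, not part of the specification; the class and method names are made up) that checks whether a binary solution such as <0011> covers X for the example instance above.

```java
public class CoverCheck {

    // Returns true if the selected subsets (solution[i] == true) cover all elements 1..n.
    static boolean isFeasible(boolean[] solution, int[][] subsets, int n) {
        boolean[] covered = new boolean[n + 1];              // index 0 unused; elements are 1..n
        for (int i = 0; i < subsets.length; i++) {
            if (!solution[i]) continue;                      // subset S(i+1) not selected
            for (int element : subsets[i]) {
                covered[element] = true;
            }
        }
        for (int e = 1; e <= n; e++) {
            if (!covered[e]) return false;                   // some element of X is not covered
        }
        return true;
    }

    public static void main(String[] args) {
        // The example instance: X = {1,...,5}, S1 = {2,3}, S2 = {3,4}, S3 = {1,3,5}, S4 = {2,4}
        int[][] subsets = { {2, 3}, {3, 4}, {1, 3, 5}, {2, 4} };
        boolean[] lukeAndYoda = { false, false, true, true };    // <0011>
        boolean[] leiaOnly    = { true, false, false, false };   // <1000>
        System.out.println(isFeasible(lukeAndYoda, subsets, 5)); // true: feasible, |M| = 2
        System.out.println(isFeasible(leiaOnly, subsets, 5));    // false: infeasible
    }
}
```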

Greedy Algorithms

There are different greedy heuristic methods for solving the problem. However, they often do not provide the optimal solution to a given instance. Two simple constructive heuristics are described below as examples of how they operate; you can design more elaborate greedy heuristics.

Greedy Heuristic #A Example: Include the next subset Si as long as the union of the selected subsets is contained in X, until all elements of X are covered. For example, given X = {1, 2, 3, 4, 5} and <S1, S2, S3, S4>, greedy algorithm #A will include Leia: S1 = {2, 3} first, then Rey: S2 = {3, 4}, followed by Luke: S3 = {1, 3, 5}, covering all elements of X and resulting in the solution <1110> with an objective value of 3, which is the size of M. However, the optimal solution consists of 2 subsets: Luke and Yoda.

Greedy Heuristic #B Example: Sort the Si in decreasing size (breaking ties randomly) and form a list of subsets. Pick from the list the next subset Si that covers the most elements of X and add it to the solution, remove Si from the list and its elements from X, and go to the next subset. Repeat this process until all elements of X are covered. For example, given X = {1, 2, 3, 4, 5} and the subsets sorted in decreasing size <S3:3, S1:2, S4:2, S2:2>, greedy algorithm #B will include S3 first in the solution, M = {3}; after removing that subset from the list and its elements from X, we have X = {2, 4} and <S1, S4, S2>, where all remaining subsets have the same size of 2, so we need another pass over them. S1 covers only one element of X; moving on to the next subset, S4 covers the remaining two elements, so S4 is included in the solution, M = {3, 4}, and the algorithm stops as all elements of X are covered, resulting in the solution <0011> with an objective value of |M| = 2. This algorithm finds the optimal solution to this problem instance. (A code sketch of this heuristic is given after the task description below.)

2. Task

Design and implement a learning hyper-heuristic optimisation approach, combining a learning heuristic selection method with an elaborate move acceptance method (e.g., simulated annealing, great deluge, late acceptance) and embedding multiple operators of each type (see Section 2.6), from scratch for solving the Minimum Set Cover Problem using Java. You should not make use of any APIs that are not included in the standard Java libraries. You are fully free to choose and implement the solution approach and its algorithmic components as you see fit. A hyper-heuristic using random choice for heuristic selection (indicating no learning) and/or a simple move acceptance method will be accepted, but will be penalised. All projects will be assessed out of 100.
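As a concrete illustration of Greedy Heuristic #B described above, here is a minimal Java sketch (illustrative only; you are free to structure your own implementation differently). Rather than maintaining an explicitly sorted list, it simply scans at each step for the subset with the largest coverage gain, which produces the same behaviour; ties are broken by index order here, whereas the brief allows random tie-breaking.

```java
import java.util.HashSet;
import java.util.Set;

public class GreedyCoverB {

    // Greedy Heuristic #B: repeatedly pick the subset covering the most uncovered elements.
    // subsets[i] holds the elements of S(i+1); the elements of X are 1..n.
    static boolean[] greedyCover(int[][] subsets, int n) {
        boolean[] solution = new boolean[subsets.length];
        Set<Integer> uncovered = new HashSet<>();
        for (int e = 1; e <= n; e++) uncovered.add(e);

        while (!uncovered.isEmpty()) {
            int best = -1, bestGain = 0;
            for (int i = 0; i < subsets.length; i++) {
                if (solution[i]) continue;                 // already in the cover
                int gain = 0;
                for (int element : subsets[i]) {
                    if (uncovered.contains(element)) gain++;
                }
                if (gain > bestGain) {                     // ties kept in index order here
                    bestGain = gain;
                    best = i;
                }
            }
            if (best == -1) break;                         // no subset adds coverage (should not
                                                           // happen if the instance covers X)
            solution[best] = true;
            for (int element : subsets[best]) uncovered.remove(element);
        }
        return solution;
    }

    public static void main(String[] args) {
        int[][] subsets = { {2, 3}, {3, 4}, {1, 3, 5}, {2, 4} };  // example instance, n = 5
        boolean[] cover = greedyCover(subsets, 5);
        StringBuilder bits = new StringBuilder();
        for (boolean selected : cover) bits.append(selected ? '1' : '0');
        System.out.println(bits);                          // prints 0011 for this instance
    }
}
```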

2.1. Initial test instances

The following MIN-SET-COVER benchmark instances are provided for testing purposes. Please check Moodle for download.

Instance ID:
d1_50_500
d2_50_500
d3_511_210
d4_2047_495

2.2. Instance reader – Input File Format for the Problem Instances

You should implement instance reader code to read in a given problem instance. All problem instances will be provided in the same format.

File name: instanceName_m_n.txt
instanceName: name of the instance of the MIN-SET-COVER problem
m: number of subsets
n: number of elements in X

Example:
m n
s1_size            // size of subset #1
s1_1 s1_2 s1_3 …   // elements in subset #1
s1_16 s1_17 …
:
:
si_size            // size of subset #i
si_1 si_2 si_3 …   // elements in subset #i
:
:
sm_size            // size of subset #m
sm_1 sm_2 sm_3 …   // elements in subset #m
sm_16 sm_17 …

Warning: Please do not rely on the name of the file for extracting the number of subsets and elements in X for a given instance. You should use the content of the file instead. The filename identifies a problem instance.
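Since a subset's element list may wrap over several lines, a whitespace-tolerant reader is the simplest approach. Below is a minimal sketch using only the standard Java library (java.util.Scanner treats spaces and newlines alike); the class name and return type are illustrative, not prescribed, and it assumes the // annotations shown in the format description above are explanatory and do not appear in the actual instance files.

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.Scanner;

public class InstanceReader {

    // Reads an instance in the format of Section 2.2: first m and n,
    // then, for each subset, its size followed by its elements.
    static int[][] readInstance(Path file) throws IOException {
        try (Scanner in = new Scanner(file)) {
            int m = in.nextInt();                  // number of subsets
            int n = in.nextInt();                  // number of elements in X (useful for checks)
            int[][] subsets = new int[m][];
            for (int i = 0; i < m; i++) {
                int size = in.nextInt();           // size of subset #(i+1)
                subsets[i] = new int[size];
                for (int j = 0; j < size; j++) {
                    subsets[i][j] = in.nextInt();  // an element of subset #(i+1), in 1..n
                }
            }
            return subsets;
        }
    }

    public static void main(String[] args) throws IOException {
        int[][] subsets = readInstance(Path.of("d1_50_500.txt"));  // instance from Section 2.1
        System.out.println("Read " + subsets.length + " subsets");
    }
}
```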

2.3. Output

Your code should output the following information and files when executed:

• Display, via the standard output for each trial, the best solution as a binary string preceded by its objective value, separated by a newline. E.g., assuming a problem instance with n=30, m=5 and running the algorithm for 2 trials, the output could be as follows (assuming feasible solutions):

Trial#1:   // this indicates the ID of the trial – that is, the result from the first trial
2          // this is the objective value found after running the algorithm till termination
10010      // this is the binary solution, indicating that the first and fourth subsets are selected
Trial#2:
3
11010

• You should define a parameter, maxNumberOf, indicating how many pairs of objective values you will record when you run your algorithm for multiple trials. If you are using a single-point-based search method, record the best and current objective values at each step and print them into a text file for the first maxNumberOf iterations of each trial. E.g., the file instanceName_m_n_trialID_output.txt contains the following:

best-1 current-1
best-2 current-2
:
:
best-k current-k   // best and current objective values at the kth step
:
:
best-maxNoOfEntries current-maxNoOfEntries

2.4. Representation of Candidate Solutions

Use any suitable encoding scheme in your implementation. Normally, a binary array/string is used to represent candidate solutions, assuming that perturbative heuristics/operators will be implemented. For example, in a binary string <x1, …, xi, …, xm>, xi ∈ {0, 1} indicates whether the subset Si is included (1) or not (0) in the solution.

2.5. Solution Initialisation

You can randomly initialise a solution or use a constructive heuristic of your choice (e.g., a greedy heuristic described above).

2.6. Heuristics/Operators

An appropriate set of heuristics/operators should be implemented, depending on the choice of your solution approach, including mutation-based operators, both perturbative (e.g., bit flip) and ruin-and-recreate methods, hill climbing (e.g., steepest/best gradient, next/first gradient, Davis's bit hill climbing), and crossover (e.g., one-point crossover, uniform crossover). All mutation operators should be parameterised using intensity of mutation (IOM), and the hill climbing operators using depth of search (DOS). There is no need to parameterise crossover operators. The parameter settings for standard operators are discussed below. If you design a non-standard operator, it is up to you to decide how the parameter setting is used to control it.

2.6.1. Intensity of Mutation (IOM)

The intensity of mutation setting should be used to control the number of times that a mutation (e.g. bit flip) is performed, using the following mapping:

1. 0.0 ≤ IOM < 0.2 ⇒ times = 1
2. 0.2 ≤ IOM < 0.4 ⇒ times = 2
3. 0.4 ≤ IOM < 0.6 ⇒ times = 3
4. 0.6 ≤ IOM < 0.8 ⇒ times = 4
5. 0.8 ≤ IOM < 1.0 ⇒ times = 5
6. IOM = 1.0 ⇒ times = 6

Example: If you implement random bitFlip as the basic mutation operator, which chooses a bit randomly in a given candidate solution and flips its value, then, assuming IOM = 0.25, bitFlip should be invoked 2 times, successively.
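Here is a minimal sketch of the IOM-to-times mapping and a random bit-flip mutation parameterised by it, assuming the binary-array representation of Section 2.4 (class and method names are illustrative, not prescribed):

```java
import java.util.Random;

public class MutationOperators {

    private final Random rng = new Random();

    // Maps intensity of mutation (IOM) in [0.0, 1.0] to the number of times
    // the basic mutation is applied, following the mapping in Section 2.6.1.
    static int iomToTimes(double iom) {
        if (iom < 0.2) return 1;
        if (iom < 0.4) return 2;
        if (iom < 0.6) return 3;
        if (iom < 0.8) return 4;
        if (iom < 1.0) return 5;
        return 6;                                   // IOM == 1.0
    }

    // Random bit flip applied 'times' times in succession, on a copy of the solution.
    boolean[] randomBitFlip(boolean[] solution, double iom) {
        boolean[] mutated = solution.clone();
        int times = iomToTimes(iom);
        for (int t = 0; t < times; t++) {
            int bit = rng.nextInt(mutated.length);  // pick a random subset index
            mutated[bit] = !mutated[bit];           // include it if excluded, or vice versa
        }
        return mutated;
    }

    public static void main(String[] args) {
        System.out.println(iomToTimes(0.25));       // prints 2, as in the example above
        System.out.println(iomToTimes(1.0));        // prints 6
    }
}
```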

2.6.2. Depth of Search (DOS)

The depth of search setting should be used to control how many times a complete pass over a solution, or a move considered for an accept/reject decision, should be repeated in a hill climbing operator, using the following mapping:

1. 0.0 ≤ DOS < 0.2 ⇒ times = 1
2. 0.2 ≤ DOS < 0.4 ⇒ times = 2
3. 0.4 ≤ DOS < 0.6 ⇒ times = 3
4. 0.6 ≤ DOS < 0.8 ⇒ times = 4
5. 0.8 ≤ DOS < 1.0 ⇒ times = 5
6. DOS = 1.0 ⇒ times = 6

Random mutation hill climbing: a move considered for an accept/reject decision should be repeated (times × n) times.

Steepest/best gradient, next/first gradient, Davis's bit hill climbing: times represents the number of complete passes over a solution. If there is no improvement in a pass, the algorithm should be terminated. (A code sketch of a DOS-controlled hill climber is given at the end of this document.)

2.7. Objective Function and Delta Evaluation

You can use Equation (1) as part of the objective function in your approach for evaluating a candidate solution, provided you can ensure that any newly generated solution is feasible, that is, that Equation (2) is satisfied. For example, given m = 4 and n = 5 (that is, X = {1, 2, 3, 4, 5}) with the associated subsets S1 = {2, 3}, S2 = {3, 4}, S3 = {1, 3, 5}, S4 = {2, 4}, the solutions <1000> and <1101> are infeasible, since S1 = {2, 3} and S1 ∪ S2 ∪ S4 = {2, 3, 4} do not cover X. The objective values of 1 and 3, respectively, for those solutions are not valid, as Equation (2) is violated; hence Equation (1) cannot be used to assess any infeasible solution. However, <0011> is a feasible solution with an objective value of 2, since S3 ∪ S4 = {1, 2, 3, 4, 5}.

If you plan to allow infeasible solutions during the search process in your approach, then design a modified minimising objective function which can evaluate infeasible solutions, producing, perhaps, a value which is worse than that of the worst possible feasible solution (e.g., the solution including all subsets) and taking into account the degree of infeasibility of a solution. So, <1101> could have a "better" objective value than <1000> under your minimising objective function, since it covers more elements of X.

Wherever possible, delta evaluation should be implemented to make the objective value calculations faster after applying a heuristic/operator. Delta evaluation exploits the fact that including an excluded subset (e.g., a 0 becoming 1) increases the objective value by 1, and excluding an included subset (e.g., a 1 becoming 0) decreases the objective value by 1, assuming that the current and resultant solutions are feasible.
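One common way to support both delta evaluation and a fast feasibility check after a bit flip is to maintain, for each element of X, a count of how many selected subsets cover it. This counter idea is a suggestion beyond what the brief specifies, and the names below are illustrative; a flip then updates the objective by ±1 and the counters in time proportional to |Si|, rather than re-scanning the whole solution.

```java
public class DeltaEvaluation {

    private final int[][] subsets;     // subsets[i] holds the elements of S(i+1)
    private final boolean[] solution;  // solution[i] == true iff S(i+1) is selected
    private final int[] coverCount;    // coverCount[e] = number of selected subsets containing e
    private int objective;             // |M|, the number of selected subsets
    private int uncoveredElements;     // how many elements of X have coverCount == 0

    DeltaEvaluation(int[][] subsets, int n) {
        this.subsets = subsets;
        this.solution = new boolean[subsets.length];   // start from the empty selection
        this.coverCount = new int[n + 1];              // elements are 1..n
        this.objective = 0;
        this.uncoveredElements = n;
    }

    // Flips bit i and updates the objective, coverage counters and feasibility incrementally
    // (the delta evaluation of Section 2.7), instead of re-evaluating the whole solution.
    void flip(int i) {
        int delta = solution[i] ? -1 : +1;
        solution[i] = !solution[i];
        objective += delta;                            // |M| changes by exactly 1
        for (int element : subsets[i]) {
            coverCount[element] += delta;
            if (delta == +1 && coverCount[element] == 1) uncoveredElements--;  // newly covered
            if (delta == -1 && coverCount[element] == 0) uncoveredElements++;  // now uncovered
        }
    }

    boolean isFeasible() { return uncoveredElements == 0; }
    int getObjective()   { return objective; }

    public static void main(String[] args) {
        int[][] subsets = { {2, 3}, {3, 4}, {1, 3, 5}, {2, 4} };   // example instance, n = 5
        DeltaEvaluation eval = new DeltaEvaluation(subsets, 5);
        eval.flip(2);  // include S3
        eval.flip(3);  // include S4 -> <0011>
        System.out.println(eval.getObjective() + " feasible=" + eval.isFeasible()); // 2 feasible=true
    }
}
```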

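Relatedly, for Section 2.6.2, below is a minimal sketch of a next/first-gradient (first-improvement) hill climber whose number of passes is controlled by DOS. The acceptance rule (keep a flip only if the neighbour is feasible and strictly better) and the full feasibility re-check are illustrative simplifications, not requirements of the brief; in practice you would combine such a climber with delta evaluation as sketched above.

```java
public class NextGradientHillClimber {

    // Maps depth of search (DOS) to the number of complete passes (Section 2.6.2 mapping).
    static int dosToTimes(double dos) {
        if (dos < 0.2) return 1;
        if (dos < 0.4) return 2;
        if (dos < 0.6) return 3;
        if (dos < 0.8) return 4;
        if (dos < 1.0) return 5;
        return 6;                                      // DOS == 1.0
    }

    static int objective(boolean[] solution) {         // |M| = number of selected subsets
        int count = 0;
        for (boolean selected : solution) if (selected) count++;
        return count;
    }

    static boolean isFeasible(boolean[] solution, int[][] subsets, int n) {
        boolean[] covered = new boolean[n + 1];
        for (int i = 0; i < subsets.length; i++)
            if (solution[i]) for (int e : subsets[i]) covered[e] = true;
        for (int e = 1; e <= n; e++) if (!covered[e]) return false;
        return true;
    }

    // First-improvement hill climbing: scan the bits left to right, flip each one and keep the
    // flip only if the neighbour is feasible and strictly better; perform up to 'times' complete
    // passes, stopping early when a pass yields no improvement.
    static void climb(boolean[] solution, int[][] subsets, int n, double dos) {
        int times = dosToTimes(dos);
        for (int pass = 0; pass < times; pass++) {
            boolean improved = false;
            for (int i = 0; i < solution.length; i++) {
                int before = objective(solution);
                solution[i] = !solution[i];
                if (!isFeasible(solution, subsets, n) || objective(solution) >= before) {
                    solution[i] = !solution[i];        // reject: undo the flip
                } else {
                    improved = true;                   // accept the improving, feasible flip
                }
            }
            if (!improved) break;                      // no improvement in this pass: terminate
        }
    }

    public static void main(String[] args) {
        int[][] subsets = { {2, 3}, {3, 4}, {1, 3, 5}, {2, 4} };  // example instance, n = 5
        boolean[] solution = { true, true, true, true };          // start from <1111>
        climb(solution, subsets, 5, 1.0);
        StringBuilder bits = new StringBuilder();
        for (boolean selected : solution) bits.append(selected ? '1' : '0');
        System.out.println(bits + " |M|=" + objective(solution)); // 0011 |M|=2
    }
}
```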