MSCI 222: OPTIMISATION GROUP COURSEWORK EXERCISE
You should describe every change in code or new algorithm in detail. You can describe the changes in detail in your reports and add comments to the code that you are submitting with your report. Furthermore, you can also use the example instances to describe each step of the new algorithms you developed in detail.
Discuss the experimentation process by presenting the results from several runs on different data sets with various parameters (for Questions A4 and B). Describe in detail what experiments you conducted and how you made sure that they improve the performance of your algorithms on different data sets.
In both Parts A and B, up to 15% of your mark will be awarded by evaluating the performance of your algorithms on different instances (some will be shared in Moodle). The feasible solutions produced by your code should use the resources as much as possible. In other words, it should only be possible to add an item to a knapsack solution produced by your code after first removing one of the items already in the knapsack.
A set of input files is provided in Moodle. Each value is separated with commas and new lines. In the first line of the (0-1) Knapsack Problem input files, the number of items and the weight limit are provided. Then, every following line contains the ID of the item (items are numbered with consecutive integers starting from zero), the profit of the item and the weight of the item. The Multidimensional Knapsack Problem input files follow the same layout, except that every item line contains the ID of the item, the profit of the item and the weights of the item for each dimension.
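The layout described above can be parsed with a few lines of code. The sketch below is illustrative only; the function name is an assumption and is not part of the provided Moodle code:

```python
# Illustrative parser for the input layout described above. The
# function name is an assumption; the sketch stores each item's weight
# fields as a list so multidimensional files parse the same way.

def parse_knapsack_input(text):
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    # First line: number of items and the weight limit.
    n_items, capacity = (int(x) for x in lines[0].split(","))
    items = []
    for ln in lines[1:]:
        fields = [int(x) for x in ln.split(",")]
        # Each item line: ID, profit, then one (or more) weight fields.
        items.append((fields[0], fields[1], fields[2:]))
    assert len(items) == n_items
    return capacity, items

sample = "3,10\n0,60,5\n1,40,4\n2,30,3"
capacity, items = parse_knapsack_input(sample)
```

For a multidimensional instance, each tuple's weight list would simply contain one entry per dimension.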
A set of example output files is provided in Moodle. The first line of each output file contains the profit and the weight (or the weights for each dimension in the multidimensional case) of the solution. The next line shows the IDs of the items that are selected for the knapsack.
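A minimal sketch of emitting that output layout, assuming the same comma delimiter as the input files (the function name is hypothetical):

```python
# Sketch of writing a solution in the output layout described above,
# assuming the same comma delimiter as the input files.

def format_solution(profit, weights, selected_ids):
    # First line: profit followed by the weight(s) of the solution.
    header = ",".join(str(v) for v in [profit] + list(weights))
    # Second line: the IDs of the selected items.
    ids = ",".join(str(i) for i in selected_ids)
    return header + "\n" + ids
```

For the multidimensional case, `weights` would hold one total per dimension.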
The (0-1) Knapsack Problem is an optimisation problem. The knapsack problem takes a set of items with their profits and weights, and a weight limit for the knapsack, as parameters. The problem aims to find the set of items that maximises the sum of the profits of the selected items while satisfying the constraint that the sum of the weights of the selected items does not exceed the weight limit. Code for a heuristic approach that uses one constructive and two local search algorithms has already been uploaded to Moodle (check the folder that belongs to your computer workshop group).
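As an illustration of a constructive heuristic for this problem (not necessarily the one in the Moodle code), a common approach is to add items greedily in decreasing profit-to-weight ratio:

```python
# A common constructive heuristic for the 0-1 knapsack (illustrative;
# not necessarily the algorithm in the Moodle code): add items in
# decreasing profit-to-weight ratio while the weight limit allows.

def greedy_construct(items, capacity):
    # items: list of (item_id, profit, weight) tuples
    order = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    selected, total_weight = [], 0
    for item_id, profit, weight in order:
        if total_weight + weight <= capacity:
            selected.append(item_id)
            total_weight += weight
    return selected

solution = greedy_construct([(0, 60, 5), (1, 40, 4), (2, 30, 3)], 10)
```

Greedy construction is fast but not optimal in general, which is why the coursework layers local search on top of it.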
Question A1 (20 marks) One of the heuristic approaches used to solve knapsack problems is the steepest ascent method. This method starts with a constructive algorithm and then runs one or more local search heuristics consecutively to improve the current solution. However, the code uploaded to Moodle does not do that. Check the code and correct it. Describe how you achieved this in your own words, in detail.
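A generic sketch of the steepest ascent control flow described above (all helper names are hypothetical, and the toy local search shown simply tries to add a fitting item):

```python
# Generic steepest ascent sketch for the 0-1 knapsack.
# All names are hypothetical; items are (id, profit, weight) tuples.

def profit_of(selected, items):
    profits = {i: p for i, p, w in items}
    return sum(profits[i] for i in selected)

def weight_of(selected, items):
    weights = {i: w for i, p, w in items}
    return sum(weights[i] for i in selected)

def add_move(selected, items, capacity):
    # Toy local search: try to add one unselected item that still fits.
    for i, p, w in items:
        if i not in selected and weight_of(selected, items) + w <= capacity:
            return selected + [i]
    return selected

def steepest_ascent(construct, local_searches, items, capacity):
    # Build a starting solution, then keep applying the local searches,
    # accepting only improvements, until no search improves the solution.
    solution = construct(items, capacity)
    improved = True
    while improved:
        improved = False
        for search in local_searches:
            candidate = search(solution, items, capacity)
            if profit_of(candidate, items) > profit_of(solution, items):
                solution, improved = candidate, True
    return solution

items = [(0, 60, 5), (1, 40, 4), (2, 30, 3)]
best = steepest_ascent(lambda it, cap: [], [add_move], items, 10)
```

The key structural point is the loop that re-runs the local searches after every accepted improvement and only stops when a full pass yields no better solution.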
Question A2 (20 marks) In the code uploaded to Moodle, there are one constructive and two local search algorithms. Develop one more constructive algorithm and one more local search algorithm. In your code, call the two constructive algorithms at the beginning of the heuristic and continue with the solution that gives the better result. Then, call your local search algorithms one after the other in every iteration. Describe your new constructive and local search algorithms in your own words.
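The requested control flow, two constructive starts followed by the local searches called back to back, can be sketched generically (all names are hypothetical):

```python
# Sketch of the requested heuristic skeleton: take the better of two
# constructive solutions, then call the local searches one after the
# other in every iteration. All names are hypothetical.

def profit_of(selected, items):
    profits = {i: p for i, p, w in items}
    return sum(profits[i] for i in selected)

def heuristic(constructives, local_searches, items, capacity, iterations=50):
    starts = [c(items, capacity) for c in constructives]
    best = max(starts, key=lambda s: profit_of(s, items))  # better start
    for _ in range(iterations):
        for search in local_searches:  # searches called back to back
            candidate = search(best, items, capacity)
            if profit_of(candidate, items) > profit_of(best, items):
                best = candidate
    return best
```

Your actual constructive and local search functions would be passed in where the `constructives` and `local_searches` lists appear.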
Question A3 (15 marks) When there are multiple local search algorithms developed for the same problem, one of the approaches that works quite well is Variable Neighbourhood Search (VNS). In variable neighbourhood search, there is an internal process that chooses better-performing neighbourhoods with a higher probability. However, it is also important to make sure that none of the local search algorithms' selection probabilities becomes zero.
Your heuristic should start with the same probability for the three local search algorithms. Then, every time an algorithm improves the best solution, the heuristic should increase that algorithm's probability. If the probability of the algorithm that found a better solution is p, increase the probability by 0.1×(1−p) and decrease the probabilities of the other methods by the same amount in total, so that the selection probabilities of the algorithms sum to 1.
Here is an example showing how you should update your probabilities: If you have three algorithms, A1, A2 and A3, they will start with the same probability, p1 = p2 = p3 = 1/3 ≈ 0.333, at the beginning of your heuristic. Then you should generate a random number and check which algorithm's range it falls into. If A2 improves the best solution, i.e. finds a solution better than the best solution known before the call of A2, update the probability of A2 by 0.1×(1−0.333) ≈ 0.067. This should make p2 = 0.333 + 0.067 = 0.4, p1 = 0.333 − 0.067/2 = 0.3 and p3 = 0.333 − 0.067/2 = 0.3. Continue updating the selection probabilities of your local search algorithms and choosing an algorithm with a random number until the end of the algorithm.
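The update and roulette-wheel selection rules above can be sketched as follows (function names are illustrative):

```python
# Sketch of the probability update and roulette-wheel selection
# described above. Function names are illustrative.

def update_probabilities(probs, winner, rate=0.1):
    # Increase the winner's probability by rate * (1 - p) and take the
    # same amount, split equally, from the other algorithms, so the
    # probabilities still sum to 1 and never jump straight to zero.
    bonus = rate * (1.0 - probs[winner])
    share = bonus / (len(probs) - 1)
    new = [p - share for p in probs]
    new[winner] = probs[winner] + bonus
    return new

def pick_algorithm(probs, r):
    # Map a uniform random number r in [0, 1) to an algorithm index.
    cumulative = 0.0
    for idx, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return idx
    return len(probs) - 1

p = update_probabilities([1/3, 1/3, 1/3], winner=1)
```

After this update `p` is `[0.3, 0.4, 0.3]`, matching the worked example; in your heuristic you would feed `pick_algorithm` a fresh `random.random()` draw at every iteration.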
Question A4 (10 marks) In the steepest ascent algorithm, the current solution is updated only if it is improved by the next neighbourhood move. However, this may result in getting stuck in a local optimum. We may prevent this by using an algorithm that moves to inferior solutions with some probability, in a structured way. In this question, you should do this by using the simulated annealing metaheuristic.
You should improve your code from Question A3 by applying simulated annealing to your algorithm. You should also conduct a few experiments with different initial temperatures (T0) and rates of cooling (α), and use the set of parameters that you observe works best on the example instances provided to you. A good initial temperature should allow the algorithm to move to inferior solutions frequently at the beginning of the run, but this should happen very rarely towards the end. The rate of cooling should be strictly between 0 and 1.
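The standard acceptance rule and a geometric cooling schedule can be sketched as follows (here `delta` is the profit change of the candidate move; geometric cooling is one common choice, not mandated by the brief):

```python
import math
import random

def accept(delta, temperature, rng=random):
    # Simulated annealing acceptance rule: always accept an improving
    # move (delta >= 0); accept a worsening move with probability
    # exp(delta / temperature), which shrinks as the temperature cools.
    if delta >= 0:
        return True
    return rng.random() < math.exp(delta / temperature)

def cool(temperature, alpha=0.95):
    # Geometric cooling schedule: T_{k+1} = alpha * T_k, 0 < alpha < 1.
    return alpha * temperature
```

With a high T0, `exp(delta / temperature)` is close to 1 even for noticeably worse moves, giving the early exploration the question asks for; as `cool` is applied each iteration, worsening moves become very unlikely.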
Part B (35 marks) There is also a variant of the (0-1) Knapsack Problem that considers multiple weight/resource constraints. This version of the knapsack problem is called the (0-1) Multidimensional Knapsack Problem. In this extension, each item has a predefined profit and multiple weights. In addition to these, the maximum weight the knapsack can carry in each dimension is also predefined. The Multidimensional Knapsack Problem aims to maximise the sum of the profits of the selected items while ensuring that, for each dimension (resource), the sum of the weights does not violate the maximum weight limit defined for that dimension.
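A feasibility check for the multidimensional case can be sketched as follows (the data layout shown is an assumption for illustration):

```python
# Sketch of a feasibility check for the multidimensional knapsack.
# The data layout is an assumption for illustration:
#   items:  {item_id: (profit, [w_1, ..., w_m])}
#   limits: [L_1, ..., L_m], one weight limit per dimension

def is_feasible(selected, items, limits):
    totals = [0] * len(limits)
    for item_id in selected:
        _, weights = items[item_id]
        for d, w in enumerate(weights):
            totals[d] += w
    # Every dimension's total weight must respect its own limit.
    return all(t <= limit for t, limit in zip(totals, limits))

items = {0: (60, [5, 2]), 1: (40, [4, 6])}
limits = [10, 7]
```

Note that a move acceptable in one dimension can still be rejected by another; your local searches for Part B must check all dimensions before accepting a move.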
An additional file for reading the input files and writing the solutions of the Multidimensional Knapsack Problem has already been uploaded to Moodle. Your task for this part of the coursework is to develop at least one constructive and two local search algorithms. You can use the parts of the code that were shared with you, or that you developed in Part A, as the basis of your code, and develop (similar or completely different) constructive and local search algorithms from them.