DTS201TC Pattern Recognition
School of AI and Advanced Computing
Coursework (Groupwork)
23:59, 29th Oct.
DTS201TC AY 2023-2024
A comparative study of PR models
Assessment Task:
Compare multiple PR (Pattern Recognition) algorithms by implementing the classification task on a Remote Sensing dataset. The dataset download link will be provided on LMO.
Requirements:
- You are expected to implement classification/clustering models. To this end, you need to understand and explain your models, manage and analyse the dataset and its features, implement the models, and evaluate and analyse the results.
- The programming language should be Python.
- You are free to use any PR/DL models; however, the proportion of DL models should not exceed 50%.
- The minimum number of implemented models is two.
- The assessment includes both the report and the code.
- The individual mark is determined by the groupwork mark and the peer assessment mark, using the formula below.
  Final Grade = Peer Assessment Weight × Student Contribution × Group Grade + (1 − Peer Assessment Weight) × Group Grade
  where the Student Contribution is calculated from the LMO Peer Assessment activity.
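The formula above can be sketched in Python (a minimal illustration; the function name and the example values are assumptions for demonstration only, and the actual Peer Assessment Weight is set by the module):

```python
# Hypothetical sketch of the individual-mark formula; values are examples.
def final_grade(group_grade, student_contribution, peer_weight):
    """Final Grade = w * contribution * group + (1 - w) * group."""
    return (peer_weight * student_contribution * group_grade
            + (1 - peer_weight) * group_grade)

# Example: group grade 80, full contribution (1.0), peer weight 0.3
print(final_grade(80, 1.0, 0.3))  # -> 80.0
```

Note that a student with full contribution (1.0) receives exactly the group grade, while a lower contribution factor reduces the weighted portion of the mark.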
Assessment:
- The second part of the groupwork mark (Marking Criteria 2) is the total marks of all models divided by the number of models.
- If no models are submitted, the total mark is 0.
- Quality is valued more than quantity.
  – Quality refers to whether the models are implemented well, with good understanding and proper illustration in the report.
  – Quantity refers to the number of models and the length of the report.
- The submitted code should run properly, and the results should align with the report. At least one .ipynb file displaying the output of your models must be included.
- If a model's implementation is based to a great extent on online resources (e.g., GitHub), this must be clearly and formally noted in the references. Otherwise, it may be suspected of plagiarism, in which case the marks for that model could be 0.
- If a model's implementation is based on online resources (e.g., GitHub) but you have made contributions to improve the model, this must also be clearly and formally noted in the references, together with a description of your contributions.
- The baseline classification accuracy is 60%; the performance (efficiency/accuracy) of a model will not be further evaluated as long as it is above the baseline. The choice of library is not part of the evaluation.
- The groupwork mark consists of three components, shown in the detailed marking rubrics below.
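For illustration, the baseline accuracy check might look like the following minimal sketch. The synthetic data and the k-NN classifier here are placeholders only; substitute the Remote Sensing dataset from LMO and your own models:

```python
# Minimal sketch of an accuracy check against the 60% baseline.
# Placeholder data and model: make_classification and k-NN stand in for
# the Remote Sensing dataset and whatever PR models you implement.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy = {acc:.3f} (baseline: 0.60)")
```

Reporting the accuracy alongside the 0.60 baseline in this way makes it easy to show in the report that each model clears the threshold.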
Marking Criteria:
(1). [40 marks] Investigating the dataset.
Rubrics

Table 1: Marking Rubric 1

Criterion            Marks  Details
Dataset description  15     5 marks: dataset description
                            5 marks: visualization
                            5 marks: proper references
Feature selection    10     5 marks: explanation
                            5 marks: feature extraction methods
Feature analysis     15     5 marks: investigate and experiment on the data
                            5 marks: possibility of using feature selection methods
                            5 marks: demonstrate the features with figures (numbers), plots or tables
(2). [40 marks] Description of the models, parameters, and evaluation of model performance.
Rubrics

Table 2: Marking Rubric 2

Criterion       Marks  Details
Description     10     5 marks: model description (e.g., theory, functionality, etc.)
                       5 marks: model parameter estimation procedure
Implementation  20     5 marks: workflow
                       5 marks: training procedure description
                       5 marks: introduce the hardware you use (e.g., CPU, GPU, RAM, etc.)
                       5 marks: code runs properly and the results align with the report
Evaluation      10     5 marks: demonstrate results with figures (numbers)
                       5 marks: demonstrate results with plots or tables
(3). [20 marks] Comprehensive analysis.
Rubrics

Table 3: Marking Rubric 3

Criterion   Marks  Details
Discussion  10     5 marks: pros & cons of the models
                   5 marks: reason
Novelty     10