0. Aims:
1. Demonstrate effective use of the software development life cycle to build full-stack end-user applications
2. Demonstrate effective use of static testing, dynamic testing, and user testing to validate and verify software systems
3. Understand key characteristics of a functioning team in terms of understanding professional expectations, maintaining healthy relationships, and managing conflict.
4. Demonstrate an ability to analyse complex software systems in terms of their data model, state model, and more.
5. Understand the software engineering life cycle in the context of modern and iterative software development practices in order to elicit requirements, design systems thoughtfully, and implement software correctly.
6. Demonstrate an understanding of how to use version control, continuous integration, and deployment tooling to sustainably integrate code from multiple parties.
1. Overview
UNSW needs a change in business model. Revenue has been going down, despite the absolutely perfect MyExperience feedback.
When doing some research, UNSW found that the video game industry, particularly mobile games like Candy Crush, earns over $500 million each year.
UNSW has tasked me (Hayden), and my army of COMP1531 students with investigating the possibility of recreating this game, for UNSW profit. Only one thing stands in the way...
Microsoft recently bought Candy Crush, and they also own UNSW's only communication platform, Microsoft Teams!
If we want to make a Candy Crush competitor, we're going to have to remake Teams first - or those Microsoft spies will shut us down before we even
begin development!
The 22T2 cohort of COMP1531 students will build the backend Javascript server for a new communication platform, UNSW Treats (or just Treats for short). I plan to task future COMP6080 students to build the front-end for Treats, something you won't have to worry about.
UNSW Treats is the questionably-named communication tool that allows you to share, communicate, and collaborate virtually without intervention from Microsoft spies.
I have already specified a common interface for the frontend and backend to operate on. This allows both courses to go off and do their own development and testing under the assumption that both parties will comply with the common interface. This is the interface you are required to use.
The specific capabilities that need to be built for this project are described in the interface at the bottom. This is clearly a lot of features, but not all of them are to be implemented at once.
Good luck, and please don't tell anyone at Microsoft about this. (For legal reasons, this is a joke.)
2. Iteration 0: Getting Started
Complete!
3. Iteration 1: Basic Functionality and Tests
Complete!
4. Iteration 2: Building a Web Server
4.1. Task
In this iteration, more features were added to the specification, and the focus has been changed to HTTP endpoints. Most of the theory surrounding iteration 2 is covered in week 4-5 lectures. Note that there will still be some features of the frontend that will not work because the routes will not appear until iteration 3. There is no introductory video for iteration 2.
In this iteration, you are expected to:
1. Make adjustments to your existing code as per any feedback given by your tutor for iteration 1.
2. Migrate to TypeScript
o Change the .js file extensions to .ts .
o Run npm run tsc and incrementally fix all type errors.
o Either change one file at a time, or change all file extensions at once and add // @ts-nocheck at the beginning of selected files to disable checking on those specific files, suppressing their errors (see the sketch after this list).
3. Implement and test the HTTP Express server according to the entire interface provided in the specification.
Part of this section may be automarked.
Your implementation should build upon your work in iteration 1, and ideally your HTTP layer is just a thin wrapper around the underlying functions you've written that handle the logic; see week 4 content.
Your implementation will need to include persistence of data (see section 4.7). Introduce tokens for session management (see 6.7).
You can structure your tests inside a /tests folder (or however you choose), as long as the file names end with .test.js . For this iteration and iteration 3 we will only be testing your HTTP layer. You may still wish to reuse your iteration 1 tests and simply wrap them - that is a design choice up to you. An example of an HTTP test can be found in section 4.4.
You do not have to rewrite all of your iteration 1 tests as HTTP tests - the latter can test the system at a higher level. For example, to test a success case for message/send via HTTP routes you will need to call auth/register and channels/create ; this means you do not need a separate success case for those two functions. Your HTTP tests will need to cover all success/error conditions for each endpoint, however.
4. Ensure your code is linted to the provided style guide
ESLint should be added to your repo via npm, with a lint script added to your package.json so that it runs with the command npm run lint (see the package.json sketch after this list). The provided .eslintrc file is very lenient, so there is no reason you should have to disable any additional checks. See section 4.5 below for instructions on adding linting to your pipeline.
You are required to edit the gitlab-ci.yml file, as per section 4.5 to add linting to the code on master . You must do this BEFORE merging anything from iteration 2 into master , so that you ensure master is always stable.
5. Continue demonstrating effective project management and effective git usage
o You will be marked heavily on your thoughtful project management and effective use of git. The degree to which your team works effectively will also be assessed.
o As for iteration 1, all task tracking and management will need to be done via the GitLab Issue Board or another tracking application approved by your tutor.
o As for iteration 1, regular group meetings must be documented with meeting minutes which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo wiki after each meeting).
o As for iteration 1, you must be able to demonstrate evidence of regular standups.
o You are required to regularly and thoughtfully make merge requests for the smallest reasonable units, and merge them into master .
A frontend has been built that you can use in this iteration, with your backend powering it (note: an incomplete backend will mean the frontend cannot work fully). You can, if you wish, make changes to the frontend code, but this is not required. The source code for the frontend is provided only for your own fun or curiosity.
As part of this iteration it is required that your backend code can correctly power the frontend. You should conduct acceptance tests (run your backend, run the frontend and check that it works) prior to submission.
In this iteration we also expect for you to improve on any feedback left by tutors in iteration 1.
4.2. Running the server
To run the server you can use the following command from the root directory of your project:
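Assuming your package.json defines a start script that launches src/server.ts with ts-node, this is along the lines of:

    npm start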
This will start the server on the port in the src/server.ts file, using ts-node .
If you get an OSError stating that the address is already in use, you can change the port number in config.json to any number from 1024 to 49151. It is likely that another student is using your original port number.
4.3. Implementing and testing features
You should first approach this project by considering its distinct "features". Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
1. Create a new branch.
2. Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
3. Implement that feature.
4. Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you're not spending enough time on your tests.
5. Create a merge request for the branch.
6. Get someone in your team who did not work on the feature to review the merge request. When reviewing, you should not only ensure the new feature has tests that pass, but also that the code is well styled and easy to follow.
7. Fix any issues identified in the review.
8. Merge the merge request into master.
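As a sketch, steps 1-5 of this workflow might look like the following in git (the branch name, file names, and commit messages are illustrative):

    git checkout -b message-send                        # step 1: new branch
    git add tests/message.test.js
    git commit -m "Add failing tests for message/send"  # step 2: tests first
    # ... steps 3-4: implement the feature, adjust tests ...
    git add src/message.ts src/server.ts
    git commit -m "Implement message/send"
    git push -u origin message-send                     # step 5: push, then open a merge request on GitLab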
For this project, a feature is typically sized somewhere between a single function, and a whole file of functions (e.g. auth.js ). It is up to you and your team to decide what each feature is.
There is no requirement that each feature be implemented by only one person. In fact, we encourage you to work together closely on features, especially to help those who may still be coming to grips with Javascript.
Please pay careful attention to the following points about your tests:
We want to see evidence that you wrote your tests before writing the implementation. As noted above, the commits containing your initial tests should appear before your implementation for every feature branch. If we don't see this evidence, we will assume you did not write your tests first and your mark will be reduced.
You should have black-box tests for everything required (i.e. testing each function/endpoint). However, you are also welcome to write white-box unit tests in this iteration if you see them as important.
Merging in merge requests with failing pipelines is very bad practice. Not only does this interfere with your team's ability to work on different features at the same time, and thus slow down development, it is something you will be penalised for in marking.
Similarly, merging in branches with untested features is also very bad practice. We will assume, and you should too, that any code without tests does not work.
Pushing directly to master is not possible for this repo. The only way to get code into master is via a merge request. If you discover you have a bug in master that got through testing, create a bugfix branch and merge that in via a merge request.
As is the case with any system or functionality, there will be some things that you can test extensively, some things that you can test sparsely/fleetingly, and some things that you can't meaningfully test at all. You should aim to test as extensively as you can, and make judgements as to what things fall into what categories.
4.4. Testing the interface
In this iteration, the layer of abstraction has changed to the HTTP level, meaning that you are only required to write integration tests that check the HTTP endpoints, rather than the style of tests you wrote in iteration 1, where the behaviour of the Javascript functions themselves was tested.
Note your tests do not need to be written in TypeScript.
You will need to check as appropriate for each success/error condition:
The return value of the endpoint;
The behaviour (side effects) of the endpoint; and
The status code of the response.
An example of how you would now test the echo interface is:
    const request = require('sync-request');
    // Assumption: the server's url and port are read from config.json,
    // as referenced in section 4.2.
    const config = require('./config.json');

    const OK = 200;
    const port = config.port;
    const url = config.url;

    describe('HTTP tests using Jest', () => {
      test('Test successful echo', () => {
        const res = request(
          'GET',
          `${url}:${port}/echo`,
          {
            qs: {
              echo: 'Hello',
            }
          }
        );
        const bodyObj = JSON.parse(res.body as string);
        expect(res.statusCode).toBe(OK);
        expect(bodyObj).toEqual('Hello');
      });

      test('Test invalid echo', () => {
        const res = request(
          'GET',
          `${url}:${port}/echo`,
          {
            qs: {
              echo: 'echo',
            }
          }
        );
        const bodyObj = JSON.parse(res.body as string);
        expect(res.statusCode).toBe(OK);
        expect(bodyObj).toStrictEqual({ error: 'error' });
      });
    });
4.5. Continuous Integration
With the introduction of linting to the project with ESLint, you will need to manually edit the gitlab-ci.yml file to lint code within the pipeline. This will require the following:
Addition of npm run lint as a script under a custom linting job, as part of your pipeline's stages. Refer to the lecture slides on continuous integration to find exactly how you should add these.
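As a sketch only (the exact job and stage names should follow the lecture slides), the addition to gitlab-ci.yml could look like:

    stages:
      - checks

    linting:
      stage: checks
      script:
        - npm install
        - npm run lint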
4.6. Recommended approach
Our recommendation with this iteration is that you start out trying to implement the new functions similarly to how you did in iteration 1.
1. Write HTTP unit tests. These will fail as you have not yet implemented the feature.
Hint: It would be a good idea to consider good test design and the usage of helper functions for your HTTP tests. Is there a way so that you do not have to completely rewrite your tests from iteration 1?
2. Implement the feature and write the Express route/endpoint for that feature too.
HINT: make sure GET and DELETE requests utilise query parameters, whereas POST and PUT requests utilise JSONified bodies. A sketch of this pattern appears after this list.
3. Run the tests and continue following 4.3. as necessary.
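A minimal sketch of that pattern in an Express server (the echo route mirrors section 4.4; messageSendV1 and the hardcoded port are illustrative assumptions, not part of the provided interface):

    import express from 'express';
    // Hypothetical iteration 1 function wrapped by the route below.
    import { messageSendV1 } from './message';

    const app = express();
    app.use(express.json()); // parse JSONified request bodies

    // GET (and DELETE) requests take their inputs as query parameters.
    app.get('/echo', (req, res) => {
      const echo = req.query.echo as string;
      res.json(echo === 'echo' ? { error: 'error' } : echo);
    });

    // POST (and PUT) requests take their inputs in a JSONified body.
    app.post('/message/send/v1', (req, res) => {
      const { token, channelId, message } = req.body;
      // The HTTP layer stays a thin wrapper around the underlying logic.
      res.json(messageSendV1(token, channelId, message));
    });

    app.listen(3000); // in your project, read the port from config.json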
4.7. Storing data
You are required to store data persistently in this iteration.
Modify your backend such that it is able to persist and reload its data store if the process is stopped and started again. The persistence should happen at regular intervals so that in the event of unexpected program termination (e.g. sudden power outage) a minimal amount of data is lost. You may implement this using whatever method of serialisation you prefer (e.g. JSON).
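One possible sketch using JSON serialisation to a file (the file name and data shape are assumptions; match them to your own data store):

    import fs from 'fs';

    const DB_FILE = 'dataStore.json'; // hypothetical file name

    let data = { users: [], channels: [] };

    // On startup, reload any previously persisted data.
    if (fs.existsSync(DB_FILE)) {
      data = JSON.parse(fs.readFileSync(DB_FILE, 'utf8'));
    }

    function save() {
      fs.writeFileSync(DB_FILE, JSON.stringify(data));
    }

    // Persist at a regular interval so that little data is lost if the
    // process terminates unexpectedly (e.g. a sudden power outage).
    setInterval(save, 5000);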
4.8. Versioning
You might notice that some routes are suffixed with V1 and V2 , and that all the new routes are V1 yet all the old routes are V2 . Why is this? When you make changes to specifications, it's usually good practice to give the new function/capability/route a different unique name. This way, if people are using older versions of the specification they can't accidentally call the updated function/route with the wrong data input.
Hint: Yes, your V2 routes can use the functionNameV1 functions you had in iteration 1, regardless of whether you rename the functions or not. The layer of abstraction in iteration 2 has changed from the function interface to the HTTP interface, and therefore your 'functions' from iteration 1 are essentially now just implementation details, and therefore are completely modifiable by you.
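Continuing the Express sketch from section 4.6, a V2 route can delegate straight to the iteration 1 function (the signature shown is an assumption):

    // Hypothetical iteration 1 function; rename it or keep it as you prefer.
    import { authRegisterV1 } from './auth';

    app.post('/auth/register/v2', (req, res) => {
      const { email, password, nameFirst, nameLast } = req.body;
      // The old function is now an implementation detail behind this route.
      res.json(authRegisterV1(email, password, nameFirst, nameLast));
    });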
4.9. Dryrun
We have provided a very simple dryrun for iteration 2 consisting of 4 tests, one each for your implementation of clear/v1 , auth/register/v2 , channels/create/v2 , and channels/list/v2 . These only check whether your server wrapper functions accept requests correctly, the format of your return types, and simple expected behaviour, so do not rely on these as an indicator of the correctness of your implementation or tests. To run the dryrun, you should be in the root directory of your project (e.g. /project-backend ) and use the command:
1531 dryrun 2