COMP3811 Coursework 2
1.1 Matrix/vector functions
4 marks
Your first task is to implement the necessary matrix and vector functions in vmlib. Minimally, this includes the following functions in vmlib/mat44.hpp:
• Mat44f operator*( Mat44f const&, Mat44f const& )
• Vec4f operator*( Mat44f const&, Vec4f const& )
• Mat44f make_rotation_{x,y,z}( float ) (three functions)
• Mat44f make_translation( Vec3f )
• Mat44f make_perspective_projection( float, float, float, float )
These functions are quite fundamental and needed for the rest of this coursework. To help you check for correctness, the handed-out code includes the vmlib-test subproject. It uses the Catch2 testing library. A sample test case for the standard projection matrix is included as an example.
You should add tests for each of the above functions. Deriving the known-good values by hand or using a (well-known) third-party library are both acceptable. Make sure your tests are meaningful.
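For illustration, one such hand-derived test might look as follows. This is a minimal sketch only: it assumes Catch2 v3's Catch::Approx and that Vec4f exposes x/y/z/w members; adjust both to match the handed-out code.

```cpp
#include <catch2/catch_amalgamated.hpp> // adjust include to the handout's Catch2 setup

#include "../vmlib/mat44.hpp"

TEST_CASE( "90 degree rotation about Z", "[mat44]" )
{
	// Known-good value derived by hand: rotating the +X axis by 90
	// degrees about +Z must yield the +Y axis.
	Mat44f const R = make_rotation_z( 3.1415926f / 2.f );
	Vec4f const r = R * Vec4f{ 1.f, 0.f, 0.f, 1.f };

	REQUIRE( r.x == Catch::Approx( 0.f ).margin( 1e-6 ) );
	REQUIRE( r.y == Catch::Approx( 1.f ).margin( 1e-6 ) );
	REQUIRE( r.z == Catch::Approx( 0.f ).margin( 1e-6 ) );
	REQUIRE( r.w == Catch::Approx( 1.f ).margin( 1e-6 ) );
}
```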
Note: you are required to use the vmlib types in your solution – i.e., you may not use a third-party math library!
In your report: Briefly outline the tests that you have added. Explain why those tests are meaningful. (No screenshots are required.)
1.2 3D renderer basics
8 marks
Next, set up a 3D rendering application using modern shader-based OpenGL in the main subproject. You may start with the skeleton provided in main/main.cpp. Refer to the OpenGL exercises from the module as necessary.
The base code includes a Wavefront OBJ file, assets/parlahti.obj and assets/parlahti.mtl. The mesh shows a small part of Finland, near the Parlahti region with the islands Heinäsaari, Sommarön and Långön (saari is island in Finnish; ö is island in Swedish). The region is 6 km by 6 km and is derived from a digital elevation model with 2 m resolution. You can download other parts of Finland from the Maanmittauslaitos homepage. The datasets are subject to the Creative Commons Attribution 4.0 license.
You should set up a program that can load this Wavefront OBJ file and display it.
Use a perspective projection for rendering. You must use the make_perspective_projection function from vmlib to create the projection matrix. Make sure the window is resizable.
Implement a camera with which you can explore the scene using the mouse and keyboard. Aim for a first-person style 3D camera (WSAD+EQ to control position, mouse to look around, shift to speed up, and ctrl to slow down). Implement this using the callback-style input from GLFW – do not use polling for mouse and keyboard input (so, no glfwGetKey and similar).
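As a sketch of the callback-based approach, key callbacks can record press/release state that the render loop then integrates each frame. The CameraInput struct and the registration snippet are illustrative, not part of the handout:

```cpp
#include <GLFW/glfw3.h>

// Hypothetical per-window input state; typically shared with the render
// loop via glfwSetWindowUserPointer().
struct CameraInput
{
	bool forward = false, backward = false;
	bool left = false, right = false;
	bool up = false, down = false;
};

void key_callback( GLFWwindow* aWindow, int aKey, int /*scancode*/, int aAction, int /*mods*/ )
{
	auto* input = static_cast<CameraInput*>( glfwGetWindowUserPointer( aWindow ) );
	if( !input ) return;

	// Record press/release only; actual movement is integrated per frame,
	// scaled by the elapsed time and the shift/ctrl speed modifiers.
	bool const pressed = (aAction != GLFW_RELEASE);
	switch( aKey )
	{
		case GLFW_KEY_W: input->forward  = pressed; break;
		case GLFW_KEY_S: input->backward = pressed; break;
		case GLFW_KEY_A: input->left     = pressed; break;
		case GLFW_KEY_D: input->right    = pressed; break;
		case GLFW_KEY_E: input->up       = pressed; break;
		case GLFW_KEY_Q: input->down     = pressed; break;
	}
}

// Registered once after creating the window:
//   glfwSetWindowUserPointer( window, &cameraInput );
//   glfwSetKeyCallback( window, &key_callback );
//   glfwSetCursorPosCallback( window, &cursor_callback ); // analogous, for mouse look
```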
Use the simplified directional light model from Exercise G.5, i.e., with only an ambient and a diffuse component. For now, use the light direction (0, 1, −1) (but remember to normalize it!).
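In the fragment shader, this model boils down to an n·l term plus an ambient term. A possible sketch follows (all uniform and varying names are placeholders; adapt to your own shader setup):

```cpp
// GLSL fragment shader, embedded here as a C++ raw string for reference.
char const* const kDirectionalFrag = R"(
#version 430
in vec3 v2fNormal;
in vec3 v2fColor;

layout( location = 0 ) out vec3 oColor;

uniform vec3 uLightDir;      // normalize(vec3(0,1,-1)), computed on the CPU
uniform vec3 uLightDiffuse;  // diffuse light intensity
uniform vec3 uSceneAmbient;  // ambient light intensity

void main()
{
	vec3 normal = normalize( v2fNormal ); // re-normalize after interpolation
	float nDotL = max( 0.0, dot( normal, uLightDir ) );
	oColor = (uSceneAmbient + nDotL * uLightDiffuse) * v2fColor;
}
)";
```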
Refer to Figure 1 for some sample screenshots of what this may look like.
In your report: Report the values for GL_RENDERER, GL_VENDOR and GL_VERSION for your test computer(s). Include several (3+) screenshots of the scene. They must be significantly different (=different views) from the ones shown in Figure 1.
1.3 Texturing
4 marks
In addition to the elevation data, Maanmittauslaitos also provides orthophotos. These are photos taken with a top-down view (for example, from an airplane). We can use this orthophoto as a texture for the terrain mesh.
The Wavefront OBJ includes all the necessary data. Update your renderer to draw the mesh with a texture. Combine the texture with the simple lighting from Section 1.2.
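A possible texture-loading helper is sketched below. It assumes stb_image is available (as in the earlier exercises) and a GLAD-style function loader; adjust the includes to your project.

```cpp
#include <glad/glad.h>   // or whichever OpenGL loader the project uses
#include <stb_image.h>   // assumed to be available, as in the exercises

#include <stdexcept>

GLuint load_texture_2d( char const* aPath )
{
	// OBJ texture coordinates place the origin at the bottom-left; stb_image
	// loads top-to-bottom, so flip the rows on load.
	stbi_set_flip_vertically_on_load( 1 );

	int w = 0, h = 0, channels = 0;
	stbi_uc* data = stbi_load( aPath, &w, &h, &channels, 4 ); // force RGBA
	if( !data )
		throw std::runtime_error( "Unable to load texture" );

	GLuint tex = 0;
	glGenTextures( 1, &tex );
	glBindTexture( GL_TEXTURE_2D, tex );
	glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data );
	glGenerateMipmap( GL_TEXTURE_2D );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
	glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

	stbi_image_free( data );
	return tex;
}
```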
Figure 1: Screenshots showing the Parlahti mesh, with a simple “n dot l” lighting model. The simple lighting model helps us see the geometry clearly.
Figure 2: Screenshots showing the Parlahti model with the orthophoto-based texture. Note that it does not include any lighting. (You should however combine the texture with lighting in your implementation!)
See Figure 2 for a textured rendering (no lighting).
In your report: Include several (3+) screenshots with texturing. They must be significantly different (=different views) from the ones shown in Figure 2.
1.4 Simple Instancing
4 marks
“Why build one when you can have two at twice the price?”, Contact (1997)
The exercise includes a second Wavefront OBJ file, assets/launchpad.obj and assets/launchpad.mtl (Figure 3). This models the launchpad that AvaruusY is planning to use for its launches.
Load this file (once). Find two spots for the launch pad. They must be in the sea and in contact with the water (but not fully submerged). They cannot be at the origin. They should not be immediately next to each other. Render the object twice (without making a copy of the data), once at each location. The model does not have a texture, but uses per-material colors. You will need to handle this somehow, for example with two different shader programs.
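One way to render the shared data twice is a single VAO drawn with two different model matrices. The coordinates below are placeholders (pick your own spots), and uLocMvp, launchpadVao and launchpadVertexCount are assumed to exist:

```cpp
// Load launchpad.obj once into launchpadVao; then, each frame:
Vec3f const padPositions[] = {
	Vec3f{ -120.f, 0.f,  100.f },  // placeholder coordinates only
	Vec3f{  300.f, 0.f, -250.f }
};

glBindVertexArray( launchpadVao );
for( Vec3f const& pos : padPositions )
{
	Mat44f const model = make_translation( pos );
	Mat44f const mvp = projection * view * model;
	// GL_TRUE transposes on upload; use it if vmlib stores matrices
	// row-major (check mat44.hpp for the actual layout and member name).
	glUniformMatrix4fv( uLocMvp, 1, GL_TRUE, mvp.v );
	glDrawArrays( GL_TRIANGLES, 0, launchpadVertexCount );
}
```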
In your report: Document the coordinates at which you placed the launch pads. Include screenshots of the placement.
Figure 3: Launchpad model (launchpad.obj).
1.5 Custom model
4 marks
Create a 3D model of a space vehicle programmatically. Combine basic shapes (boxes, cylinders, ...) into a complex object. You must use at least seven (7) basic shapes. These must be connected to each other (i.e., they cannot float freely) and all of the parts must be visible. You should, however, use all transformations (translation, rotation, scaling) when placing objects. At least one object must be placed relative to another (e.g., you place one thing on top of another).
Make sure you generate the appropriate normals for the object, such that it is lit correctly. You do not have to texture the object. You are free to use different colors for different parts if you wish.
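A common approach is to bake each part's placement transform into its vertex data while assembling the vehicle on the CPU. The sketch below assumes a hypothetical MeshData container and vmlib's Vec3f helpers; note that under non-uniform scaling, normals must be transformed by the inverse-transpose of the model matrix and re-normalized.

```cpp
// Apply a placement transform to one part's positions and normals.
void apply_transform( MeshData& aMesh, Mat44f const& aTransform, Mat44f const& aNormalMatrix )
{
	for( Vec3f& p : aMesh.positions )
	{
		Vec4f const t = aTransform * Vec4f{ p.x, p.y, p.z, 1.f };
		p = Vec3f{ t.x, t.y, t.z };
	}
	for( Vec3f& n : aMesh.normals )
	{
		// w = 0: directions are unaffected by the translation part.
		Vec4f const t = aNormalMatrix * Vec4f{ n.x, n.y, n.z, 0.f };
		n = normalize( Vec3f{ t.x, t.y, t.z } );
	}
}

// E.g., a nose cone scaled and placed on top of the body (make_scaling()
// and the inverse/transpose helpers are assumptions, not part of Section 1.1):
//   Mat44f const m = make_translation( { 0.f, bodyHeight, 0.f } )
//     * make_scaling( 0.5f, 1.f, 0.5f );
//   apply_transform( cone, m, transpose( invert( m ) ) );
```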
The space vehicle must be placed on one of the launch pads (Section 1.4).
Note: You have some freedom in designing your “space vehicle”. It should be recognizable as a vehicle, but it does not have to be a rocket. An alien UFO would work as well. Or a submarine ... after all, that’s a bit like a spaceship, except that it goes the other way.
In your report: Document which of the launch pads you placed your space vehicle on. Include screenshots (2+) of your vehicle.
1.6 Local light sources
3 marks
Implement the full Blinn-Phong shading model for point lights, including the standard 1/r² distance attenuation. Add three point lights, each with a different color, to your space vehicle (Section 1.5). You may place them slightly away from the geometry to avoid problems with the division.
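A fragment-shader sketch for a single point light is shown below (extend it to three lights with a uniform array and a loop). All uniform and varying names are placeholders:

```cpp
// GLSL fragment shader, embedded as a C++ raw string for reference.
char const* const kBlinnPhongFrag = R"(
#version 430
in vec3 v2fPosition;   // world-space fragment position
in vec3 v2fNormal;

layout( location = 0 ) out vec3 oColor;

uniform vec3 uCameraPos;
uniform vec3 uLightPos, uLightColor;
uniform vec3 uAmbient, uDiffuse, uSpecular, uEmissive; // material terms
uniform float uShininess;

void main()
{
	vec3 n = normalize( v2fNormal );
	vec3 toLight = uLightPos - v2fPosition;
	float dist2 = dot( toLight, toLight );      // r^2 for the attenuation
	vec3 l = toLight / sqrt( dist2 );
	vec3 v = normalize( uCameraPos - v2fPosition );
	vec3 h = normalize( l + v );                // Blinn-Phong half vector

	float nDotL = max( 0.0, dot( n, l ) );
	float spec = nDotL > 0.0 ? pow( max( 0.0, dot( n, h ) ), uShininess ) : 0.0;

	// 1/r^2 distance attenuation applied to the diffuse and specular terms.
	vec3 lit = (uDiffuse * nDotL + uSpecular * spec) * uLightColor / dist2;
	oColor = uEmissive + uAmbient + lit;
}
)";
```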
In your report: Include a screenshot of the lit space vehicle and launch pad.
1.7 Animation
3 marks
Implement an animation where your space vehicle flies away. The animation should start when the user presses the F key. Pressing R should reset the space vehicle to its original position and disable the animation.
For full marks, you should consider a curved path for the vehicle, where it slowly accelerates from a standstill. The vehicle should rotate such that it always faces into the direction of movement. The lights from Section 1.6 should follow the space vehicle.
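As one possible structure, evaluate a parametric path each frame and derive the orientation from the direction of travel. Everything below (the constants, launchPos, the look_along() helper) is illustrative, and vmlib's Vec3f arithmetic and normalize() are assumed:

```cpp
// Position along a curved path: vertical ease-in, bending sideways with height.
Vec3f flight_position( float aT )
{
	float const s = aT * aT;   // distance = 1/2 a t^2 with a = 2 units/s^2
	return launchPos + Vec3f{ 0.02f * s * s, s, 0.f };
}

void update_vehicle( float aAnimTime, Mat44f& aModel2World )
{
	Vec3f const pos = flight_position( aAnimTime );

	// Approximate the path tangent with a small forward difference.
	Vec3f const dir = normalize( flight_position( aAnimTime + 0.01f ) - pos );

	// look_along(): hypothetical helper building a rotation that aligns the
	// vehicle's forward axis with dir (e.g., from dir and a world-up vector).
	aModel2World = make_translation( pos ) * look_along( dir );

	// The point lights of Section 1.6 can be stored as offsets in the
	// vehicle's local frame and transformed by aModel2World each frame.
}
```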
In your report: Include a screenshot of the vehicle mid-flight.
1.8 Tracking cameras
3 marks
Implement a feature where the user can switch to cameras that track the space vehicle in its flight (Section 1.7). Implement two camera modes:
• A camera that looks at the space vehicle from a fixed distance and follows it in its flight.
• A camera that is fixed on the ground and always looks towards the space vehicle, even as it flies away.
The user should be able to cycle through the different camera modes with the C key. So, pressing the key should first switch from the freely controllable default camera to the fixed-distance camera, next to the ground camera and then back.
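A sketch of the mode cycling and view-matrix selection follows. look_at() is a hypothetical helper that builds a view matrix from eye, target and up (e.g., one you add to vmlib); the ground position is a placeholder:

```cpp
enum class CameraMode { free, followDistance, groundFixed };

// Called from the C-key callback.
void cycle_mode( CameraMode& aMode )
{
	aMode = CameraMode( (int(aMode) + 1) % 3 );
}

Mat44f view_matrix( CameraMode aMode, Vec3f aVehiclePos )
{
	Vec3f const kGroundEye{ 50.f, 5.f, 50.f };   // placeholder position
	switch( aMode )
	{
		case CameraMode::followDistance:
			// Constant offset from the vehicle; the eye moves with it.
			return look_at( aVehiclePos + Vec3f{ 0.f, 10.f, 30.f }, aVehiclePos, Vec3f{ 0.f, 1.f, 0.f } );
		case CameraMode::groundFixed:
			// Stationary eye, re-aimed at the vehicle every frame.
			return look_at( kGroundEye, aVehiclePos, Vec3f{ 0.f, 1.f, 0.f } );
		default:
			return freeCamera.view();   // the Section 1.2 camera, assumed to exist
	}
}
```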
In your report: Include a screenshot of each camera mode, with the vehicle in mid-flight.
1.9 Split screen
4 marks
Implement a split screen rendering mode that shows two different views at once. Let the user toggle split screen mode by pressing the V key. Pressing C should select the camera mode for one of the views (Section 1.8). The camera mode for the other one should cycle by pressing Shift-C. You can pick how you split the screen, but a simple horizontal 50%-50% split is fine.
There are multiple possible strategies for implementing a split screen mode. You are free to pick one yourself. However, it should not be overly wasteful (so don’t render parts that are never shown to the user). The split screen mode should adapt to the window when resized.
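One straightforward strategy, sketched below, renders the scene twice into two glViewport() regions of the default framebuffer; each pass gets its own camera and an aspect ratio matching its half of the (possibly resized) window, so nothing is rendered that is never shown. draw_scene() and the mode variables are placeholders:

```cpp
int fbWidth = 0, fbHeight = 0;
glfwGetFramebufferSize( window, &fbWidth, &fbHeight ); // adapts to resizing

if( splitScreen )
{
	// 50%-50% split; the projection uses the half-window aspect ratio.
	float const aspect = (fbWidth * 0.5f) / float(fbHeight);

	glViewport( 0, 0, fbWidth / 2, fbHeight );
	draw_scene( view_matrix( primaryMode, vehiclePos ), aspect );

	glViewport( fbWidth / 2, 0, fbWidth - fbWidth / 2, fbHeight );
	draw_scene( view_matrix( secondaryMode, vehiclePos ), aspect );
}
else
{
	glViewport( 0, 0, fbWidth, fbHeight );
	draw_scene( view_matrix( primaryMode, vehiclePos ), fbWidth / float(fbHeight) );
}
```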
In your report: Describe how you have implemented the split screen feature. Include screenshots that show the split screen mode in action.
1.10 Particles
4 marks
Implement a particle system that simulates exhaust from your space vehicle as it flies away (Section 1.7). For full marks, consider the following features:
• Render particles as textured point sprites or as textured camera-facing quads.
• Make sure the particles render correctly from all views. Designing the system around additive blending simplifies this, since additive blending is order-independent and avoids per-frame depth sorting.
• Particles should have a fixed lifetime after which they disappear.
Take particular care with the implementation. For example, avoid dynamic allocations at runtime during the simulation. A CPU-based implementation is perfectly adequate.
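One allocation-free structure is a fixed-capacity pool that is reserved once at startup, with dead particles removed by swap-and-pop (which never shrinks the buffer). The sketch assumes vmlib's Vec3f operators:

```cpp
#include <cstddef>
#include <vector>

struct Particle
{
	Vec3f position, velocity;
	float age = 0.f, lifetime = 1.f;
};

struct ParticleSystem
{
	std::vector<Particle> live;   // size() == current live count

	void init( std::size_t aCapacity ) { live.reserve( aCapacity ); }

	void update( float aDt )
	{
		for( std::size_t i = 0; i < live.size(); )
		{
			Particle& p = live[i];
			p.age += aDt;
			if( p.age >= p.lifetime )
			{
				p = live.back();   // overwrite with the last live particle
				live.pop_back();   // no deallocation: capacity is retained
				continue;          // re-check the swapped-in particle
			}
			p.position += p.velocity * aDt;
			++i;
		}
	}
};
```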
In your report: Describe how you have implemented the particle system. Document any assumptions that you have made and mention limitations. Discuss the efficiency/performance of the system. Include a screenshot of the effect.
1.11 Simple 2D UI elements
4 marks
Implement the following UI elements:
• In the top left corner, display the current altitude of the space vehicle in text.
• In the bottom center, implement two buttons: one that launches the space vehicle when clicked and one that resets the vehicle to its initial position (see Section 1.7).
You should implement this yourself, i.e., without a third-party UI library. You may however use Fontstash for rendering text (use the included assets/DroidSansMonoDotted.ttf font). You can also opt for rendering text using a font atlas texture if you prefer.
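For the buttons, a minimal hit-testing scheme driven by the GLFW cursor-position and mouse-button callbacks might look like this (all names are placeholders; coordinates are in framebuffer pixels). The idle/hover/pressed states map directly onto the different button visuals:

```cpp
struct Button
{
	float x, y, w, h;   // top-left corner + size, in framebuffer pixels
	enum class State { idle, hover, pressed } state = State::idle;

	bool contains( float aPx, float aPy ) const
	{
		return aPx >= x && aPx < x + w && aPy >= y && aPy < y + h;
	}
};

// From the cursor-position callback:
void on_cursor( Button& aButton, float aPx, float aPy )
{
	if( aButton.state != Button::State::pressed )
		aButton.state = aButton.contains( aPx, aPy ) ? Button::State::hover : Button::State::idle;
}

// From the mouse-button callback (GLFW_MOUSE_BUTTON_LEFT):
void on_mouse_button( Button& aButton, float aPx, float aPy, bool aPressed, bool& aClicked )
{
	if( aPressed && aButton.contains( aPx, aPy ) )
		aButton.state = Button::State::pressed;
	else if( !aPressed )
	{
		// A click is a release that still lands on the pressed button.
		aClicked = (aButton.state == Button::State::pressed) && aButton.contains( aPx, aPy );
		aButton.state = aButton.contains( aPx, aPy ) ? Button::State::hover : Button::State::idle;
	}
}
```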
In your report: Describe your implementation. What are the steps to adding another UI element? Include screenshots of the UI, including the different button states.
1.12 Measuring performance
5 marks
Measure the performance of your renderer. This task primarily focuses on the GPU rendering performance (recall that the GPU runs asynchronously from the CPU). You must therefore use GL_TIMESTAMP queries with the glQueryCounter() function. Study the documentation for this function, and refer to the OpenGL wiki's page on query objects for additional information.
Use the queries to measure rendering performance. Each frame, you should measure:
• the full rendering time (excluding, e.g., glfwSwapBuffers() and other non-rendering code).
• the time needed to render the parts from Sections 1.2, 1.4 and 1.5 individually, i.e., as three separate times.
Additionally measure frame-to-frame times and the time it takes to submit the rendering commands with OpenGL using a CPU timer such as std::chrono::high_resolution_clock.
Perform the measurements over multiple frames. Make sure you run in release mode and that you have disabled any OpenGL debugging and removed any potentially blocking calls (e.g., glGetError()). Ideally, try to run this on one of the newer machines in 2.05, e.g., with the NVIDIA RTX GPUs.
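The basic pattern, sketched below, brackets the measured section with two GL_TIMESTAMP queries and reads the results back only once they are available, so that the readback does not stall the GPU. In practice you would keep a small ring of query objects (e.g., one pair per in-flight frame):

```cpp
GLuint q[2];
glGenQueries( 2, q );

// During rendering:
glQueryCounter( q[0], GL_TIMESTAMP );   // GPU records the time when it reaches this
// ... draw calls for the section being measured ...
glQueryCounter( q[1], GL_TIMESTAMP );

// A frame (or more) later:
GLint available = 0;
glGetQueryObjectiv( q[1], GL_QUERY_RESULT_AVAILABLE, &available );
if( available )
{
	GLuint64 begin = 0, end = 0;
	glGetQueryObjectui64v( q[0], GL_QUERY_RESULT, &begin );
	glGetQueryObjectui64v( q[1], GL_QUERY_RESULT, &end );
	double const elapsedMs = double(end - begin) * 1e-6;  // ns -> ms
	// Accumulate elapsedMs into your per-section statistics here.
}
```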
In your report: Describe your implementation. Document your results (table or plot) and discuss these. Com- pare the different timings. Are they what you would expect? Are the timings reproducible in different runs? Do they vary as you move around? Explain your observations. (No screenshots required.)
2 Implementation Requirements
For full marks, you are expected to follow the requirements/specification detailed in this document. Further, you are expected to apply good (graphics) engineering practices, including:
• Manage resources! In debug mode, you should clean up all resources and allocations. In release mode, you may choose to “leak” some resources on exit as an optimization (which the OS will free up once the process terminates), but in order to aid debugging, you must not do so in debug builds. Furthermore, you should never leak run-time allocations that cause resource usage to continually increase.
• Do not unnecessarily duplicate data. Typically, any large resource should be allocated at most once and never be duplicated. If you duplicate resources, you should make this an informed choice and give a reason for doing so (e.g., double-buffering due to concurrent accesses, or in order to enable batching for more efficient draw calls).
• Do not do unnecessary work in the pipeline: Don’t do computations in the fragment shader if they could be done in the vertex shader. Don’t do computations in the vertex shader if they could be done once on the CPU. Don’t repeat computations each frame if they can (reasonably) be done once on startup. Etc. (This is sometimes a trade-off between memory and compute. If in doubt, motivate your choice.)
• Use appropriate data structures. For example, by default you should prefer an std::vector over a linked list. Don’t use a shared_ptr when a unique_ptr would do. ...
• Don’t reinvent the wheel unnecessarily. For example, do use the C++ standard library’s containers, rather than implementing your own. (The exercise requires you to implement your own vector/matrix functions. This can be considered a special exception to this guideline.)
Wrapping up
Please double-check the submission requirements and ensure that your submission conforms to these. In particular, pay attention to file types (archive format and report format) and ensure that you have not included any unnecessary files in the submission.
Make sure you have included the table with individual contributions in your report (see Section 1).