Project 1: Ray Tracer
COMP30019 - Graphics and Interaction Semester 2, 2022
This project is individual work (30 marks). Due: 4th September 2022, 23:59 AEST
Assignment Brief
You are tasked with building a ray tracer. Your ray tracer will output a single static PNG image, based on an input ‘scene’ file and command line arguments. We have provided you with a template C# implementation that you will need to complete. We are not using the Unity engine in this project; however, you may find that some of the theory in this assignment transfers to Unity development (particularly the maths). The assignment is broken down into numerous stages and steps. Our expectations for each stage, and the marks allocated to it, are outlined in detail below. You should aim to complete the steps in sequence, since this will make the process less overwhelming.
There are various approaches to modelling how light interacts with surfaces in a scene. Almost always, the choice of approach comes down to a trade-off between computational complexity and realism. A ray tracing based approach can produce very realistic images, however this comes with a significant computational cost that generally makes it unsuitable for real-time rendering. Even if there are no real-time rendering requirements, we still have to approximate and optimise the ray tracing process, since simulating all rays in a scene is computationally intractable.
Template code
You will be given a GitHub repository to work on your project that is already pre-initialised with the template code. This is a private repository, so you may commit/push to it without worry of other students having access to your work. You are expected to use GitHub from the start through to the end of the project, and should commit and push frequently. We won’t accept submissions not hosted in your private repository.
A link to accept the assignment and automatically create your template repository is provided on the Canvas project page (where you found this specification document). Note that you may submit the assignment as many times as you wish – only the latest will be marked.
Stage 1 - Basic ray tracer (9 marks)
You will first implement the basic functionality of a ray tracer. At its core, ray tracing is an application of geometry and basic linear algebra (vector maths will become your bread and butter!). For example, a ray of light can be modelled by two three-dimensional vectors: a starting position and direction. Surfaces, light sources, and other entities in the environment can also be defined using vectors. Using geometry, it is possible to calculate how a ray reflects off a surface, or perhaps even refracts through it. Ultimately we are interested in simulating rays of light propagating throughout the environment, interacting with various surfaces, before finally reaching the viewer as pixels on their screen. If we are clever in utilising ‘real-life’ physical models for these interactions, we can generate incredibly realistic scenes.
Stage 1.1 - Familiarise yourself with the template
Before writing any code, try to understand how the template provided to you works. We have already taken care of quite a few details for you, such as input and output handling. A sample input scene is provided to you in a text file (tests/sample_scene_1.txt), and a parser for this file has been written so you can access objects and resources directly within the Scene class (src/scene/Scene.cs). The core ray tracing logic (which you will write) should be implemented inside the Render() method in Scene.cs. This method takes an Image object for which you can set the individual colour of each pixel, as well as query properties such as its width and height. When the program is run, this image will automatically be written out as a PNG image file.
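For instance, a first sanity check might fill the whole image with a single colour, as in the sketch below. The Width, Height and SetPixel(x, y, color) members, and the (r, g, b) Color constructor, are assumptions here – check the template for the exact names.

public void Render(Image outputImage)
{
    for (int y = 0; y < outputImage.Height; y++)
    {
        for (int x = 0; x < outputImage.Width; x++)
        {
            // Paint every pixel red to confirm the output pipeline works.
            outputImage.SetPixel(x, y, new Color(1, 0, 0));
        }
    }
}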
Stage 1.2 - Implement vector mathematics
We have provided you with a C# struct template for representing a three-dimensional vector (src/math/Vector3.cs). Write code to complete the missing operations, which are currently empty methods. Note that for convenience we have overloaded operators such as +, * and /. This is a handy language feature that allows us to perform vector arithmetic concisely:
Vector3 a = new Vector3(0, 1, 0);
Vector3 b = new Vector3(1, 1, 0);
Vector3 c = a + b; // We overloaded ‘+’ so c = (1, 2, 0)
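As an illustration, two of the missing operations might be completed as shown below. This is a sketch only – it assumes the struct exposes its components as X, Y and Z, and that the stub method names match:

public double Dot(Vector3 with)
{
    // Sum of the component-wise products of the two vectors.
    return this.X * with.X + this.Y * with.Y + this.Z * with.Z;
}

public Vector3 Normalized()
{
    // Divide each component by the vector's length, giving length 1.
    double length = Math.Sqrt(this.Dot(this));
    return new Vector3(this.X / length, this.Y / length, this.Z / length);
}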
Stage 1.3 - Fire a ray for each pixel
We have already provided you with a ‘ray’ structure (src/math/Ray.cs). Notice that it is simply a position (origin) and a direction, both represented as vectors. While it is possible to trace rays forwards from light sources in the scene, it is far more efficient to trace rays backwards from the viewer: for each pixel in the output image, fire a ray from the camera through that pixel into the scene, and use what it hits to determine the pixel’s colour.
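As a rough sketch, generating one ray per pixel might look like the following. The pinhole camera fixed at the origin facing the +z axis, and the 60 degree field of view, are assumptions made purely for illustration:

double fov = 60.0 * Math.PI / 180.0;
double aspectRatio = (double)outputImage.Width / outputImage.Height;
for (int py = 0; py < outputImage.Height; py++)
{
    for (int px = 0; px < outputImage.Width; px++)
    {
        // Map the pixel centre to normalised image coordinates (0..1).
        double nx = (px + 0.5) / outputImage.Width;
        double ny = (py + 0.5) / outputImage.Height;
        // Scale to the image plane at z = 1, correcting for aspect ratio.
        double cx = (2 * nx - 1) * Math.Tan(fov / 2) * aspectRatio;
        double cy = (1 - 2 * ny) * Math.Tan(fov / 2);
        Ray ray = new Ray(new Vector3(0, 0, 0), new Vector3(cx, cy, 1).Normalized());
        // ... trace the ray and set the pixel's colour ...
    }
}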
Stage 1.4 - Calculate ray–entity intersections
In this project a scene can contain three types of primitive entities – planes, triangles and spheres. If you haven’t already, open the template classes provided in the src/primitives folder:
- Plane.cs – Represented by a point (center), and a vector representing the direction it faces (normal – i.e., perpendicular to the actual surface of the plane). Note this defines an ‘infinite’ plane.
- Triangle.cs – Represented by three points (v0, v1, v2). A clockwise winding order defines the front face of the triangle.
- Sphere.cs – Represented by a point (center) and a radius.
All of these classes implement the interface SceneEntity. This means they all contain a method called Intersect(), which takes a Ray as its input and returns a RayHit as its output. The returned RayHit structure contains important information used during ray tracing: the incident ray direction, the position of the hit, and the normal of the surface at that position. It is also possible that there is no hit at all, in which case null should be returned instead of a RayHit instance. Your job is to implement the Intersect() method for all three primitive entities. Again, you may find it helpful to research common mathematical approaches to these problems if you are stuck. Once implemented, checking every entity in the scene against a given ray might look like this:
foreach (SceneEntity entity in this.entities)
{
    RayHit hit = entity.Intersect(ray);
    if (hit != null)
    {
        // We got a hit with this entity!
        // The colour of the entity is entity.Material.Color
    }
}
Note that SceneEntity is an interface, so we don’t know exactly which type of primitive each entity is (plane, triangle or sphere) – but that does not matter, since we are only interested in the intersection itself.
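To make this concrete, here is a sketch of Sphere.Intersect() using the standard quadratic formulation – one common approach, not the only valid one. The Origin/Direction property names and the RayHit constructor signature are assumptions, and the ray direction is assumed to be normalised:

public RayHit Intersect(Ray ray)
{
    Vector3 oc = ray.Origin - this.center;
    double b = 2 * oc.Dot(ray.Direction);
    double c = oc.Dot(oc) - this.radius * this.radius;
    double discriminant = b * b - 4 * c; // a = 1 for a unit direction
    if (discriminant < 0) return null;   // The ray misses the sphere.
    double t = (-b - Math.Sqrt(discriminant)) / 2;
    if (t < 0) return null; // Hit is behind the origin (inside-the-sphere case omitted for brevity).
    Vector3 position = ray.Origin + ray.Direction * t;
    Vector3 normal = (position - this.center).Normalized();
    return new RayHit(position, normal, ray.Direction, this.material);
}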
We have provided you with sample outputs in the images folder, so you have an indication of how your output should look for stages 1 and 2 respectively.
Stage 2 - Lighting and materials (9 marks)
In this stage you will extend the ray tracer to handle lighting, and model different types of materials. Some materials are simpler to simulate than others; the complexity ultimately boils down to how light rays interact with them.
Note that every entity is assigned a material. The material contains properties that allow us to calculate how light interacts with the entity – for example, its colour, whether it is opaque, reflective, transparent, etc. Open src/core/Material.cs to see our representation of a material in this project. Note that you already used the Color property in the previous stage.
Stage 2.1 - Diffuse materials
We will first consider the case where a ray intersects a diffuse surface that is directly illuminated by a light source. When light hits an ‘ideal’ diffuse material, it scatters uniformly in all directions. This means the result is viewer-independent, and the intensity varies only with the angle of incidence between the light source direction and the surface. Diffuse lighting is so cheap to compute that it is regularly used in real-time rendering techniques (not just ray tracing).
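In code, Lambertian shading at a hit point might look like the sketch below, where hit is the closest RayHit and entity is the entity it came from. The PointLight type with Position and Color properties, and component-wise Color arithmetic, are assumptions here:

Color color = new Color(0, 0, 0);
foreach (PointLight light in this.lights)
{
    Vector3 lightDir = (light.Position - hit.Position).Normalized();
    // Intensity follows the cosine of the angle of incidence.
    double intensity = hit.Normal.Dot(lightDir);
    if (intensity > 0)
    {
        color += entity.Material.Color * light.Color * intensity;
    }
}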
Stage 2.2 - Shadow rays
Consider the fact that light rays may be blocked by objects in the scene. This should lead to visible shadows. Extend your implementation to check whether a hit point is in fact in a shadow. You can do this by firing another ray towards the light source from that point, and checking if it hits a (closer) surface along the way. If there is a hit, then that light source should not contribute to illumination at that point.
Be careful when firing a ray away from the surface of an object. Numerical error could lead to a ‘premature’ hit with that same object! One solution is to offset the origin of the ray slightly away from the surface.
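Putting both ideas together, the shadow test might be sketched as follows. The 1e-4 offset is an arbitrary small epsilon, and the Length() method on Vector3 is assumed:

Vector3 toLight = (light.Position - hit.Position).Normalized();
double lightDist = (light.Position - hit.Position).Length();
// Offset the origin along the normal to avoid hitting the surface itself.
Ray shadowRay = new Ray(hit.Position + hit.Normal * 1e-4, toLight);
bool inShadow = false;
foreach (SceneEntity other in this.entities)
{
    RayHit shadowHit = other.Intersect(shadowRay);
    // Only hits strictly between the surface and the light block it.
    if (shadowHit != null && (shadowHit.Position - shadowRay.Origin).Length() < lightDist)
    {
        inShadow = true;
        break;
    }
}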
Stage 2.3 - Reflective materials
This is where ray tracing really starts to shine – no pun intended! Extend the ray tracer to handle materials with the Reflective type. When a ray hits a reflective material, another ray should be recursively traced to determine the colour at that point. To do this you need to calculate a reflection vector as a function of the hit point’s surface normal and the incident ray direction. This should be pure reflection – the colour of the material plays no role in the calculations. Note that if there are a lot of reflective surfaces in a scene, computational costs can blow out significantly, so you may wish to place a hard limit on the depth of recursion (i.e., how many new ‘reflection’ rays you are willing to ‘fire’).
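For example, using the standard formula r = d - 2(d · n)n for reflecting the incident direction d about the unit normal n, the recursive step might be sketched as follows. TraceRay, depth and maxDepth are hypothetical names for your own recursion machinery:

// Reflect the incident direction about the surface normal.
Vector3 d = hit.Incident;
Vector3 r = d - hit.Normal * (2 * d.Dot(hit.Normal));
// Offset the origin to avoid self-intersection (see stage 2.2).
Ray reflectedRay = new Ray(hit.Position + hit.Normal * 1e-4, r);
if (depth < maxDepth) // Hard recursion limit, as suggested above.
{
    color = TraceRay(reflectedRay, depth + 1);
}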
Stage 2.4 - Refractive materials
Some materials are transparent, allowing light to pass through them. Glass is an example of such a material. Unfortunately, simulating this effect realistically is not as simple as allowing an incident ray to travel straight through the object. Indeed, you may have observed that light can ‘bend’ as it passes through transparent media (take a look at any curved glass object). This phenomenon is known as refraction. Extend the ray tracer once again to handle materials with the Refractive type. In a similar way to how you handled reflection, upon a ray hitting the surface you should recursively trace a ray through the object according to the physical laws of refraction. The colour of the material should not play any role in the calculations at this stage. Note that materials have an additional RefractiveIndex property, which will come in handy here.
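A sketch of the entry case using Snell's law in vector form is given below; the ray is assumed to pass from air (index 1) into the material, and TraceRay is again a hypothetical recursive helper. When the ray exits the object you would flip the normal and invert eta:

double eta = 1.0 / hit.Material.RefractiveIndex; // entering from air
double cosI = -hit.Normal.Dot(hit.Incident);
double sinT2 = eta * eta * (1 - cosI * cosI);
if (sinT2 <= 1) // Otherwise: total internal reflection.
{
    double cosT = Math.Sqrt(1 - sinT2);
    Vector3 t = hit.Incident * eta + hit.Normal * (eta * cosI - cosT);
    // Offset the origin slightly inside the surface this time.
    Ray refractedRay = new Ray(hit.Position - hit.Normal * 1e-4, t);
    color = TraceRay(refractedRay, depth + 1);
}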
Stage 2.5 - The Fresnel effect
In the real world, refraction does not really occur in total isolation from reflection. When light hits a refractive surface, some proportion of it is reflected, while the rest is refracted (the proportions sum to 1, since energy is conserved). This split is not uniform for all rays which hit the surface: as a ray’s angle of incidence increases, a greater proportion is reflected rather than refracted. If you look at a sheet of glass front on, you will see that most of the light is refracted (transmits through). However, if you look at it almost side-on, it looks a lot more reflective!
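The Schlick approximation is one commonly used way to estimate the reflected proportion kr (the refracted proportion is then 1 - kr). This sketch assumes the entering-from-air case as in the previous stage:

double n = hit.Material.RefractiveIndex;
double r0 = Math.Pow((1 - n) / (1 + n), 2); // reflectance at normal incidence
double cosTheta = -hit.Normal.Dot(hit.Incident);
double kr = r0 + (1 - r0) * Math.Pow(1 - cosTheta, 5);
// The final colour blends the two recursive traces:
// color = reflectedColor * kr + refractedColor * (1 - kr);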
Stage 2.6 - Anti-aliasing
You may have noticed that the images being produced so far contain somewhat jagged edges. This is because details in the scene can differ at the sub-pixel level when they are projected onto the final image. This is a common problem in computer graphics generally, and is called aliasing. Aliasing is usually quite visible where there are curved edges, or edges that are not aligned horizontally or vertically with the screen (think about why). We can use various techniques to mitigate this problem, and this process is called anti-aliasing. Here you should implement anti-aliasing via supersampling: fire multiple rays per pixel and average the resulting colours. The sampling multiplier is passed on the command line, for example:
dotnet run -- -f tests/sample_scene.txt -o output.png -x 2
The argument -x specifies this multiplier, which in this example is 2. This means you should fire twice as many rays both horizontally and vertically (4x rays per pixel). If the multiplier is 3, then you should fire three times as many rays in both directions (9x rays per pixel). And so on. Note that we have already parsed this command-line argument for you! It is accessible within the Scene class as options.AAMultiplier, so you don’t need to worry about how to read it into your program.
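One straightforward interpretation is a regular grid of sub-pixel samples averaged per pixel, sketched below. TraceThroughPixel is a hypothetical helper that fires a ray through normalised image coordinates, and Color is assumed to support addition and scalar division:

int m = options.AAMultiplier;
Color sum = new Color(0, 0, 0);
for (int sy = 0; sy < m; sy++)
{
    for (int sx = 0; sx < m; sx++)
    {
        // Place each sub-sample at the centre of its grid cell.
        double nx = (px + (sx + 0.5) / m) / outputImage.Width;
        double ny = (py + (sy + 0.5) / m) / outputImage.Height;
        sum += TraceThroughPixel(nx, ny);
    }
}
outputImage.SetPixel(px, py, sum / (m * m));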
Stage 3 - Advanced add-ons (9 marks)
In this stage you are given the opportunity to implement some advanced add-on effects of your choosing. Some are more trivial to implement than others, and the allocated marks reflect their approximate difficulty and/or time commitment. In completing these questions to a high standard, we expect that you research various approaches, and make informed decisions to maximise the outcomes of the intended effects. You should write some detailed comments in your README.md file which describe the approach you have taken. It is not possible to receive more than the 9 marks allocated to this stage, even if you complete multiple add-ons.
Stage 3, Option A - Emissive materials (+6)
Up until this point we considered lights to be infinitely small points. This is an approximation, and not how lighting typically works in the real world. Consider a fluorescent bulb, which is a long cylindrical shape. It would be inappropriate to model this using a single point. Even a standard light globe, which comes close to being modelled by a ‘point’, is still a physical object with a surface area that emits light. We call such surfaces emissive.
Stage 3, Option B - Ambient lighting/occlusion (+6)
The current ray tracer only simulates a very small subset of rays in a scene. Consider how the illumination of a point on a diffuse object is currently calculated. We test if there is a direct ray originating from a light source. If there isn’t, or there is an object in-between, then the point isn’t illuminated at all. However, in reality, there may still be some illumination that comes from indirect rays of light (e.g. bouncing off other surfaces in the scene). This is called ambient lighting. When ambient lighting is computed, we consequently compute ambient occlusion in the scene, since indirect light will not illuminate all surfaces equally. For example, surfaces inside crevices and cracks tend to be a lot darker since they are less ‘exposed’ to the rest of the scene.
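One common approach is Monte Carlo sampling: fire several rays in random directions over the hemisphere about the surface normal and measure what fraction escape the scene unobstructed. In this sketch, RandomHemisphereDirection and HitsAnything are hypothetical helpers, and numSamples trades quality against render time:

int unoccluded = 0;
for (int i = 0; i < numSamples; i++)
{
    Vector3 dir = RandomHemisphereDirection(hit.Normal);
    Ray aoRay = new Ray(hit.Position + hit.Normal * 1e-4, dir);
    if (!HitsAnything(aoRay)) unoccluded++;
}
// 1 = fully exposed to ambient light, 0 = fully occluded.
double ambient = (double)unoccluded / numSamples;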
Stage 3, Option C - OBJ models (+6)
Extend the ray tracer to read simple 3D models in the form of .obj files. Research how the .obj file format is structured. We only expect that you handle files with vertex, normal and face definitions (v, vn, f), and faces may be assumed to be triangles (exactly three vertices). Be sure to consider how more complex models could impact the performance of your ray tracer, and optimise relevant data structures accordingly. If you do implement optimisations, ensure these are discussed in your README.md.
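As a rough sketch, parsing just the three record types named above might look like this. The objFilePath and material variables are hypothetical, face indices follow the .obj convention of being 1-based, and the Triangle constructor taking a material is an assumption:

List<Vector3> vertices = new List<Vector3>();
List<Vector3> normals = new List<Vector3>();
List<Triangle> faces = new List<Triangle>();
foreach (string line in File.ReadLines(objFilePath))
{
    string[] parts = line.Split(' ', StringSplitOptions.RemoveEmptyEntries);
    if (parts.Length == 0) continue;
    if (parts[0] == "v")
        vertices.Add(new Vector3(double.Parse(parts[1]), double.Parse(parts[2]), double.Parse(parts[3])));
    else if (parts[0] == "vn")
        normals.Add(new Vector3(double.Parse(parts[1]), double.Parse(parts[2]), double.Parse(parts[3])));
    else if (parts[0] == "f")
    {
        // Each face token may be "v", "v/vt/vn" or "v//vn"; the first
        // index always refers to the vertex list.
        int i0 = int.Parse(parts[1].Split('/')[0]) - 1;
        int i1 = int.Parse(parts[2].Split('/')[0]) - 1;
        int i2 = int.Parse(parts[3].Split('/')[0]) - 1;
        faces.Add(new Triangle(vertices[i0], vertices[i1], vertices[i2], material));
    }
}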