I find computer graphics interesting for similar reasons to the Mandelbrot set: it is computationally intensive, there is a nice visual aspect, and there are plenty of good resources for learning how to do it myself.
This project has given me a better understanding of using the heap in Rust, and drawing on what I learned working on code at Siemens, I think I have structured the code to allow for future features more effectively.
All of the source code is available in a GitHub repo, and compiled binaries for various platforms are on the downloads page.
                    Scene defined in Blender and exported with a Python script. At the time, quads weren't implemented and there were some issues with defining triangles, leading to the strange texture of the doughnuts.
                    This is the demo scene I have used in development: there is a blue diffuse triangle, a grey specular quad, multiple specular spheres, and a specular plane as a floor. All of the specular surfaces allow for reflection between objects (depending on the maximum number of bounces).
After attempting to write my own matrix library for use in my aircraft simulator (post to come) and seeing it was more effort than it was worth, I decided to save myself the headache and use nalgebra. As with my other graphical projects, I used egui as a display to draw to.
Ideally, I would have had a strong enough grasp of geometry to derive the equations for a ray-sphere intersection myself, but I wasn't quite able to. Thankfully, there are plenty of guides online, and I used this one from Scratchapixel so that I had something to implement. Going through the derivation on paper a few times and drawing some diagrams of my own gave me the understanding to implement it in code. From here, to be lazy, I used the normal of the sphere at the hit point as the reflected ray, which let me start working on reflections between spheres. egui has good support for keyboard inputs, so I used that to let the user walk around the scene and rotate the camera.

The next feature, planes, felt similar to the sphere in that I wasn't sure how to define a plane from three vectors. Maybe I should have taken A-Level Further Maths, or graduated at a time other than during the pandemic, but alas. Similar to the plane, I added a triangle. This was done by finding the intersection of the ray with the plane, expressing the intersection point in terms of the vectors \(a, b, c\) that define the plane, and making sure that the intersection point \((b_i, c_i)\) satisfies \(0 \le b_i < 1\), \(0 \le c_i < 1\) and \(b_i + c_i < 1\). This greatly expanded what I can represent, to essentially any 3D model (after a bit of preprocessing). To prove this to myself, I added a simple CSV reader, which takes in a type of object and some points and converts them into data that my program can use. After a quick bit of work in Blender's Python editor, I could export essentially any 3D model in the world and have it displayed as I like.
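To give an idea of the shape of that triangle test, here is a minimal sketch using nalgebra. The `Ray` type, the function names, and the exact structure are illustrative stand-ins rather than the code from the repo:

```rust
use nalgebra::Vector3;

// Hypothetical ray type for this sketch; not the project's actual struct.
struct Ray {
    origin: Vector3<f64>,
    direction: Vector3<f64>,
}

/// Returns the distance along the ray at which it hits the triangle with
/// corner `a` and edge vectors `b` and `c`, or None if it misses.
fn ray_triangle_intersect(
    ray: &Ray,
    a: Vector3<f64>,
    b: Vector3<f64>,
    c: Vector3<f64>,
) -> Option<f64> {
    // Plane normal from the two edge vectors.
    let normal = b.cross(&c);
    let denom = normal.dot(&ray.direction);
    if denom.abs() < 1e-9 {
        return None; // ray is parallel to the plane
    }

    // Distance along the ray to the plane containing the triangle.
    let t = normal.dot(&(a - ray.origin)) / denom;
    if t < 0.0 {
        return None; // plane is behind the ray origin
    }

    // Express the hit point as p = a + b_i * b + c_i * c by solving a
    // small 2x2 system built from dot products of the edge vectors.
    let p = ray.origin + ray.direction * t - a;
    let (bb, bc, cc) = (b.dot(&b), b.dot(&c), c.dot(&c));
    let (pb, pc) = (p.dot(&b), p.dot(&c));
    let det = bb * cc - bc * bc;
    let b_i = (pb * cc - pc * bc) / det;
    let c_i = (pc * bb - pb * bc) / det;

    // Inside the triangle when 0 <= b_i, 0 <= c_i and b_i + c_i <= 1.
    if b_i >= 0.0 && c_i >= 0.0 && b_i + c_i <= 1.0 {
        Some(t)
    } else {
        None
    }
}
```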
Developing this software on one of Apple's M2 CPUs, I found performance was always good enough. However, after trying some of Blender's built-in meshes in my renderer, the increased number of objects quickly became an issue. Because every ray checks for intersections with every object, the cost scales with the number of objects multiplied by the number of rays, and reflections only multiply the ray count further. The jump from around 5 objects to closer to 50 was VERY noticeable, and I have a new-found appreciation for what GPU manufacturers are managing with real-time raytracing. The code is embarrassingly parallel, with the same code running on slightly different inputs, which made it a perfect candidate for rayon; that gave an almost unbelievable performance increase. This is still an area that needs addressing, though. After this, I noticed that reflections looked strange, almost as if they were a quick, hacky implementation. Remembering that they were indeed an unphysical implementation, I made some quick adjustments with the help of my brother and was a lot happier with the results.
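For a rough idea of what handing the per-pixel loop to rayon looks like, here is a sketch; `trace_pixel` and the colour type are placeholders rather than my actual code:

```rust
use rayon::prelude::*;

/// Render every pixel in parallel; each pixel is independent of the others,
/// so rayon can spread them across a thread pool with almost no changes.
fn render(width: usize, height: usize) -> Vec<[u8; 3]> {
    (0..width * height)
        .into_par_iter() // the only change from a serial iterator
        .map(|i| {
            let (x, y) = (i % width, i / width);
            trace_pixel(x, y)
        })
        .collect()
}

// Placeholder: build the camera ray for this pixel and trace it.
fn trace_pixel(_x: usize, _y: usize) -> [u8; 3] {
    [0, 0, 0]
}
```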
Finally, I have made some changes to add diffuse reflection. My implementation spawns a number of rays at the new intersection point, with their directions distributed semi-randomly. In comparison to specular reflections, which always spawn one ray in a known direction, this makes diffuse reflections significantly more expensive to compute. Once each ray has calculated its colour, the colours are averaged.
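A minimal sketch of that kind of diffuse bounce might look like the following; `trace`, the sampling scheme, and the parameter names are illustrative rather than the project's actual code:

```rust
use nalgebra::Vector3;
use rand::Rng;

/// Spawn several semi-randomly directed rays in the hemisphere around the
/// surface normal and average their colours.
fn diffuse_colour(
    hit_point: Vector3<f64>,
    normal: Vector3<f64>,
    samples: usize,
    depth: u32,
) -> Vector3<f64> {
    if depth == 0 {
        return Vector3::zeros(); // bounce limit reached
    }

    let mut rng = rand::thread_rng();
    let mut total = Vector3::zeros();

    for _ in 0..samples {
        // Pick a random direction (not a perfectly uniform distribution,
        // but semi-random in the same spirit), then flip it into the
        // hemisphere facing away from the surface.
        let mut dir = Vector3::new(
            rng.gen_range(-1.0..1.0),
            rng.gen_range(-1.0..1.0),
            rng.gen_range(-1.0..1.0),
        )
        .normalize();
        if dir.dot(&normal) < 0.0 {
            dir = -dir;
        }

        // Each secondary ray is traced like any other ray.
        total += trace(hit_point, dir, depth - 1);
    }

    // Average the contributions of all the secondary rays.
    total / samples as f64
}

// Placeholder for the renderer's real trace function.
fn trace(_origin: Vector3<f64>, _direction: Vector3<f64>, _depth: u32) -> Vector3<f64> {
    Vector3::zeros()
}
```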
More recently, I decided that the current CSV format is boring and doesn't have enough flexibility. For this reason, I wrote some code to parse JSON, allowing for a more readable and flexible format. There is an example of this format on GitHub. I was unsure how to properly implement serde's Serialize and Deserialize traits for my variety of types in a single array, so I wrote the code to do it myself. This ended up being a really enjoyable process; seeing how the structure of JSON translates into code that is quite recursive and relies on only a few functions was very fun. The CSV format was very hard to use and even worse to extend; hopefully this JSON format will allow for future extensions fairly easily.
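As a sketch of what parsing a mixed array by hand with serde_json can look like, here is a stripped-down version; the `Object` enum and the field names are made up for illustration rather than taken from my format:

```rust
use serde_json::Value;

// Illustrative object types, not the renderer's actual ones.
enum Object {
    Sphere { centre: [f64; 3], radius: f64 },
    Plane { point: [f64; 3], normal: [f64; 3] },
}

/// Walk the top-level JSON array; each entry's "type" field decides how
/// the rest of its fields are interpreted.
fn parse_scene(json: &str) -> Option<Vec<Object>> {
    let root: Value = serde_json::from_str(json).ok()?;
    let mut objects = Vec::new();

    for entry in root.as_array()? {
        let object = match entry.get("type")?.as_str()? {
            "sphere" => Object::Sphere {
                centre: parse_vec3(entry.get("centre")?)?,
                radius: entry.get("radius")?.as_f64()?,
            },
            "plane" => Object::Plane {
                point: parse_vec3(entry.get("point")?)?,
                normal: parse_vec3(entry.get("normal")?)?,
            },
            _ => return None, // unknown object type
        };
        objects.push(object);
    }
    Some(objects)
}

// A JSON array of three numbers becomes a 3D point.
fn parse_vec3(value: &Value) -> Option<[f64; 3]> {
    let array = value.as_array()?;
    Some([
        array.get(0)?.as_f64()?,
        array.get(1)?.as_f64()?,
        array.get(2)?.as_f64()?,
    ])
}
```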
This is by no means a finished project, more a paused one. While it is in a place where it can do a lot of what I want, there are some clear limitations that I know I can address. Since the last time I worked on this, I've moved between cities, spent time at home with family, and at the time of writing I'm about to go on holiday and then finish my degree. Hopefully I'll find time to implement these features.
At the moment, the material of a surface is the same across the entire object. While this is good enough for now, I would like to be able to read in image data and use it as the colour of a surface. The image crate can handle loading the data; the bigger issue is that in my code there is currently no way to pass along the position that is hit on an object. While this data does exist for triangles and quads, it becomes a lot more difficult for something like an infinite plane or a sphere. For a full implementation, I would also like to be able to scale and rotate the image, but I won't get ahead of myself. Representing all of this in JSON could also lead to some interesting files.
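If I do go this way, the \((b_i, c_i)\) coordinates from the triangle test could plausibly double as texture coordinates. Here is a hedged sketch using the image crate; the UV convention and wrapping behaviour are assumptions, not a decision I've actually made:

```rust
use image::GenericImageView;

/// Treat (b_i, c_i) directly as texture coordinates in [0, 1) and look up
/// the corresponding pixel in a loaded image.
fn sample_texture(texture: &image::DynamicImage, b_i: f64, c_i: f64) -> [u8; 3] {
    let (width, height) = texture.dimensions();

    // Scale to pixel coordinates, wrapping around the edges.
    let x = (b_i * width as f64) as u32 % width;
    let y = (c_i * height as f64) as u32 % height;

    let pixel = texture.get_pixel(x, y);
    [pixel[0], pixel[1], pixel[2]]
}
```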
The current implementation gives diffuse surfaces an unrealistic amount of brightness and vibrancy, because of some unphysical code that I wrote. I think that to fix it, I'm going to have to store a colour and a brightness value, rather than just a colour. Hopefully this shouldn't be too hard.
Both the .stl and .obj file formats are documented on Wikipedia, and both appear to be within my abilities to implement. A .stl reader seems like the simpler undertaking and would already greatly simplify importing geometry from elsewhere, but the .obj format would probably be a better test of my skills. .stl has no texturing information, and .obj only references it through a separate material file, so this might be something I need to figure out a solution for.
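To show how small an ASCII .stl reader could be, here is a rough sketch; it skips the binary variant and all error handling, and isn't code from the repo:

```rust
/// Collect the three "vertex x y z" lines of each facet into a triangle
/// and ignore every other line of the ASCII .stl file.
fn parse_ascii_stl(contents: &str) -> Vec<[[f64; 3]; 3]> {
    let mut triangles = Vec::new();
    let mut current: Vec<[f64; 3]> = Vec::new();

    for line in contents.lines() {
        let mut words = line.split_whitespace();
        if words.next() == Some("vertex") {
            let coords: Vec<f64> = words.filter_map(|w| w.parse().ok()).collect();
            if coords.len() == 3 {
                current.push([coords[0], coords[1], coords[2]]);
            }
        }
        // Every three vertices make up one facet.
        if current.len() == 3 {
            triangles.push([current[0], current[1], current[2]]);
            current.clear();
        }
    }
    triangles
}
```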
As I have mentioned, running my software can be very slow, but it can also be parallelised very easily. To address the runtime, more parallelisation could be used. I imagine this taking the form of multiple computers running the code, each rendering a different segment of the image. This could be challenging, as it would involve sharing the geometry data across computers, either manually or automatically, plus some communication to decide which set of rays needs rendering and to collect the resulting image data. I have plenty of computers, so this would be an interesting way to put them to work, albeit with limited use and a lot of work.