This is the raytracer assignment. Ray-object intersection, shading and lighting computations, and file I/O are the standard components. Beyond those, the assignment requires a unique scene and one additional objective; I chose anti-aliasing, in the form of super sampling, plus reflection.
We were given some very basic code that drew a background image, which looked like some very large, slightly shaded pixels. The first thing I tackled was changing that background. Discovering we'd been provided with an image processing interface, I worked out how to load pictures in as the default background. Proof of concept came when I got a picture of the city of Waterloo to appear. (Basic file I/O.)
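For the curious, the background lookup amounts to nothing more than scaling screen coordinates into image coordinates and copying the pixel whenever a ray hits nothing. A rough sketch of that idea in C++ (the Image type and names here are placeholders of mine, not the course's actual image interface):

    #include <vector>

    // Hypothetical image type: width x height, RGB values in [0, 1].
    struct Image {
        int width, height;
        std::vector<double> data;   // three doubles per pixel, stored row by row
        double at(int x, int y, int c) const { return data[3 * (y * width + x) + c]; }
    };

    // For a ray through screen pixel (px, py) that hits nothing, return the
    // colour of the corresponding pixel in the loaded background image.
    void background_colour(const Image& bg, int px, int py,
                           int screen_w, int screen_h, double rgb[3]) {
        int bx = px * bg.width  / screen_w;   // scale screen coordinates
        int by = py * bg.height / screen_h;   // into image coordinates
        for (int c = 0; c < 3; ++c) rgb[c] = bg.at(bx, by, c);
    }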
The next step in the assignment was getting sphere-ray intersection to work. For a solid week I had little black dots, and no amount of debugging or decoding would get rid of the friggin ... I mean wonderfully representative black dots. One reason I had black dots for so long was misreading the specifications and not recognizing how differently hierarchical and non-hierarchical scenes were stored in memory. Fixing that didn't solve all the black dot problems, but it helped. Another likely culprit was that I wasn't normalizing certain vectors. In any case, a week's worth of black dots was frustrating. Once that was sorted out, ray-plane intersection (e.g. for the cube, and subsequently for polygonal meshes) was straightforward.
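For the record, the sphere test that eventually behaved is just the standard quadratic, with the ray direction normalized up front (one of my black-dot suspects). Here's a sketch of the idea; the names are mine, not the starter code's:

    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    };

    // Returns the nearest positive t such that origin + t*dir hits the sphere,
    // or -1.0 if the ray misses.  dir is assumed to be normalized.
    double intersect_sphere(const Vec3& origin, const Vec3& dir,
                            const Vec3& centre, double radius) {
        Vec3 oc = origin - centre;
        double b = 2.0 * dir.dot(oc);
        double c = oc.dot(oc) - radius * radius;
        double disc = b * b - 4.0 * c;        // a == 1 because dir is unit length
        if (disc < 0.0) return -1.0;          // no real roots: the ray misses
        double sq = std::sqrt(disc);
        double t0 = (-b - sq) / 2.0;
        double t1 = (-b + sq) / 2.0;
        if (t0 > 1e-6) return t0;             // nearest hit in front of the ray
        if (t1 > 1e-6) return t1;             // ray origin is inside the sphere
        return -1.0;
    }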
Somewhere along the line crosshairs began appearing, and just as oddly as they arrived, they also disappeared. (There may have been delta error checking involved in getting rid of them...)
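My best guess in hindsight is that they were self-intersection artifacts, and the usual "delta" fix is to nudge secondary-ray origins slightly off the surface so it can't immediately re-intersect itself. Something along these lines (a sketch; the constant is arbitrary):

    // Nudge a shadow/reflection ray's origin off the surface along the normal
    // so floating-point error can't make the surface shadow itself.
    const double DELTA = 1e-4;

    void offset_origin(const double hit[3], const double normal[3], double out[3]) {
        for (int i = 0; i < 3; ++i) out[i] = hit[i] + DELTA * normal[i];
    }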
Shading brought along some funky coloured images. I kept computing the pixel shading colour incorrectly, though normal retrieval and colour collection were extremely straightforward. Shadow computation was also easily accomplished.
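The colour computation I kept botching boils down to the usual diffuse-plus-specular sum over lights, with a shadow ray toward each light zeroing that light's contribution when blocked. A hedged sketch of that step (the types and the occluded test are stand-ins, not my actual classes):

    #include <cmath>
    #include <algorithm>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 normalize(Vec3 v)     { double l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

    // One light's contribution at a hit point: diffuse + Phong specular,
    // or black if the shadow ray toward the light is blocked.
    // n is assumed to be unit length.
    Vec3 shade_one_light(Vec3 hit, Vec3 n, Vec3 eye, Vec3 light_pos, Vec3 light_col,
                         Vec3 kd, Vec3 ks, double shininess,
                         bool (*occluded)(Vec3 from, Vec3 to)) {
        if (occluded(hit, light_pos)) return {0, 0, 0};   // in shadow
        Vec3 l = normalize(sub(light_pos, hit));
        Vec3 v = normalize(sub(eye, hit));
        double nl = dot(n, l);
        double diff = std::max(0.0, nl);
        Vec3 r = {2 * nl * n.x - l.x, 2 * nl * n.y - l.y, 2 * nl * n.z - l.z};   // reflect l about n
        double spec = std::pow(std::max(0.0, dot(r, v)), shininess);
        return {light_col.x * (kd.x * diff + ks.x * spec),
                light_col.y * (kd.y * diff + ks.y * spec),
                light_col.z * (kd.z * diff + ks.z * spec)};
    }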
Once non-hierarchical scenes worked, it was time to move on to hierarchical ones (e.g. ones where objects are positioned relative to other objects). Initially the normals were facing the wrong direction. Once shadows "worked", I discovered they were perpetually 2-3 pixels off from where they ought to have been. Another 5-6 hours later, after gratefully debugging the code line by line with a TA who was floating around the lab on a weekend no less, I discovered that for some odd reason I'd been adding a fixed value to all of my intersection calculations. No idea what I was thinking when I put that line of code in.
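The hierarchical bookkeeping itself follows the standard pattern: push the ray through each node's inverse transform on the way down, and bring the hit normal back with the inverse transpose (which is also where wrong-facing normals tend to come from). A sketch of that pattern, using GLM only so it's concrete; the course's own matrix class plays the same role:

    #include <glm/glm.hpp>

    struct Hit { float t; glm::vec3 normal; bool found; };

    // Intersect a node whose transform (node-to-parent) is `trans`: move the ray
    // into the node's local space, test the children, then map the normal back
    // to the parent's space with the inverse transpose.
    Hit intersect_node(const glm::mat4& trans,
                       const glm::vec3& origin, const glm::vec3& dir,
                       Hit (*intersect_children)(const glm::vec3&, const glm::vec3&)) {
        glm::mat4 inv = glm::inverse(trans);
        glm::vec3 local_origin = glm::vec3(inv * glm::vec4(origin, 1.0f));   // point: w = 1
        glm::vec3 local_dir    = glm::vec3(inv * glm::vec4(dir, 0.0f));      // direction: w = 0
        Hit h = intersect_children(local_origin, local_dir);
        if (h.found)
            h.normal = glm::normalize(glm::transpose(glm::mat3(inv)) * h.normal);
        return h;
    }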
Bounding boxes, to cut down computation at render time, were constructed as a bounding cube whose edge length was the longest X, Y, or Z extent of the mesh, built by basic tracking of the extents during mesh loading. Bounding boxes are meant to drastically speed up rendering times; however, even with them, my rendering was awfully slow. One of the students in the lab was able to get macho cows (a standard scene with multiple polygonal meshes) to render in less than 2 minutes. For my final rendition, I believe it took some 15+ minutes.
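Constructing the bound is nothing fancier than recording minimum and maximum coordinates while the mesh loads, then testing the ray against the box (a slab test) before touching any faces; mine was the even cruder cube variant, but the usual axis-aligned version looks roughly like this (names are illustrative):

    #include <vector>
    #include <algorithm>
    #include <limits>
    #include <utility>

    struct Point { double x, y, z; };
    struct BBox  { double lo[3], hi[3]; };

    // One pass over the vertices during mesh loading records the extents.
    BBox build_bbox(const std::vector<Point>& verts) {
        BBox b;
        for (int i = 0; i < 3; ++i) {
            b.lo[i] =  std::numeric_limits<double>::max();
            b.hi[i] = -std::numeric_limits<double>::max();
        }
        for (const Point& p : verts) {
            const double c[3] = {p.x, p.y, p.z};
            for (int i = 0; i < 3; ++i) {
                b.lo[i] = std::min(b.lo[i], c[i]);
                b.hi[i] = std::max(b.hi[i], c[i]);
            }
        }
        return b;
    }

    // Standard slab test: if the ray misses the box, skip every face inside it.
    bool hits_bbox(const BBox& b, const double origin[3], const double dir[3]) {
        double tmin = -std::numeric_limits<double>::max();
        double tmax =  std::numeric_limits<double>::max();
        for (int i = 0; i < 3; ++i) {
            if (dir[i] == 0.0) {
                if (origin[i] < b.lo[i] || origin[i] > b.hi[i]) return false;
            } else {
                double t0 = (b.lo[i] - origin[i]) / dir[i];
                double t1 = (b.hi[i] - origin[i]) / dir[i];
                if (t0 > t1) std::swap(t0, t1);
                tmin = std::max(tmin, t0);
                tmax = std::min(tmax, t1);
            }
        }
        return tmax >= tmin && tmax > 0.0;
    }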
With all objectives completed except the required extra objective and a personalized scene, I moved on to creating the required objective. I wanted to do reflection, refraction, texture mapping, and bump mapping; however, after many hours of not getting reflection to work, and with time unfortunately running out before the deadline, I quickly (first try) got super sampling to work (a form of anti-aliasing: averaging several samples per pixel so edges look smoother). When a neighbouring coder got reflection to work, I grabbed/borrowed his attention and tried to parse through the reflection errors I was having. Turns out, once again, I was collecting colours (multiply vs. add vs. ...???) incorrectly. Once that was resolved, there was too little time to create, let alone render, the scenes I'd wanted to, so I constructed a scene with the camera looking at a mirror and a snowman standing behind the camera.
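Both extras really come down to how colours get combined. Super sampling just averages a small grid of rays per pixel instead of a single ray through the centre; reflection then recurses and folds the traced colour back in as local + ks * reflected, which is exactly the multiply-versus-add I'd been getting wrong. A hedged sketch of the per-pixel loop (the trace callback is a placeholder for casting a ray through image-plane coordinates):

    // Super sampling: shoot a small grid of offset rays inside each pixel and
    // average the results, rather than one ray through the pixel centre.
    const int GRID = 3;   // 3x3 = 9 samples per pixel

    void sample_pixel(int x, int y, double rgb[3],
                      void (*trace)(double px, double py, double out[3])) {
        double sum[3] = {0, 0, 0};
        for (int i = 0; i < GRID; ++i) {
            for (int j = 0; j < GRID; ++j) {
                double sample[3];
                // Offsets spread the sub-samples evenly across the pixel.
                trace(x + (i + 0.5) / GRID, y + (j + 0.5) / GRID, sample);
                for (int c = 0; c < 3; ++c) sum[c] += sample[c];
            }
        }
        for (int c = 0; c < 3; ++c) rgb[c] = sum[c] / (GRID * GRID);
    }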