
CST 438 - Module 1

 Week 1, This is the first week of the course CST 438 Software Engineering. This first module focuses on the main differences between software engineering and programming. Both involve programming, but software engineering also takes into account the maintainability and sustainability of code over time, which can be influenced by many factors. The textbook "Software Engineering at Google" by Winters, Manshreck, and Wright brings up a few terms worth remembering, such as Hyrum's Law, shifting left, the Beyoncé Rule, etc. Additionally, we were introduced to REST APIs (Representational State Transfer), which let us exchange and manipulate data over the internet using the common HTTP methods such as GET, POST, DELETE, and PUT. We did something similar in the Internet Programming course, except this time we are keeping in mind the practices that go into software engineering. Initially, for a course i...
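
As a quick illustration of those HTTP methods, here is a minimal TypeScript sketch using fetch; the /api/students endpoint and its "name" field are made-up examples for this post, not the actual course assignment.

```ts
// Minimal sketch: exercising a REST resource with the common HTTP verbs.
// The /api/students endpoint and the "name" field are hypothetical.
const BASE = "http://localhost:8080/api/students";

async function listStudents(): Promise<unknown> {
  const res = await fetch(BASE); // GET: read the collection
  return res.json();
}

async function createStudent(name: string): Promise<unknown> {
  const res = await fetch(BASE, {
    method: "POST", // POST: create a new resource
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.json();
}

async function updateStudent(id: number, name: string): Promise<unknown> {
  const res = await fetch(`${BASE}/${id}`, {
    method: "PUT", // PUT: replace an existing resource
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.json();
}

async function deleteStudent(id: number): Promise<void> {
  await fetch(`${BASE}/${id}`, { method: "DELETE" }); // DELETE: remove it
}
```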

CST 325 - Module 7/8

 Weeks 7 & 8, This is a combined journal entry because we were given more time to implement our final projects along with learning how to add shadows (outside of ray tracing). Shadows were simpler to implement in the ray-tracer assignment because we have access to all the objects at render time. For rasterization, we have two options: build a shadow volume from the object in front of the light, cast onto the shadowed objects, or use an image texture (shadow mapping). Focusing on shadow mapping, it is interesting that we treat the light source as a camera and use it to create a depth texture. Then we render from the main camera and compare each fragment's distance from the light against the depth texture. Although it is still confusing to keep track of the different spaces (world, view, clip, and texture space), the core idea, an object's distance from the light matching up with the depth texture's value at that pixel, helps grasp the technique. Otherwise, the final was a fun task because it allows us t...
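
To make that depth comparison concrete, here is a minimal sketch of the shadow-map test written as plain TypeScript; in practice this runs in the fragment shader, and the names and bias value here are illustrative assumptions.

```ts
// Shadow-map depth test, sketched on the CPU for clarity.
interface Vec3 { x: number; y: number; z: number; }

// lightSpacePos is the fragment's position as seen from the light, already
// remapped so x/y index into the depth texture and z is depth from the light.
function isInShadow(
  lightSpacePos: Vec3,
  sampleDepth: (u: number, v: number) => number, // reads the depth texture
  bias = 0.005                                   // small offset to avoid shadow acne
): boolean {
  const closestDepth = sampleDepth(lightSpacePos.x, lightSpacePos.y);
  // If this fragment is farther from the light than what the light "saw"
  // at that texel, something is blocking it, so it is in shadow.
  return lightSpacePos.z - bias > closestDepth;
}
```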

CST 325 - Module 6

 Week 6, This week's module is focused on how we would illuminate an environment. While ray tracing would make this straightforward (yet expensive), the problem is that we are using rasterization, where we work with one object at a time; ray tracing would have access to all the objects in the scene. So we combine specific equations to get a result close to the rendering equation. In our case we use Phong shading, which combines ambient, diffuse, and specular lighting. Ambient shades everything to a bare minimum and looks flat. Next, we combine that with diffuse lighting, which is essentially light hitting a surface and being scattered equally in all directions over the upper hemisphere. Then we combine the result with specular, which is similar to diffuse, except that instead of scattering equally in all directions, the light reflects into a tighter cone around the reflection direction. We can tighten the specular highlight as we see fit. Additionally, we c...
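
Here is a minimal sketch of that ambient + diffuse + specular combination for a single point light; the vector helpers, material values, and colors are simplified assumptions, not the actual assignment shader.

```ts
// Phong-style shading for one point light, sketched in TypeScript.
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (v: Vec3, s: number): Vec3 => [v[0] * s, v[1] * s, v[2] * s];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const normalize = (v: Vec3): Vec3 => scale(v, 1 / Math.sqrt(dot(v, v)));

function phong(normal: Vec3, toLight: Vec3, toEye: Vec3, shininess: number): Vec3 {
  const n = normalize(normal);
  const l = normalize(toLight);
  const e = normalize(toEye);

  const ambient = 0.1;                                  // flat base illumination
  const diffuse = Math.max(dot(n, l), 0);               // scatter over the upper hemisphere
  const r = normalize(sub(scale(n, 2 * dot(n, l)), l)); // reflect the light direction about the normal
  // Specular: a cone around the reflection direction that tightens as shininess grows.
  const specular = Math.pow(Math.max(dot(r, e), 0), shininess);

  const surface: Vec3 = [1.0, 0.5, 0.2];                // example surface color
  return add(scale(surface, ambient + diffuse), scale([1, 1, 1], specular));
}
```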

CST 325 - Module 5

 Week 5, This week's module covers textures, frame buffers, blending, and transparency. Starting with texturing, we define texture coordinates, commonly from 0 to 1, which map a fragment's location into the texture we set up so we can look up its color. Oftentimes, the texture size doesn't match the pixel space on the screen, requiring magnification/minification. Textures can also be transformed, and when doing so, we define how they wrap: commonly the texture is repeated, clamped to the edge, or mirror-repeated. However, this alone often doesn't look good when viewing a textured object at an angle. This is where we use point sampling, bilinear filtering, mip-mapping, anisotropy, or different combinations of them. A frame buffer contains information at the per-pixel level, and we can utilize different buffers with it. For example, double buffering allows us to use a back buffer, which is what does the drawing, and sto...
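
Below is a minimal sketch of the wrapping and bilinear filtering ideas on a tiny grayscale texture; the Texture shape and helper names are made up for illustration, not the WebGL API itself.

```ts
// Texture-coordinate wrapping plus bilinear filtering, sketched in TypeScript.
type Texture = { width: number; height: number; data: number[] }; // row-major grayscale

const repeat = (t: number) => t - Math.floor(t);                  // wrap into [0, 1)
const clampToEdge = (t: number) => Math.min(Math.max(t, 0), 1);   // alternative wrap mode

function texel(tex: Texture, x: number, y: number): number {
  const xi = Math.min(Math.max(x, 0), tex.width - 1);
  const yi = Math.min(Math.max(y, 0), tex.height - 1);
  return tex.data[yi * tex.width + xi];
}

// Bilinear filtering: blend the four nearest texels by how close (u, v) is to each.
function sampleBilinear(tex: Texture, u: number, v: number, wrap = repeat): number {
  const x = wrap(u) * tex.width - 0.5;
  const y = wrap(v) * tex.height - 0.5;
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const fx = x - x0, fy = y - y0;

  const top = texel(tex, x0, y0) * (1 - fx) + texel(tex, x0 + 1, y0) * fx;
  const bottom = texel(tex, x0, y0 + 1) * (1 - fx) + texel(tex, x0 + 1, y0 + 1) * fx;
  return top * (1 - fy) + bottom * fy;
}
```

Passing clampToEdge instead of repeat as the wrap argument shows how the same sampler behaves under a different wrap mode.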

CST 325 - Module 4

 Week 4, This was a packed week, mostly involving the graphics pipeline. In short, there are multiple stages to drawing an image on the screen, and the output of one stage is the input of the next. This involves both the CPU and GPU: the application loads the data and figures out what to draw and how, then the GPU, which is built for this purpose, applies the expensive calculations, interpolates, and draws the pixels. For this to be possible, we combine multiple matrices (object, world, view, etc.) into one matrix that takes us to screen space. Some key concepts explored are clipping, where parts of an object (or all of it) outside the view volume are ignored; culling, where we skip rendering parts of an object not visible to the camera (a good example being getting too close to an object and seeing through it); and rasterization, where the vectors and objects get translated to pixels, where we can also use the z-buffer to record depth and specifically draw the pixel of the object closest ...
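
Here is a minimal sketch of that z-buffer depth test; the buffer layout and Fragment shape are illustrative assumptions, not how the GPU actually stores things.

```ts
// Z-buffer idea: only write a pixel when the incoming fragment is closer
// than whatever has already been drawn there.
interface Fragment { x: number; y: number; depth: number; color: number; }

function drawFragment(
  frag: Fragment,
  colorBuffer: number[],
  depthBuffer: number[], // initialized to Infinity (or the far plane) before drawing
  width: number
): void {
  const i = frag.y * width + frag.x;
  // Depth test: smaller depth means closer to the camera in this sketch.
  if (frag.depth < depthBuffer[i]) {
    depthBuffer[i] = frag.depth; // record the new closest depth
    colorBuffer[i] = frag.color; // and draw this object's pixel
  }
}
```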

CST 325 - Module 3

 Week 3, This week serves as another refresher, only this time it is centered on matrices and how we use them for transformations. In short, we are able to use an identity matrix (when multiplied, no changes occur) along with translation, rotation, and scale matrices. Also worth noting is that we can borrow another dimension, making the matrices 4x4, which helps us perform translations with matrix multiplication. One major key point is that matrix multiplication in function notation reads "right to left", which, depending on the results you want in graphics, can sometimes be a bit confusing. Another key point in the lectures is gimbal lock, where a combination of rotations can leave two axes rotating in the same direction, resulting in the loss of a degree of freedom. There is much more to consider when using matrices, but I'll cut it short here.
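
As a small illustration of the 4x4 trick and the right-to-left ordering, here is a minimal TypeScript sketch; the column-major layout and the specific translate/rotate values are just example assumptions.

```ts
// 4x4 homogeneous transforms, stored column-major.
type Mat4 = number[]; // 16 numbers

function multiply(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

const identity: Mat4 = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];

// The borrowed fourth dimension is what lets translation live in a matrix at all.
const translate = (x: number, y: number, z: number): Mat4 =>
  [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  x, y, z, 1];

const rotateZ = (rad: number): Mat4 => {
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
};

// Multiplying by the identity changes nothing; the rest reads right to left:
// a point hit by this matrix is rotated first, then translated.
const model = multiply(identity, multiply(translate(2, 0, 0), rotateZ(Math.PI / 2)));
console.log(model);
```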

CST 325 - Module 2

 Week 2, The core concept of the module is rather fascinating. In short, we learned more about ray tracing, where we shoot a ray through what would be considered a pixel on an imaginary plane and look for something to "hit"; we then draw the pixel based on what we hit. If there is a light source, we compare the hit point to the light source and use the hit object's normal to shade the pixel. However, if we want more detail, such as reflections, we can create another ray from the hit point and repeat the process recursively, then take the combination of those colors and apply it to the pixel. Casting shadows is a similar process, except instead of looking for the color of the hit object, we are just interested in whether there is an object between the point and the light source. I also learned more about simple anti-aliasing, which is just shooting more rays per pixel in a certain pattern, then averaging the returned colors across the rays, then drawing the pixel. This was...
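
Here is a minimal sketch of that anti-aliasing idea, supersampling a pixel with jittered rays and averaging the results; random jitter stands in for the sampling pattern, and the Color type and traceRay callback are stand-ins for whatever the ray tracer actually provides.

```ts
// Simple anti-aliasing by supersampling: several jittered rays per pixel, averaged.
type Color = [number, number, number];

function shadePixel(
  traceRay: (px: number, py: number) => Color, // the ray tracer itself, passed in
  x: number,
  y: number,
  samples = 4
): Color {
  const sum: Color = [0, 0, 0];
  for (let i = 0; i < samples; i++) {
    // Jitter the sample position inside the pixel's footprint.
    const c = traceRay(x + Math.random(), y + Math.random());
    sum[0] += c[0]; sum[1] += c[1]; sum[2] += c[2];
  }
  // Average the samples to smooth out jagged edges.
  return [sum[0] / samples, sum[1] / samples, sum[2] / samples];
}
```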

CST 325 - Module 1

 Week 1, This is the first week of the course CST 325 Graphics Programming. The first week serves as a refresher on some important math concepts and how we can use them to generate images. One core point is that what a vector conveys depends on the context of what we are trying to achieve: it can represent a point, measured from the origin to the tip of the vector, or just a direction and magnitude, without caring about where it is. Additionally, we were re-introduced to some vector arithmetic and the dot product, and how we can use it for many things, such as projection, finding an angle, etc. The most interesting to me is the ray-sphere intersection, where the goal is to find where the ray may intersect the sphere, resulting in a "hit", and if there is a hit, we do something with it. There can be many applications of this, such as rendering the sphere by "shooting" a ray through each pixel of the screen. Or maybe a hit-scan weapon in a ga...
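
Here is a minimal sketch of that ray-sphere test using the quadratic formula; the vector helpers and the return convention (distance along the ray, or null for a miss) are simplified assumptions.

```ts
// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Ray: origin o plus t times direction d (d assumed normalized).
// Sphere: center c, radius r. Returns the nearest hit distance, or null for a miss.
function raySphere(o: Vec3, d: Vec3, c: Vec3, r: number): number | null {
  const oc = sub(o, c);
  const b = 2 * dot(d, oc);
  const cc = dot(oc, oc) - r * r;
  const discriminant = b * b - 4 * cc; // a = dot(d, d) = 1 for a normalized d
  if (discriminant < 0) return null;   // the ray misses the sphere entirely

  const t = (-b - Math.sqrt(discriminant)) / 2;
  // Ignore hits behind the origin (and the inside-the-sphere case, for simplicity).
  return t >= 0 ? t : null;
}
```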