
CST 334 - Module 6

Week 6: This week's module was a continuation of maintaining concurrency between multiple threads. Like last week, we use pthreads for the usual threading activities; the only difference is that instead of using pthread condition variables along with the pthread functions related to them, we use semaphores. The cool thing about the semaphore is that it can serve as both the lock and the condition, since it is built around an integer. And because it uses an integer, we can also use it to control which thread starts first.

The main idea is that calling sem_wait() decrements the integer, and calling sem_post() increments the integer and signals a waiting thread. The reason for the incrementing and decrementing is that if a thread calls sem_wait() and the integer goes negative, the thread that called sem_wait() must now sleep. Once a thread that was allowed to continue beforehand (the value was still 0 or higher after its sem_wait() decremented it) calls sem_post(), one of the sleeping threads is woken up and allowed to run.
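To make the ordering trick concrete, here is a minimal sketch in C (my own toy example, not code from the module): because the semaphore starts at 0, the parent's sem_wait() drives the value negative and sleeps until the child's sem_post(), so the child always runs first.

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t s;

void *child(void *arg) {
    printf("child runs first\n");
    sem_post(&s);                  // increment; wakes the waiting parent
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 0);            // start at 0: parent must wait for child
    pthread_t p;
    pthread_create(&p, NULL, child, NULL);
    sem_wait(&s);                  // decrement to -1; sleep until the post
    printf("parent continues\n");
    pthread_join(p, NULL);
    return 0;
}
```

Set the initial value to 1 instead and the same semaphore acts as a lock around a critical section.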

CST 334 - Module 5

Week 5: This module had a lot to do with using threads and dealing with concurrency. Having a process use threads to complete smaller portions of its work concurrently is a good way to complete tasks quickly. However, the main problem with using threads is making sure the outcome of the process is determinate, meaning you get the result you want every time. Without taking measures to keep the process determinate, you can get an unexpected outcome. An example the text we are reading brings to light: say you want each of 2 threads to increment a global variable 1 million times, so you expect the whole process to end at 2 million. However, what you ended up getting was 1,984,865; not quite 2 million, is it? This happens when two or more threads try to read/write a global variable (a critical section) at the same time. Before one thread can finish changing the global variable, another thread reads the old value, and those overlapping updates get lost.
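Here is a minimal sketch of that counter example in C (the counts come from the textbook scenario; the rest is my own toy code). With the mutex held around the increment, the result is determinate at 2,000,000; remove the lock/unlock calls and the lost updates reappear.

```c
#include <stdio.h>
#include <pthread.h>

static volatile long counter = 0;                 // the shared global
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);                // enter critical section
        counter++;                                // load, add, store
        pthread_mutex_unlock(&lock);              // leave critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           // always 2000000 when locked
    return 0;
}
```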

CST 334 - Module 4

Week 4: This week of learning about memory virtualization included material on how paging works with the process and physical memory. Paging splits the address space into fixed-size units called pages (different from segmentation, which uses variable-size, contiguous segments). Physical memory is divided the same way, and its units are known as page frames. To keep track of these pages of addresses, we use a data structure called a page table, which maps virtual addresses to physical addresses by storing mappings of virtual page numbers (VPNs) to physical frame numbers (PFNs), along with a bit indicating whether the mapping is valid and other protection bits (readable, writable, executable). The PFN in combination with the valid and protection bits is known as a page table entry (PTE). Leaving out other bits of information to keep this brief: a virtual address is just the VPN in its high bits followed by the offset in its low bits. Using the page table, use the VPN to look up the PTE, check the valid bit, and, if the mapping is valid, combine the PFN with the original offset to get the physical address.
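A toy translation function makes the lookup easier to see (the 4 KB page size, 16-entry table, and field widths here are assumptions for the demo, not values from the module):

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12                        /* 4 KB pages: low 12 bits = offset */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
#define NUM_PAGES   16                        /* tiny table for the demo */

typedef struct {
    unsigned valid : 1;                       /* is this mapping present? */
    unsigned pfn   : 20;                      /* physical frame number */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Split the VA into VPN + offset, look up the PTE, check valid, rebuild the PA. */
int64_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;   /* high bits: page number */
    uint32_t offset = vaddr & OFFSET_MASK;    /* low bits: carried through */
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;                            /* would raise a page fault */
    return ((int64_t)page_table[vpn].pfn << OFFSET_BITS) | offset;
}

int main(void) {
    page_table[1] = (pte_t){ .valid = 1, .pfn = 7 };   /* map VPN 1 -> PFN 7 */
    uint32_t va = (1u << OFFSET_BITS) | 0x2A4;         /* VPN 1, offset 0x2A4 */
    printf("VA 0x%x -> PA 0x%llx\n", va, (long long)translate(va));
    return 0;
}
```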

CST 334 - Module 3

Week 3: There was a lot to learn this week, a large portion of it being about memory virtualization, how memory is managed (allocated/deallocated), and the algorithms used to allocate memory. To keep this short, I will talk about a few of the things I learned. When a user process wants to run, the CPU generates a logical address (virtual address). The logical address gets mapped to a distinct region of physical memory, which prevents programs from reading/writing each other's data. The address space contains the program's instructions, the heap for dynamically allocated memory, and a stack that holds return addresses, local variables, and function arguments. To allocate the memory, there are three common allocation algorithms mentioned in the readings. First fit finds the first chunk of free space the request fits in, which is fast but leaves behind smaller chunks of free space when the chosen chunk is bigger than the request. Best fit finds the free chunk that best fits the request (the smallest chunk still large enough), which wastes less space but requires searching the whole free list. Worst fit does the opposite and takes the largest chunk, hoping the leftover piece stays big enough to be useful.
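To show the difference in code, here is a small first-fit search in C (the free-list node layout is hypothetical): it stops at the first chunk large enough, whereas best fit would have to keep scanning the whole list for the smallest chunk that still fits.

```c
#include <stdio.h>
#include <stddef.h>

typedef struct node {
    size_t size;                              /* bytes free in this chunk */
    struct node *next;
} node_t;

/* First fit: return the first chunk with at least `size` bytes, or NULL. */
node_t *first_fit(node_t *head, size_t size) {
    for (node_t *cur = head; cur != NULL; cur = cur->next)
        if (cur->size >= size)
            return cur;                       /* stop at the first match */
    return NULL;
}

int main(void) {
    /* Free list: 10 -> 30 -> 20 bytes. A request for 15 takes the 30. */
    node_t c = { 20, NULL }, b = { 30, &c }, a = { 10, &b };
    node_t *hit = first_fit(&a, 15);
    printf("first fit for 15 bytes: chunk of %zu\n", hit ? hit->size : 0);
    return 0;
}
```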