Questions and answers for 2025W Operating Systems (CS-3520-01) at moodle31.upei.ca.
What are the potential problems with unlimited direct execution of processes?
Given a base address of 0x4000 and bounds of 0x2000, what would be the physical address for virtual address 0x1500?
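A quick worked check, assuming the standard base-and-bounds scheme (the virtual address is compared against the bounds register and, if legal, added to the base): 0x1500 < 0x2000, so the access is in bounds and the physical address is 0x4000 + 0x1500 = 0x5500. A minimal C sketch of that translation:

#include <assert.h>
#include <stdio.h>

/* Base-and-bounds: a virtual address must be less than the bounds;
 * if so, the physical address is simply base + virtual address. */
static unsigned int translate(unsigned int vaddr, unsigned int base, unsigned int bounds) {
    assert(vaddr < bounds);              /* an out-of-bounds access would trap instead */
    return base + vaddr;
}

int main(void) {
    /* 0x1500 < 0x2000, so: 0x4000 + 0x1500 = 0x5500 */
    printf("0x%x\n", translate(0x1500, 0x4000, 0x2000));
    return 0;
}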
I made a question that was too long to fit on one page; please see the attached double-gobbler question for the full text and the answer options.
After working through multiple incorrect solutions to the single-buffer-slot producer/consumer problem, we have finally settled on the following code. It incorporates two key new insights: using condition variables to signal when the buffer has been filled and when it has been emptied, and using a while loop instead of an if statement when checking whether there is something in the buffer (i.e., the value of count). (A sketch of the put() and get() helpers this code relies on follows the question below.)
int loops;
cond_t cond;
mutex_t mutex;

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        Pthread_mutex_lock(&mutex);
        while (count == 1)
            Pthread_cond_wait(&cond, &mutex);
        put(i);
        Pthread_cond_signal(&cond);
        Pthread_mutex_unlock(&mutex);
    }
}

void *consumer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        Pthread_mutex_lock(&mutex);
        while (count < 2)
            Pthread_cond_wait(&cond, &mutex);
        int tmp = get();
        Pthread_cond_signal(&cond);
        Pthread_mutex_unlock(&mutex);
        printf("%d\n", tmp);
    }
}
Which of the following is true about our code?
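For reference, a minimal sketch of the shared state the producer/consumer code above assumes; the names count, put(), and get() come from the snippet, while the single-slot buffer behaviour shown here is an assumption:

#include <assert.h>

int buffer;     /* the single buffer slot */
int count = 0;  /* 0 = empty, 1 = full */

void put(int value) {
    assert(count == 0);   /* only fill an empty buffer */
    count = 1;
    buffer = value;
}

int get() {
    assert(count == 1);   /* only empty a full buffer */
    count = 0;
    return buffer;
}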
If 100 threads all increment a shared variable that is initialized to 0, which of the following answers most accurately represents the possible value(s) of the shared variable after all threads finish? (Note: each thread will increment the variable exactly 1 time)
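A minimal sketch of that scenario (the worker function and variable names are illustrative); each thread performs one unsynchronized read-modify-write, so the final value depends on how the increments interleave:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 100

static volatile int shared = 0;   /* shared variable, initialized to 0 */

/* Each thread increments the shared variable exactly once, without a lock,
 * so its load and store can interleave with those of other threads. */
static void *worker(void *arg) {
    shared = shared + 1;
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    /* Prints 100 only if no increment is lost; lower if updates race. */
    printf("final value: %d\n", shared);
    return 0;
}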
Recall that demand paging brings a page into memory only when it is accessed, while prefetching brings extra pages into memory according to some policy. An example prefetching policy: when we need page P, we also bring in pages P+1 and P+2, since they are likely to be accessed soon as well.
When traversing a large linked list, which prefetching policy is most likely to improve performance by reducing page faults?