Computer Science 161
March 11, 2014
82 minutes/82 points
The points allocated to each problem correspond to how much time
we think the problem should take.
Please send your answers to me in a PLAIN TEXT file named your-lastname.txt
as an attachment, with the string "your-lastname" replaced with your
last name. (In a perfect world,
your text file would not have lines longer than 80 characters, but
I'll reformat it if it does.) Label the answers clearly so it is
obvious which answers correspond to which questions. Please place
your answers in order in the file. Send your completed exam to
This exam is open text book, open source code, open notes, and open
course web site. You
may not use computers (cell phones, PDAs, or any other device that can
talk to a network)
during the exam (other than to record your answers, to view OS/161
or System/161 code or documentation, or to examine your notes or the
course web site).
You must complete the exam on your own, without assistance.
Exam Policy Statement
At the end of your exam, assuming you have followed the rules of the exam,
please write the following statement and
then place your name after it, asserting that you have followed the
rules of the exam. "I have neither given nor received help on this exam,
and I have not used any disallowed materials in the completion of this exam."
1. True False (20 points)
Answer true if the statement is always true; otherwise answer false.
- A trap causes a mode change from user mode to kernel mode. F: a trap
could happen when you are already in kernel mode.
- Locks and binary semaphores provide identical functionality. F: a lock
must be unlocked by the same thread that locked it; the same is not true of a binary semaphore.
- Most operating systems use Shortest Time to Completion First (STCF) scheduling algorithms since STCF
is the optimal scheduling algorithm. F: It would be nice, but STCF requires predicting
how long each job will run, which we cannot do.
- The operating system frequently insulates application programmers from changes in hardware.
T: It may not always succeed, but quite often it does.
- Trap handlers execute in supervisor mode.
T: The trap puts the processor in supervisor/kernel mode.
- Thread (or threads) plus address space = process.
T: Processes encapsulate both address spaces and threads of execution.
- (Students) Working on problem sets is a preemptible activity.
T: Most definitely!
- Skydiving (once you've left the plane) is a preemptible activity.
F: Unless you can think of a way to stop freefall!
- It is possible to switch threads at either user level or kernel level.
T: Purely user-level threads can switch at user-level and kernel threads can
switch at kernel level.
- Virtual memory allows processes larger than main memory to run.
T: It also allows combinations of processes whose total memory needs exceed
memory to run and it lets processes run without some of their pages in memory.
2. Short Answer (24 points)
Answer the following questions in as few sentences as necessary.
- (5 points) What is the difference between a program (an executable file) and a process and how is a
program transformed into a process?
A process is an instantiation of a program. A program is
transformed into a process using loadelf.c, which takes care of instantiating things that don't
appear in the program itself, e.g., the heap, the stack.
- (3 points) Professor Seltzer is playing with kittens. She throws out a
yarn ball and the kittens all race to fetch it, but only one succeeds. There are 12 kittens.
After playing with the ball for a moment, the kitten brings the yarn ball back, returns it to
Professor Seltzer and gets a treat.
Which of deadlock, race condition, or starvation can occur? Please explain.
I'd call this starvation, because there is no way of selecting which kitten
gets the ball, so with twelve kittens and one ball of yarn, it's entirely possible that one
small or slow kitten never gets the ball, never gets a treat (and literally starves as well as
starvation in the operating system sense).
- (4 points) Explain, in terms of thread switches and domain crossings, what a process switch
involves.
A process switch involves:
1) a domain crossing from the user-level process into the kernel
2) a thread switch
3) a domain crossing from the kernel into the user-level process.
- (4 points) Explain how a scheduling algorithm could produce good throughput, but poor
response time. Then explain how a scheduling algorithm could produce good response time,
but poor throughput.
Good throughput but poor response time: FIFO with time slices -- assuming
the time slice is sufficiently large that process switch time does not dominate, this will keep
the processor busy and produce good overall system throughput, but none of the jobs will finish
until all the other jobs have run through their time slices, so response time is likely to be
somewhat pathetic. Good response time but poor throughput: multilevel feedback queues when there are
many, many interactive jobs and some long-running jobs - the interactive jobs will get good response
time, at the expense of switching to those jobs often, thus incurring somewhat large process
switch overhead.
- (3 points) Why don't we typically allow user level processes to disable interrupts?
If we allowed user processes to disable interrupts, a naughty process
could simply take over the system and prevent any other process from ever running again.
- (5 points) Explain how you would enhance a multi-level feedback queue scheduler to support
user-specified process priorities. That is, a user has a way to indicate that some processes
are more important than other processes.
I would use user priorities as a sort of "fudge factor" that moves processes onto
higher priority queues. There are some design tradeoffs you might make. For example, you could
place a process with priority P on the queue that ensures that no processes with lower (user) priority
are on higher priority queues. This makes user priorities dominant over the MLFQ priorities.
Alternately, you could just move a process up some number of queues depending on the magnitude of
its user priority.
3. Synchronization (20 points)
For each synchronization problem described below, select the best
synchronization primitives to solve the problem. You may use each
primitive as many times as you need to. Briefly, explain why you
chose the primitive you did. Select the synchronization
primitive from the following list:
- Counting semaphore
- Binary semaphore
- Lock (w/out a CV)
- Lock and condition variable
- Your dorm has 6 washing machines and you need to make sure there are no more
users of the machines than there are machines.
Counting semaphore: we didn't ask for any scheduling, just making
sure that we had the right number of users.
- The dryers in your dorm operate for only ten minutes at a time.
Everyone agrees that only one person's clothing should be in the dryer at
any time and that you can only take someone's clothes out of
a dryer if they are completely dry.
Lock plus CV: Now you want to arbitrate access to the dryers, but
not until the clothes in a dryer are dry (e.g., dryness would be your condition).
- Margo's cats like to sleep on her hip, but only one cat can do that
at any one time. If too many cats are trying to sleep on her hip, then
a cat fight ensues. How should the cats synchronize access to their
favorite sleeping spot?
Lock: Whichever cat gets the lock gets the hip. When a cat leaves voluntarily
or gets involuntarily dislodged (by the sleeper), it would relinquish the lock.
- A server process needs to block when there is no work to be done and
run when there is work to be done.
Lock + CV: the condition would be "there is work to be done."
- You are implementing a synchronization manager for a database. A process cannot
access a record while another process is accessing it. When a process completes
its work with a record, if there are multiple processes that next wish to access the
record, the process relinquishing access may want to select a particular process to run next.
Binary semaphore: I would give each thread its own semaphore, and when the
record it wanted wasn't ready, it would block on its own semaphore. Then the process releasing
a record would pick the process that it wants to run and do a V on that process's semaphore.
4. Virtual Memory (18 points)
The Stoopid architecture has a 12-bit, segmented, virtual address space.
The top three bits of an address identify a segment.
The next four identify a page number and the
last five are offsets.
- (2 points) How large (in bytes) are pages in this architecture?
2^5 = 32 bytes.
- (8 points) You are told that the architecture has a TLB, but that they haven't
quite worked out the details of a) what each TLB entry looks like, and
b) how many TLB entries they should include.
Propose a TLB entry design; then make (and justify) a suggestion for how
many TLB entries they should have (there is no a priori right answer for this
second question -- we're looking for the thought process behind picking a number).
I'm going to translate the combination of segment and page in the TLB since they are
both so tiny. So, I would need a 7-bit virtual page number, a physical page number --
let's say 59 bits because I have a HUGE physical memory [you could really have any size here],
and then a valid bit. Of course, I could also add things like permission bits (read, write,
execute) and/or address space IDs.
Now, in terms of how many entries: as mentioned in the question, there is no right answer.
Given that I've decided that I have an enormous physical address space, I'd like to have as
many entries as possible.
BUT I did not include address space IDs (I merely mentioned the possibility). So, the maximum
number of unique entries I could have in the TLB would be 2^7, so I guess I'd put 128 entries
in my TLB.
- (2 points) Assume that the architecture supports paging. How many entries would you expect
to find in a page table?
I would expect a page table per segment, so that would mean there are
2^4 = 16 pages in a segment and therefore 16 entries in the page table.
- (6 points) Stoopid Inc. has asked you to come in and consult for them. They know that NULL pointers
are an endless source of bugs. They are trying to decide whether they can/should make the
hardware guarantee that NULL pointers always cause faults or leave it to the operating system
to enforce that.
- Would it be possible for the hardware to guarantee that an access to the
memory location referenced by a pointer whose value is NULL always generates
a fault? (How?)
- If the hardware designers either can't or don't provide that support,
how would you design your operating system to make sure that accesses to the
location referenced by a NULL pointer always generate a fault?
1) I could hardwire an entry in the TLB that has virtual page number 0 (i.e.,
7 bits of 0's) to always be invalid. This would make the VAS essentially
1 "page" (32 bytes) smaller.
2) Alternately, the OS could make sure that page 0 of every process was not
mapped.