OS qual 1999

1. (Assuming "switch a capability list" means switching the currently-used capability list.)

(a) A capability list must be switched on each process switch. The OS adds new read/write entries to a process's list for each byte allocated to the process, and removes them when the byte is freed. It could also add an entry when one process grants another process shared access to a byte. Read capabilities could be granted to each process for each kernel routine it accesses. (If the OS does swapping at a finer grain than entire processes, it would have to keep a separate list of swapped-out capabilities, which get copied back into the main table when the corresponding bytes are swapped in.)

(b) The process could be denied a capability to its own (or anyone else's) capability list, so any attempt to access it would fail.

(c) Other reasons for having virtual memory:
- more flexible management: the extra level of indirection means chunks of physical memory (pages) can be moved around or swapped to disk transparently
- addressing more memory than is physically available

(d) A hardware TLB could cache recently used capabilities, possibly tagged with the process ID to avoid flushing on each context switch. It could also use extents, i.e. have each TLB entry cover a range of capabilities by giving a start and end address, to take advantage of locality.

2. Advantages of client caching:
- performance: a cache at the client avoids transferring data over the network on every access
- fault tolerance: the client can use cached data if the server is temporarily unavailable

Problems with client caching:
- a client crashes while holding a lock: use leases (locks that expire) instead of infinite locks
- heavy write sharing can cause more traffic than if the data were kept centrally: don't do that
- false sharing
- conflict resolution

3.
(a) Two effects of a crash during file creation:
- directory block and inode written, free list not yet written: the free list still contains the used inode, so another directory entry could later end up pointing to the same inode (i.e. an unexpected hard link)
- directory block written, inode and free list not yet written: the directory entry points to garbage, so the file shows up in the directory with invalid attributes and data

(b) Perform the operations in the following order, syncing after each one to guarantee that they reach the disk in that order:
- write the free list to disk
- write the new inode
- write the modified directory block

If a crash occurs, at worst an entry is missing from the free list and/or there is a useless inode; both can be cleaned up by the reconstruction (fsck) pass on the next boot.

(c) When creating the file, write the inode number and the disk address of the directory entry to NVRAM; after the updates reach disk, HFS clears the NVRAM. If a crash occurs before the new information is written to disk, the reconstruction procedure retrieves the inode address from NVRAM: if the directory entry contains the inode address, the directory entry is removed, and if the free list does not contain the inode address, the inode is added back to the free list. (A timestamp may be needed to verify that the inode is stale rather than validly reused.)

4. (a)
- virtual memory paging algorithms: estimate the working set of each process to keep recently used pages in physical memory
- TLB management: keep the most recently used page table entries in the TLB
- buffer cache: keep recently used disk blocks in memory
- current working directory: path lookups tend to cluster near recently used directories
- cylinder groups: place related inodes and data blocks near each other on disk

(b) If the OS could predict exactly when each page in RAM will next be accessed, then when selecting a page for eviction it would choose the one whose next access is farthest in the future (Belady's optimal algorithm). This would eliminate the need for access bits and fancy working-set algorithms. (If computers knew they were going to crash, they could avoid consistency problems, etc.)
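The ordered-write sequence in 3(b) can be sketched in code. This is a minimal illustration, not HFS's actual implementation: the block layout (free list in block 0, inode in block 1, directory entry in block 2) and the payload strings are made up, and a real file system would write through a buffer cache rather than raw pwrite calls. The point is the fsync between steps, which forces each update to disk before the next begins.

```python
import os
import tempfile

BLOCK_SIZE = 512

def write_block(fd, block_no, data):
    """Write one block's worth of data at the given block number."""
    os.pwrite(fd, data.ljust(BLOCK_SIZE, b"\0"), block_no * BLOCK_SIZE)

def create_file_ordered(fd, free_list, inode, dirent):
    """Write the three metadata updates in crash-safe order."""
    # 1. Free list first: a crash after this merely leaks the inode,
    #    which fsck can reclaim on the next boot.
    write_block(fd, 0, free_list)
    os.fsync(fd)
    # 2. Inode next: a crash here leaves an allocated but unreferenced
    #    inode -- again harmless, fsck reclaims it.
    write_block(fd, 1, inode)
    os.fsync(fd)
    # 3. Directory entry last: only now does the file become visible,
    #    and by this point everything it references is on disk.
    write_block(fd, 2, dirent)
    os.fsync(fd)

# Demo against a scratch file standing in for the disk.
path = tempfile.NamedTemporaryFile(delete=False).name
fd = os.open(path, os.O_RDWR)
create_file_ordered(fd, b"freelist-v2", b"inode-42", b"dirent:foo->42")
```

Note that the order of the first two steps could be swapped without losing safety; the essential invariant is that the directory entry is written last, so no crash can leave a visible name pointing at an uninitialized or still-free inode.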
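The "perfect prediction" policy in 4(b) is Belady's optimal (OPT) algorithm, and a small simulator makes the eviction rule concrete. The function name is my own and the reference string is the common textbook example; with perfect knowledge of future accesses, OPT simply evicts the resident page whose next use lies farthest ahead (or never comes).

```python
def opt_faults(refs, frames):
    """Count page faults under Belady's optimal (OPT) replacement:
    on a fault with all frames full, evict the resident page whose
    next access is farthest in the future."""
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue  # hit: no fault, nothing to evict
        faults += 1
        if len(resident) < frames:
            resident.add(page)  # free frame available
            continue
        # Look ahead: a page never referenced again sorts as infinity.
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # -> 9 faults with 3 frames
```

No real OS can run this, since it requires knowing the future reference string; its value is as a lower bound against which practical policies like LRU and working-set approximations are measured.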