OS qual 2001

1. (a) On a disk write, the file system has to update both the modified block and the parity block, where each bit in the parity block is the XOR of the corresponding bits in the file's data blocks. The new parity can be computed incrementally from the old data block, the old parity block, and the new data block. When the disk controller reports a bad-block error on a read or write, the file system recomputes the bad block by XOR-ing the remaining data blocks with the parity block. (A sketch of this arithmetic appears at the end of these notes.)

(b) With the former approach the file system must keep both the new block and the old block in the cache in order to recompute the parity, or it must reread the old block when the modified block is evicted. With the latter approach only the parity block has to be kept in the cache, and since there is just one per file, much less cache space is consumed.

(c) Always write the data block first, then the parity block. If recovery finds a mismatch, it assumes that the data block was written but the parity was not, and simply recomputes the parity block. (A recovery sketch also appears at the end of these notes.)

(d) Write the parity block first, then the data block. When a mismatch is found, again recompute the parity block. This means the updated block is lost and the file reverts to its previous version, which we assume is acceptable.

(e) The parity mechanism can handle only a single inconsistency...

2. (a) One advantage of overcommit is that it allows processes to request more memory than they actually use without preventing other processes from allocating memory for themselves. Consider the case where no pages are in core and the swap space is completely full: it would still be nice to run a single short-lived process without worrying about whether it could be swapped out.

(b) When the kernel needs to allocate more memory, for example to grow a kernel stack or to get a new buffer for network I/O, its requests must always succeed, which may require evicting pages from physical memory. If there is no room left in the swap partition, the system has to kill a process.

(c) Bad processes to kill include those the kernel depends on to run properly, e.g. the swap daemon or the module-loading daemon. Good processes to kill might be those started recently by an interactive user, who might be able to handle the out-of-memory situation by adding swap space or rebooting the machine, for example.

3. Problems:
- orthodoxies: every new OS has to run Unix programs, so it has to support traditional Unix API semantics, which severely constrains the design space
- qualitative results: improvements in HCI, programmer productivity, and security are qualitative; one can't say "approach X gives an N% improvement over approach Y"

Fads:
- peer-to-peer everything
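
Sketch for 1(a): a minimal C illustration of the XOR parity arithmetic. The function names update_parity and reconstruct_block and the 4 KiB block size are assumptions for illustration, not any particular file system's interface.

    #include <stddef.h>

    #define BLOCK_SIZE 4096  /* assumed block size for illustration */

    /* Incremental parity update: XOR the old contents of the modified
     * block back out of the parity and XOR the new contents in, so
     * new_parity = old_parity ^ old_data ^ new_data. */
    void update_parity(unsigned char *parity,
                       const unsigned char *old_data,
                       const unsigned char *new_data)
    {
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

    /* Reconstruct a bad block: XOR the parity block with every
     * surviving data block. */
    void reconstruct_block(unsigned char *out,
                           const unsigned char *parity,
                           const unsigned char *const survivors[],
                           size_t nsurvivors)
    {
        for (size_t i = 0; i < BLOCK_SIZE; i++) {
            unsigned char b = parity[i];
            for (size_t j = 0; j < nsurvivors; j++)
                b ^= survivors[j][i];
            out[i] = b;
        }
    }

Because XOR is its own inverse, the incremental update yields the same parity as recomputing it from scratch over all data blocks, without touching the unmodified blocks.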
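
Sketch for 1(c): a recovery pass assuming the data-then-parity write order described above. recover_parity is a hypothetical name and BLOCK_SIZE is the same assumed 4 KiB; the rule is simply that the data blocks win and the parity is rewritten to match.

    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE 4096  /* assumed block size for illustration */

    /* Recompute what the parity should be from the data blocks; on a
     * mismatch, assume the data block reached disk but the parity did
     * not, and rewrite the parity. Returns 1 if a repair was made,
     * 0 if the file was already consistent. */
    int recover_parity(unsigned char *parity,
                       const unsigned char *const blocks[],
                       size_t nblocks)
    {
        unsigned char expected[BLOCK_SIZE] = {0};
        for (size_t j = 0; j < nblocks; j++)
            for (size_t i = 0; i < BLOCK_SIZE; i++)
                expected[i] ^= blocks[j][i];

        if (memcmp(parity, expected, BLOCK_SIZE) != 0) {
            memcpy(parity, expected, BLOCK_SIZE);  /* data wins; fix parity */
            return 1;
        }
        return 0;
    }

The same routine serves the parity-then-data ordering of 1(d); the difference is only in what the repair means (there, the file reverts to its previous version).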