OS qual 2000

1. Fixed allocation advantages:
- preallocation guarantees that when a resource is needed, it will be available (although it does not necessarily improve fault tolerance)
- protects against malicious users who would otherwise hog all of the resources

Dynamic allocation advantages:
- whoever needs a resource can use it if it is available, which can result in better overall utilization
- during periods of heavy use, users can fall back to a lower-quality but less resource-intensive mode so that more users can be served

The plain old telephone system uses fixed allocation by setting up dedicated circuits. This matches the quality of service people have come to expect: either you complete a call at a constant quality level, or you don't get a dial tone in the first place. The Internet mainly uses dynamic allocation via packet switching, which suits a heterogeneous network carrying many kinds of applications with varying bandwidth demands and transient connections.

Other examples: real-time operating systems (fixed allocation); TDMA (fixed); CDMA (dynamic).

2. If the architecture provides segmentation, each thread's stack can be placed in its own segment. Under a typical paging system, a reference past the end of the stack generates a page (mapping) fault, and the OS can respond by allocating a new page to grow the stack. Without segmentation, all the stacks must share one virtual address space, hopefully with enough room between them that they rarely overrun their boundaries (easy to arrange in a 64-bit space). When a stack does overrun its boundary, the OS has to copy the entire stack to another, larger region of memory. This works best on architectures where every stack reference is expressed as a relative offset from a stack pointer, so the copy does not invalidate pointers into the stack; otherwise page protection can be used to trap any access to a stack page other than the current one. Another approach: keep the stack as a linked list of frames, with a call to grow the stack each time a frame is added.

3. (Assuming that both parts refer to a blindingly fast file-copy application.) In both cases the same data ends up occupying two areas of memory, one for the original file and one for the copy. In the read/write case there are two buffers for each block; in the memory-mapped case the cache holds two copies of each page, one under the source file and one under the destination. A similar problem occurs when reading and writing network packets. Possible new system calls: (a) a single call that copies an entire file, or (b) a remap call that remaps a page of a mapped file into a new file. (A sketch of the two copy strategies appears at the end of these notes.)

4. A crash while writing a file to disk could leave any of the following:
(a) incomplete data
(b) new data with an old inode
(c) a new inode with old data

A typical disk does some write buffering, so if the crash happens before any data hits the disk, the file is untouched. Also, as long as writing a single sector is fast enough to be treated as an atomic operation, the only remaining inconsistencies are between the data and the inode. An atomic update can be built from:
(a) copy the file (as a backup)
(b) write a brand-new copy of the file, then sync
(c) delete the old file
(d) rename the new copy to the old file's name

If a crash occurs during (c) or (d), at least the backup copy from (a) still exists.
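
Concretely, steps (b) and (d) map onto POSIX calls roughly as in the sketch below. This is a minimal illustration, not a complete implementation: the backup copy from step (a) is omitted, the temporary name "example.txt.new" is made up, and errors simply abort the program.

/* Sketch of the write-new-then-rename part of the atomic update in
 * question 4, using POSIX open/write/fsync/rename. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void atomic_update(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.new", path);   /* temporary name (made up) */

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(EXIT_FAILURE); }

    /* Step (b): write the brand-new copy and force it to disk. */
    if (write(fd, data, len) != (ssize_t)len) { perror("write"); exit(EXIT_FAILURE); }
    if (fsync(fd) != 0) { perror("fsync"); exit(EXIT_FAILURE); }
    close(fd);

    /* Step (d): on most local filesystems rename() atomically replaces the
     * target, so after a crash the name refers to either the complete old
     * file or the complete new one.  (Syncing the containing directory,
     * needed to make the rename itself durable, is omitted here.) */
    if (rename(tmp, path) != 0) { perror("rename"); exit(EXIT_FAILURE); }
}

int main(void)
{
    const char *msg = "new contents\n";
    atomic_update("example.txt", msg, strlen(msg));
    return 0;
}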
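
Going back to question 3, here is the sketch referenced there: a minimal illustration of the two copy strategies, assuming POSIX read/write and mmap. The file names are made up, error handling and the empty-file case are ignored, and the point is only where the duplicate copies of the data live, not performance.

/* Two ways to copy a file; in both, identical bytes end up cached twice,
 * once for the source file and once for the destination. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void copy_rw(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    char buf[8192];                       /* user-space buffer */
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);               /* each block is buffered for the
                                             source and again for the copy */
    close(in);
    close(out);
}

static void copy_mmap(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_RDWR | O_CREAT | O_TRUNC, 0644);
    struct stat st;
    fstat(in, &st);
    ftruncate(out, st.st_size);           /* size the destination before mapping */

    char *s = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, in, 0);
    char *d = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, out, 0);
    memcpy(d, s, st.st_size);             /* the same pages are now cached under
                                             both the source and the new file */
    munmap(s, st.st_size);
    munmap(d, st.st_size);
    close(in);
    close(out);
}

int main(void)
{
    copy_rw("original.dat", "copy-rw.dat");
    copy_mmap("original.dat", "copy-mmap.dat");
    return 0;
}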