NDS qual 1998

1. Each workstation has one speaker, plays one note at a time, four-octave range (major issue).

timing: Assuming the music file contains some absolute start time (say, start playing at 11:30 PM) and the notes are relative to that start time, there needs to be some way to synchronize the clocks at each workstation. A protocol like NTP could be used for this (see sketch A at the end of these notes).

fault tolerance: If the workstations are flaky and crash often, it might be necessary to have them reorganize which parts they play when several machines crash, leaving too few for a particular part. This requires a way to detect a crashed machine (say, a keepalive from the coordinator to each workstation) and a coordination protocol to reorganize after a crash (see sketch B below).

scalability: use multicast to distribute the music and control messages rather than contacting each workstation individually.

2. (a) The main issue to consider is, for each page, the ratio of update frequency to request frequency. A high ratio of updates to requests favors approach (i), since there is no point pushing updates to clients that are going to ignore them (because of both the network overhead and the space overhead of keeping track of who each client is). A low ratio of updates to requests favors approach (ii). Sketch C below shows this decision as a per-page policy.

Other considerations:
- file size (for tiny files, checking for cache freshness may be just as costly as fetching the whole file)
- consistency requirements (if strict consistency is not required, then a client need not check for cache freshness on every reference, making (i) a more appealing option)
- overlap of file accesses (if lots of clients access the same file, multicast is worthwhile)

(b) Other issues need to be considered if clients can also change pages:
- consistency requirements could be tighter if clients are communicating with each other via updates to web pages
- optimistic vs. pessimistic locking while a page is being modified
- what to do if a client machine crashes while holding a lock
- if the number of writers reaches a certain level, it pays to send all modifications to the central server

3. Secure storage has different requirements from secure communication.
- susceptibility to offline attack: if a bad guy gets hold of a disk, he can hack on it for months, determine a key, and extract a potentially large amount of secret data; in the communication case, keys can be changed frequently, decreasing both the risk of key recovery and the reward of decrypting huge amounts of data with any one key
- aging of security protocols: encrypted stored data may still be around years after the algorithm used to encrypt it has been broken (à la DeCSS); communication is by nature transient
- key management is harder when many people can access the stored data
- authorization and access control are also needed
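
Sketch A (question 1, timing). A minimal sketch of how a workstation might estimate its clock offset from a time server before the piece starts, in the spirit of Cristian's algorithm (NTP refines the same idea with multiple samples and servers). The server address, the UDP "send a request, get back a packed timestamp" message format, and the function names are assumptions made up for illustration, not part of the question.

import socket
import struct
import time

TIME_SERVER = ("timehost.example.com", 12345)  # hypothetical time service

def estimate_offset(sock):
    """Estimate (server clock - local clock) in seconds."""
    t0 = time.time()                       # local time when request is sent
    sock.sendto(b"time?", TIME_SERVER)
    data, _ = sock.recvfrom(64)
    t1 = time.time()                       # local time when reply arrives
    server_time = struct.unpack("!d", data[:8])[0]
    rtt = t1 - t0
    # Assume the server stamped its clock roughly halfway through the round trip.
    return (server_time + rtt / 2) - t1

def sleep_until(start_time, offset):
    """Sleep until start_time, where start_time is in server-clock seconds."""
    while True:
        remaining = start_time - (time.time() + offset)
        if remaining <= 0:
            return
        time.sleep(min(remaining, 0.01))

# Usage: each workstation estimates its offset once (or periodically), then
# waits for the absolute start time before playing its first note.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# offset = estimate_offset(sock)
# sleep_until(absolute_start_time, offset)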
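
Sketch B (question 1, fault tolerance). A rough sketch of coordinator-side crash detection via keepalives, assuming the coordinator knows each workstation's UDP address (as an (ip, port) tuple) and that a live workstation answers "ping" with any datagram. The interval, the timeout, and the reassign_parts callback (standing in for the coordination protocol that redistributes parts) are assumptions; a real design would also need to handle a crashed coordinator.

import socket
import time

KEEPALIVE_INTERVAL = 1.0   # seconds between pings (assumed)
FAILURE_TIMEOUT = 3.0      # declare a workstation crashed after this long (assumed)

def monitor(workstations, reassign_parts):
    """Ping workstations; call reassign_parts(dead) when some stop replying."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.2)
    last_seen = {w: time.time() for w in workstations}
    while workstations:
        for w in workstations:
            sock.sendto(b"ping", w)
        deadline = time.time() + KEEPALIVE_INTERVAL
        while time.time() < deadline:
            try:
                _, addr = sock.recvfrom(16)      # any reply counts as "alive"
                if addr in last_seen:
                    last_seen[addr] = time.time()
            except socket.timeout:
                pass
        dead = [w for w, t in last_seen.items() if time.time() - t > FAILURE_TIMEOUT]
        if dead:
            reassign_parts(dead)                 # redistribute the dead machines' parts
            for w in dead:
                del last_seen[w]
                workstations.remove(w)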
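
Sketch C (question 2a). A small sketch of the per-page decision described above: compare the observed update rate to the request rate and choose whether clients should poll for freshness (approach (i), as implied by the notes) or the server should push updates (approach (ii)). The counter names, the threshold, and the policy labels are assumptions for illustration; in practice the counts would come from server logs and the threshold would be tuned.

def choose_policy(updates_per_day, requests_per_day, threshold=1.0):
    """Return 'poll' (clients check freshness) or 'push' (server sends updates)."""
    if requests_per_day == 0:
        return "poll"                  # nobody is reading the page; never push
    ratio = updates_per_day / requests_per_day
    # Many updates per request means pushing is wasted work; let clients poll.
    return "poll" if ratio > threshold else "push"

# Example: a page updated 50 times/day but requested 5 times/day -> 'poll';
# a page updated once/day but requested 10,000 times/day -> 'push'.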