NDS qual 1996

1. (Despite the point about hosts being able to forward packets to other hosts, it isn't clear whether routing is being done at the link layer or the network layer. I'll assume it is being done at the link layer.)

(a) Simulate broadcast by unicasting an ARP request to each neighbor:

    ARP who-is (destination IP address) tell (source IP address)/(source HW address), ttl (some small hop limit)

When a host receives such a request and knows the answer, it sends a reply to the source HW address:

    ARP reply (destination IP address) is (destination HW address)

If a host does not know the answer, it decrements the TTL and forwards the request to all of its neighbors; if the TTL is already 0, it ignores the request. (A toy simulation of this flooding appears after part (b).)

(b) Use the well-known host as a repository for address mappings. Each host unicasts a message informing the repository of its IP-to-hardware-address mapping. Whenever a host needs to resolve another host's IP address, it queries the repository. An alternative would be to have the repository query each host when it receives an ARP request, which would save individual hosts from having to register even if they don't plan to communicate with anyone. But this seems unnecessary: registration needs to be done only once, and few hosts are likely to participate in a wireless network without communicating. Having the mappings cached at the repository also makes lookups faster.
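Here is a minimal sketch of the TTL-limited flooding from part (a), written as a toy Python simulation. Everything named here (Host, ArpRequest, handle_request, and so on) is invented for illustration, and the duplicate-suppression set goes slightly beyond the scheme above, which bounds the flood only by its TTL:

    # Toy simulation of TTL-limited ARP flooding over unicast links (question 1a).
    # All names here are hypothetical scaffolding, not a real ARP implementation.

    from dataclasses import dataclass

    @dataclass
    class ArpRequest:
        target_ip: str   # IP address being resolved
        source_ip: str   # IP address of the original requester
        source_hw: str   # hardware address of the original requester
        ttl: int         # remaining hops before the request is ignored

    class Host:
        def __init__(self, ip, hw):
            self.ip, self.hw = ip, hw
            self.neighbors = []      # hosts reachable in a single unicast hop
            self.arp_cache = {}      # ip -> hw mappings learned from replies
            self.forwarded = set()   # requests already flooded (loop suppression)

        def resolve(self, target_ip, ttl=4):
            # Originate a request by unicasting it to every neighbor.
            req = ArpRequest(target_ip, self.ip, self.hw, ttl)
            for n in self.neighbors:
                n.handle_request(req, requester=self)

        def handle_request(self, req, requester):
            # Any host that knows the mapping replies to the source HW address
            # (modeled here as a direct call on the original requester).
            known = self.hw if req.target_ip == self.ip else self.arp_cache.get(req.target_ip)
            if known is not None:
                requester.handle_reply(req.target_ip, known)
                return
            # Otherwise decrement the TTL and re-flood, unless the TTL is spent
            # or we have already forwarded this request.
            key = (req.source_ip, req.target_ip)
            if req.ttl == 0 or key in self.forwarded:
                return
            self.forwarded.add(key)
            fwd = ArpRequest(req.target_ip, req.source_ip, req.source_hw, req.ttl - 1)
            for n in self.neighbors:
                n.handle_request(fwd, requester)

        def handle_reply(self, ip, hw):
            self.arp_cache[ip] = hw

    # Example: three hosts in a line, A -- B -- C; A resolves C's address in two hops.
    a, b, c = Host("10.0.0.1", "hw-a"), Host("10.0.0.2", "hw-b"), Host("10.0.0.3", "hw-c")
    a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
    a.resolve("10.0.0.3", ttl=2)
    assert a.arp_cache["10.0.0.3"] == "hw-c"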
2. (a) freshness

System B guarantees coherency, unless a client holding a write token modifies a file and then cannot be reached by the server to flush back its modified data. System A provides coherency only beyond a 30-second window: if client 1 has recently cached a copy of a block and client 2 then modifies it, further reads by client 1 within the next 30 seconds will return the old data. Thus system A is not appropriate for applications expecting consistent concurrent access, like a mail spool with a server appending messages while a client reorders and deletes them.

(b) performance

Since system A requires all writes to be sent to the server synchronously (and presumably one block at a time), writes are likely to perform poorly. In addition, a client must retrieve the last-modified time of a file every 30 seconds, incurring overhead even if no one else is accessing the file. System B does not suffer these problems. However, since only one client can possess a write token at any particular time, a file that is write-shared among many clients will cause more overhead due to token passing and cache flushing than in system A, where all writes are sent directly to the server.

(c) server crash recovery overhead

In the event of a crash, system A does not need to perform any cleanup upon recovery, beyond what its local file system requires, since it stores no state about which clients are caching which files. In system B, the server needs to determine which clients held tokens for each file. It can do this either by writing this state to a log in stable storage, which is then read upon recovery, or by querying each client for its token status. (The latter option permits clients to cheat by claiming a token that they did not actually possess.)

3. (a) Encryption at this level is not appropriate for truly "top-secret" calls, because they could still be intercepted anywhere between the base station and the other party. True protection against eavesdroppers must be implemented end-to-end.

(b) Link-level acknowledgements and retransmission may be appropriate as an optimization, especially in the presence of transport protocols like TCP that make assumptions about the meaning of dropped packets. (However, trying too hard at the link layer may be just as bad as dropping packets, since long delays are likely to trigger timeouts in the transport layer; a sketch of a bounded-retry link sender follows part (c).)

(c) Transport-layer handling of dropped, corrupted, and reordered packets is appropriate as an optimization for applications that transfer large files. Although the application-layer checksum is still necessary, corruption is much more likely to occur in the network than elsewhere, and retransmitting the entire file because of one corrupted bit is extremely expensive.
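To make the caveat in 3(b) concrete, here is a minimal sketch of a link-layer sender that retransmits a frame only a bounded number of times before giving up and leaving recovery to the transport layer. The link object with its send_frame/wait_for_ack primitives, the timeout, and the retry count are all assumptions made for illustration:

    # Hypothetical link-layer sender with a bounded retransmission budget (question 3b).
    # The link.send_frame / link.wait_for_ack primitives and the constants are assumed.

    ACK_TIMEOUT = 0.05    # seconds to wait for a link-level ACK on each attempt
    MAX_ATTEMPTS = 3      # small budget, so a bad link stalls the sender only briefly

    def send_with_bounded_retries(link, frame):
        """Retry a few times at the link layer; on failure, report a loss and let the
        transport protocol (e.g. TCP) recover, rather than delaying the frame so long
        that the transport layer's own retransmission timer fires anyway."""
        for _ in range(MAX_ATTEMPTS):
            link.send_frame(frame)
            if link.wait_for_ack(timeout=ACK_TIMEOUT):
                return True    # delivered and acknowledged at the link layer
        return False           # give up; higher layers see this as a dropped packet

The small retry budget is the design choice: a few quick local retransmissions hide transient radio losses, while a persistent failure is surfaced to the transport layer quickly instead of masquerading as a long delay.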