Fragmentation Considered Harmful
Christopher A. Kent, Jeffrey C. Mogul, DEC WRL
One-line summary:
Although designers of IP intended for fragmentation to be a
"transparent" lower-layer mechanism to facilitate interoperation of
different networks, in practice its effects are visible to
upper layers in various evil ways.
Overview/Main Points
- Poor performance: deterministic fragment loss can occur when some
  specific fragment is always dropped, e.g. because a router can't
  handle back-to-back packets. The sender retransmits the same datagram,
  it fragments the same way, and no progress is ever made: livelock.
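  A minimal sketch of the mechanics, in Python (all sizes illustrative):
  fragment offsets are carried in 8-byte units, so every fragment except
  the last must hold a multiple of 8 payload bytes.

      def fragment(payload_len, mtu, header_len=20):
          """Split an IP payload into fragments fitting the next-hop MTU."""
          max_data = (mtu - header_len) // 8 * 8  # largest legal fragment payload
          frags, offset = [], 0
          while offset < payload_len:
              size = min(max_data, payload_len - offset)
              more = offset + size < payload_len  # MF bit: more fragments follow
              frags.append((offset // 8, size, more))
              offset += size
          return frags

      # A 1400-byte payload crossing a 576-byte-MTU link becomes three
      # back-to-back fragments; if some router reliably drops one of any
      # back-to-back pair, reassembly never completes and the retransmitted
      # datagram fragments the same way: livelock.
      print(fragment(1400, 576))  # [(0, 552, True), (69, 552, True), (138, 296, False)]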
- Efficient reassembly is difficult: at the IP layer there is no way to
  tell how many fragments follow, or the length of the entire datagram,
  until the final fragment arrives. If the total size is too large to
  fit in the receiver's buffers, you lose.
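  To see why: only the fragment with the more-fragments bit clear
  reveals the total size. A sketch (same fragment tuples as above; an
  illustration, not real stack code):

      def datagram_length(fragments):
          """Total datagram length, or None if still unknown.
          fragments: iterable of (offset_units, payload_len, more) tuples."""
          for off, length, more in fragments:
              if not more:
                  return off * 8 + length  # last fragment fixes the total
          return None  # last fragment missing: must buffer blindly

      # With only the first two fragments in hand, the receiver cannot
      # tell whether 296 bytes or 296,000 bytes are still to come.
      print(datagram_length([(0, 552, True), (69, 552, True)]))    # None
      print(datagram_length([(0, 552, True), (138, 296, False)]))  # 1400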
- Avoiding fragmentation:
- Always send small datagrams. How small?
- Guess path MTU "using a heuristic" (e.g. always send
minimum-size if going through a gateway; breaks for proxy-ARP)
- Discover path MTU (authors propose a protocol)
- Guess/discover the MTU and backtrack if wrong: many round trips
  may pass before progress is made (see the sketch below).
- What if packet routing is nondeterministic, or multiple
routes exist to destination?
- Path MTUs must be maintained per connection, to allow applications to
  choose among multiple routes (when available) for different traffic
  types, etc. Since many (most?) networks are subnetted, knowing the MTU
  of some host doesn't necessarily tell you the MTU of other hosts on
  the same network.
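  A sketch of the guess-and-backtrack approach: probe_fits is a
  hypothetical stand-in for sending a don't-fragment probe and watching
  for an ICMP error, and the plateau table comes from the later RFC 1191,
  which standardized path MTU discovery.

      # Common MTU plateaus to step down through (table from RFC 1191).
      PLATEAUS = [17914, 8166, 4352, 2002, 1492, 1006, 576, 296, 68]

      def discover_path_mtu(probe_fits, first_hop_mtu):
          """Backtrack from the first-hop MTU until a probe gets through.
          Each wrong guess burns at least one round trip before any real
          data flows, and a route change can silently invalidate the answer."""
          for mtu in [first_hop_mtu] + [p for p in PLATEAUS if p < first_hop_mtu]:
              if probe_fits(mtu):  # e.g. a DF-flagged packet of this size survived
                  return mtu
          return 68  # minimum MTU every IPv4 link must support

      # Example: a path whose true bottleneck MTU is 1006.
      print(discover_path_mtu(lambda m: m <= 1006, first_hop_mtu=4352))  # 1006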
- Authors propose a probe mechanism to discover various quantities
that can be used to modify network-layer behaviors: bottleneck
bandwidth and delay, longest queue length, error rate, hop
count. They believe that future networks should be designed so
that this info can be captured for every packet.
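  A hypothetical record of the quantities the authors want probes to
  gather; none of these fields exist in real IP, so this is purely
  illustrative.

      from dataclasses import dataclass

      @dataclass
      class PathProbe:
          """Per-path measurements of the kind the authors propose (hypothetical)."""
          min_mtu: int          # bottleneck MTU along the path (bytes)
          bottleneck_bw: float  # bottleneck bandwidth (bits/sec)
          base_delay: float     # path delay (seconds)
          max_queue: int        # longest queue seen en route (packets)
          error_rate: float     # observed loss/corruption rate
          hop_count: int        # routers traversed

      # Each router would fold its own numbers into a passing probe, e.g.
      #   probe.min_mtu = min(probe.min_mtu, outgoing_link_mtu)
      #   probe.hop_count += 1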
- IP implementation things that could help:
- Proper use of "time exceeded" ICMP message could be set to
indicate that reassembly timer expired (some fragments
didn't arrive, etc.) Few IP implementations know how to
handle this.
- "Fragmentation warning" ICMP message could be sent to
originating host if router fragments a packet. Danger: unless
source host receives the message and acts on it,
fragmentation will continue and much bandwidth will be
wasted. (In general, this seems like a bad idea for a
best-effort network.)
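  The relevant codes exist in RFC 792; the sender-side reactions below
  are assumptions, not any particular stack's behavior.

      ICMP_TIME_EXCEEDED = 11            # ICMP type (RFC 792)
      TTL_EXPIRED_IN_TRANSIT = 0         # code 0: routing loop, etc.
      FRAG_REASSEMBLY_TIME_EXCEEDED = 1  # code 1: the signal discussed above

      def handle_time_exceeded(code):
          """Sketch of a sender-side handler (the reactions are assumptions)."""
          if code == FRAG_REASSEMBLY_TIME_EXCEEDED:
              # Some fragments never arrived; resending the same large
              # datagram will likely fail the same way, so shrink it first.
              return "reduce datagram size, then retransmit"
          if code == TTL_EXPIRED_IN_TRANSIT:
              return "possible routing loop; datagram size is not the problem"
          return "unknown code; ignore"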
- Use of transparent fragmentation at a lower layer: authors suggest
  this because deterministic fragment loss is then unlikely, and
  fragmented packets will be transparently "reassembled" as they leave
  some router, allowing more efficient use of larger MTUs downstream.
  To me this seems a blatant violation of e2e.
- Some protocols, e.g. NFS, do intentional fragmentation. Authors
  denounce this as a bad idea, and I agree; the arithmetic below shows
  why.
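  Back-of-envelope, assuming the classic 8 KB NFS-over-UDP transfer
  size and an Ethernet MTU of 1500 (RPC header overhead ignored):

      import math

      udp_payload = 8192 + 8               # 8 KB NFS data + UDP header
      per_frag    = (1500 - 20) // 8 * 8   # usable payload bytes per fragment
      n_frags     = math.ceil(udp_payload / per_frag)
      print(n_frags)                       # 6 fragments per 8 KB transfer

      # Losing any one fragment discards the other five and forces the
      # whole RPC to be resent: at per-fragment loss rate p, the transfer
      # survives with probability (1 - p) ** 6, i.e. ~6x the loss exposure.
      print((1 - 0.01) ** n_frags)         # ~0.94 at 1% fragment loss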
- Summary recommendations not requiring protocol changes:
- Restrict datagrams to 576 bytes if going through a gateway.
  (Seems naive to me, given how many packets go through at least
  one gateway even on "local" networks.)
- Transparent fragmentation (I think violating e2e this way
is asking for trouble)
Relevance
Fragmentation is not as "transparent" as the IP layer would have you believe.
Flaws
- How many of the arguments against fragmentation result from a poor
  choice of implementation in IP, as opposed to the concept of
  fragmentation itself? (E.g., what if every fragment carried the
  total datagram length?)
- IMHO, collecting (via probes) the info as authors suggest, and
using it to regulate behavior at the network layer, doesn't seem
compellingly useful and seems to violate the end-to-end argument
(and to collect on every packet, as authors suggest, requires
router cycles and endpoint state that are not obviously worth
allocating).
- In general, the "band-aids" suggested seem fraught with peril,
and authors haven't argued that the "harm" (and frequency!) of
fragmentation is sufficient to merit these risky tactics.