As a generalization, though, layer 2 and layer 3 scaling concerns and their resulting
control plane designs eventually merge or hybridize because layer 2 networks ultimately
do not scale well due to the large numbers of end hosts. At the heart of these issues is
dealing with end hosts moving between networks, which results in massive churn of
forwarding tables that must then be updated quickly enough not to disrupt traffic flow.
In a layer 2 network, forwarding focuses on the reachability of MAC addresses. Thus,
layer 2 networks primarily deal with the storage of MAC addresses for forwarding purposes.
Since a large enterprise network can contain an enormous number of host MAC addresses,
managing these addresses is difficult. Worse, imagine managing all of the MAC
addresses across multiple enterprises or the Internet!
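To make the scale problem concrete, the source-MAC learning that a layer 2 bridge performs can be sketched as follows. This is a minimal illustration, not a real switch implementation; the class and field names are invented for this sketch:

```python
# Sketch of layer 2 source-MAC learning (illustrative only).

class LearningBridge:
    def __init__(self):
        self.fdb = {}  # forwarding database: MAC address -> output port

    def learn(self, src_mac, in_port):
        # Every distinct host MAC seen becomes a table entry, so the
        # table grows with the number of end hosts, not networks.
        self.fdb[src_mac] = in_port

    def forward(self, dst_mac):
        # Frames to unknown destinations must be flooded out every port.
        return self.fdb.get(dst_mac, "FLOOD")

bridge = LearningBridge()
bridge.learn("00:11:22:33:44:55", 1)
print(bridge.forward("00:11:22:33:44:55"))  # -> 1
print(bridge.forward("aa:bb:cc:dd:ee:ff"))  # -> FLOOD
```

Because the table is keyed by individual host addresses, its size tracks the host population directly, which is exactly the scaling pressure described above.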
In a layer 3 network, forwarding focuses on the reachability of network addresses. Layer
3 reachability information primarily concerns destination IP prefixes, including
prefixes across a number of address families
for both unicast and multicast. In all modern cases, layer 3 networking is used to segment
or stitch together layer 2 domains in order to overcome layer 2 scale problems. Specifically,
layer 2 bridges that represent some sets of IP subnetworks are typically connected
together with a layer 3 router. Layer 3 routers are connected together to form larger
networks—or really different subnetwork address ranges. Larger networks connect to
other networks via gateway routers that often specialize in simply interconnecting large
networks. However, in all of these cases, the router routes traffic between networks at
layer 3 and will only forward packets at layer 2 when it knows the packet has arrived at
the final destination layer 3 network that must then be delivered to a specific host.
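The prefix-based forwarding just described can be sketched with the standard library's `ipaddress` module. The routes and next-hop names below are invented for illustration; real routers use far more efficient lookup structures than a linear scan:

```python
# Sketch of layer 3 longest-prefix-match forwarding: routers store
# prefixes (subnets), not individual hosts. Next-hop names are made up.
import ipaddress

rib = {
    ipaddress.ip_network("10.0.0.0/8"): "core-router",
    ipaddress.ip_network("10.1.0.0/16"): "edge-router",
    ipaddress.ip_network("0.0.0.0/0"): "gateway",  # default route
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    # Choose the most specific (longest) matching prefix.
    matches = [n for n in rib if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return rib[best]

print(lookup("10.1.2.3"))   # -> edge-router (the /16 beats the /8)
print(lookup("192.0.2.1"))  # -> gateway (only the default route matches)
```

Note that three prefixes here cover arbitrarily many hosts, which is why layer 3 aggregation scales where per-host layer 2 tables do not.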
Some notable blurring of these lines occurs with the Multiprotocol Label Switching
(MPLS) protocol, the Ethernet Virtual Private Network (EVPN) protocol, and the Locator/
ID Separation Protocol (LISP). MPLS, really a suite of protocols, combines the best
parts of layer 2 forwarding (or switching) with the best parts of layer 3 IP routing,
pairing the extremely fast packet forwarding that ATM pioneered with the flexible
(and complex) path signaling techniques adopted from the IP world. The EVPN protocol is an attempt to
solve the layer 2 networking scale problems just described by effectively tunneling
distant layer 2 bridges together over an MPLS (or GRE) infrastructure. Layer 2
addressing and reachability information is exchanged only over these tunnels,
so it does not contaminate (or affect) the scale of the underlying layer 3 networks.
Reachability information between distant bridges is exchanged as data inside a new BGP
address family, again without contaminating the underlying network. There are also other
optimizations that limit the number of layer 2 addresses exchanged over the
tunnels, further reducing the level of interaction between bridges. The overall design
minimizes the need for broadcast and multicast. The other hybrid worth mentioning is
LISP (see RFC 4984). At its heart, LISP attempts to solve some of the shortcomings of
the general distributed control plane model as applied to multihoming. It adds new
addressing domains and separates the site address (the endpoint identifier) from the
provider address (the routing locator) in a new map-and-encapsulate control and
forwarding protocol.
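The map-and-encapsulate idea can be sketched as a two-step lookup and wrap. This is a hedged illustration only; the EID and RLOC values, the mapping dictionary, and the packet representation are all invented for the sketch:

```python
# Sketch of LISP-style map-and-encapsulate (illustrative values only).
# The endpoint identifier (EID) is looked up in a mapping system to find
# the routing locator (RLOC), then the packet is tunneled to that locator.

mapping_system = {
    "198.51.100.7": "203.0.113.1",  # EID -> RLOC of the site's border router
}

def encapsulate(eid_dst, payload):
    rloc = mapping_system[eid_dst]  # the "map" step
    # The "encap" step: wrap the original packet in an outer header
    # addressed to the locator.
    return {"outer_dst": rloc, "inner_dst": eid_dst, "payload": payload}

pkt = encapsulate("198.51.100.7", b"hello")
# The provider core routes only on the outer (RLOC) header, so site
# addressing never enters the provider's forwarding tables.
print(pkt["outer_dst"])  # -> 203.0.113.1
```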
At a slightly lower level, there are adjunct control processes, particular to certain network
types, that augment the knowledge of the greater control plane. The services
provided by these processes include verification or notification of link availability or
quality, neighbor discovery, and address resolution.
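As an illustration of one such adjunct service, a neighbor-liveness check of the kind a hello or OAM protocol runs might be sketched as follows. The hold time here is an arbitrary example value; real protocols negotiate their timers:

```python
# Sketch of a local liveness check: a neighbor is declared down if no
# hello arrives within the hold time. Values are illustrative only.
import time

HOLD_TIME = 3.0  # seconds; real protocols negotiate this

class Neighbor:
    def __init__(self):
        self.last_hello = time.monotonic()

    def hello_received(self):
        # Each hello refreshes the liveness timer.
        self.last_hello = time.monotonic()

    def is_alive(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_hello) < HOLD_TIME

n = Neighbor()
print(n.is_alive())                          # -> True
print(n.is_alive(now=n.last_hello + 10.0))   # -> False (hold time expired)
```

The tight loop here, checking elapsed time against a small hold interval, is why such services sit next to the data plane rather than in a remote control plane.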
Because some of these services have very tight performance loops (for short event detection
times), they are almost invariably local to the data plane (e.g., OAM)—regardless
of the strategy chosen for the control plane. This is depicted in Figure 2-3, which shows
the various routing protocols as well as the RIB-to-FIB control that comprises the heart of
the control plane. Note that we do not stipulate where the control and data planes reside,
only that the data plane resides on the line card (shown in Figure 2-3 in the LC box),
and the control plane is situated on the route processor (denoted by the RP box).
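The RIB-to-FIB control mentioned above can be sketched as a best-route selection step: the RIB may hold the same prefix from several protocols, and the control plane installs only the winning route into the data plane's FIB. The administrative distances below are commonly used defaults, but selection details vary by vendor and protocol:

```python
# Sketch of RIB-to-FIB compilation (illustrative routes and interfaces).
# Each RIB entry: (protocol, administrative distance, next-hop interface).

rib = {
    "10.1.0.0/16": [("ospf", 110, "eth1"), ("bgp", 20, "eth2")],
    "10.2.0.0/16": [("static", 1, "eth0")],
}

def build_fib(rib):
    fib = {}
    for prefix, routes in rib.items():
        # Install only the best route (lowest administrative distance).
        proto, dist, next_hop = min(routes, key=lambda r: r[1])
        fib[prefix] = next_hop
    return fib

print(build_fib(rib))  # -> {'10.1.0.0/16': 'eth2', '10.2.0.0/16': 'eth0'}
```

In a physical router this selection runs on the route processor, and the resulting FIB is what gets pushed down to the line cards for forwarding.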
Figure 2-3. Control and data planes of a typical network device
3. Some implementations do additional sanity checks beyond proper sizing, alignment, encapsulation rule adherence,
and checksum verification. In particular, once a datagram “type” has been identified, additional
“bogon” rules may be applied to check for violations specific to that type.
4. It is not uncommon for hardware platforms to have an “overflow