Data Plane
The data plane handles incoming datagrams (on the wire, on fiber, or in wireless media)
through a series of link-level operations that collect the datagram and perform basic
sanity checks. A well-formed (i.e., correct) datagram[3] is processed in the data plane by
performing lookups in the FIB table (or tables, in some implementations) that are programmed
earlier by the control plane. This is sometimes referred to as the fast path for
packet processing because it needs no further interrogation other than identifying the
packet’s destination using the preprogrammed FIB. The one exception to this processing
is a packet that cannot be matched to those rules (for example, one with an unknown
destination); such packets are sent to the route processor, where the control plane
can process them further using the RIB. It is important to understand that FIB tables
could reside in a number of forwarding targets—software, hardware-accelerated
software (GPU/CPU, as exemplified by Intel or ARM), commodity silicon (NPU, as
exemplified by Broadcom, Intel, or Marvell, in the Ethernet switch market), FPGA and
specialized silicon (ASICs like the Juniper Trio), or any combination[4]—depending on
the network element design.
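The fast-path behavior described above can be sketched in a few lines of Python. This is a hedged illustration, not any vendor's implementation: the FIB contents and port names are invented, and a real FIB lives in hardware or in highly optimized software structures rather than a dictionary.

```python
import ipaddress

# Hypothetical FIB, programmed earlier by the control plane:
# prefix -> egress port (all entries invented for illustration).
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
}

def fast_path_lookup(dst):
    """Longest-prefix match against the FIB; None means punt to the RIB."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in FIB if addr in p]
    if not matches:
        return None  # no rule matched: send to the route processor
    # The most specific (longest) matching prefix wins.
    return FIB[max(matches, key=lambda p: p.prefixlen)]
```

Here `fast_path_lookup("10.1.2.3")` resolves to `"eth1"` via the more specific /16, while an address covered by neither prefix returns `None` and would take the slower control-plane path.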
The software path in this exposition is exemplified by CPU-driven forwarding of the
modern dedicated network element (e.g., router or switch), which trades off a processor-intensive
lookup (whether this runs in the kernel or in user space is a vendor-specific design
decision bound by the characteristics and infrastructure of the host operating system)
for the seemingly limitless table storage of processor memory. Its counterpart in the
modern compute environment, the hypervisor-based switch or bridge, has many of the
optimizations (and some of the limitations) of hardware forwarding models.
Historically, lookups in hardware tables have proven to result in much higher packet
forwarding performance and therefore have dominated network element designs, particularly
for higher bandwidth network elements. However, recent advances in the I/O
processing of generic processors, spurred on by the growth and innovation in cloud
computing, are giving purpose-built designs, particularly in the mid-to-low performance
ranges, quite a run for the money.
16 | Chapter 2: Centralized and Distributed Control and Data Planes
5. There are many (cascading) factors in ASIC design in particular that ultimately tie into yield/cost from the
process and die size, flowing down into logic placement/routing, timing and clock frequency (which may
have bearing on the eventual wear of parts), and table sharing—in addition to power, thermal, and size
considerations.
6. There are many examples here, including the aforementioned OAM, BFD, RSTP, and LACP.
The differences in hardware forwarding designs are spread across a variety of factors,
including (board and rack) space, budget, power utilization, and throughput[5] target
requirements. These can lead to differences in the type (speed, width, size, and location)
of memory, as well as in the operations budget (number, sequence, or type of operations
performed on the packet) to maintain forwarding at line rate (i.e., close to the maximum
signaled or theoretical throughput for an interface) for a specific target packet size (or
blend). Ultimately, this leads to differences in forwarding feature support and forwarding
scale (e.g., number of forwarding entries, number of tables) among the designs.
The typical actions resulting from the data plane forwarding lookup are forward (and
in special cases such as multicast, replicate), drop, re-mark, count, and queue. Some of
these actions may be combined or chained together. In some cases, the forward decision
returns a local port, indicating the traffic is destined for a locally running process such
as OSPF or BGP[6]. These datagrams take what is referred to as the punt path, whereby
they leave the hardware-forwarding path and are forwarded to the route processor using
an internal communications channel. This path generally offers relatively low throughput,
as it is not designed for high-throughput packet forwarding of normal traffic;
however, some designs simply add an additional path to the internal switching fabric
for this purpose, which can result in near-line rate forwarding within the box.
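The action set above (forward, replicate, drop, re-mark, count, queue) can be sketched as a chained dispatch. This is a hedged Python illustration with invented action names, counter labels, and the port name `"local"` standing in for delivery to a control-plane process; real pipelines encode actions far more compactly in hardware.

```python
from collections import Counter

counters = Counter()  # per-rule and drop counters

def apply_actions(packet, actions):
    """Apply a chain of (action, argument) pairs to one packet.

    Returns (egress, packet), ("punt", packet) for locally destined
    traffic, or None if the packet is dropped.
    """
    for action, arg in actions:
        if action == "drop":
            counters["drops"] += 1
            return None
        elif action == "remark":
            packet["dscp"] = arg         # rewrite the QoS marking
        elif action == "count":
            counters[arg] += 1           # e.g., a named rule counter
        elif action == "queue":
            packet["queue"] = arg        # select an egress queue
        elif action == "forward":
            if arg == "local":           # traffic for OSPF, BGP, etc.
                return ("punt", packet)  # takes the punt path
            return (arg, packet)
    return None
```

Chaining `remark` and `count` ahead of `forward` mirrors the combined actions mentioned above, while a `forward` whose result is `"local"` models leaving the hardware path toward the route processor.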
In addition to the forwarding decision, the data plane may implement some small services/
features commonly referred to as forwarding features (exemplified by Access Control
Lists and QoS/Policy). In some systems, these features use their own discrete tables,
while in others they are implemented as extensions to the forwarding tables (increasing entry width).
Additionally, different designs can implement different features and different forwarding
operation orders (Figure 2-4). Some orderings may make certain feature operations exclusive
of others.
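As a sketch of a discrete feature table, the following hypothetical ingress ACL is consulted before the forwarding lookup, so a deny preempts the FIB result entirely. The rule contents and field names are invented for illustration.

```python
# Hypothetical ingress ACL held in its own discrete table.
ACL = [
    ("src", "192.0.2.66", "deny"),    # drop this source outright
    ("dst", "10.1.2.3",   "permit"),  # explicit permit
]

def ingress(packet, fib_lookup):
    """Run the ACL feature first; a deny preempts the FIB lookup."""
    for field, value, action in ACL:
        if packet.get(field) == value:
            if action == "deny":
                return None           # dropped before any forwarding work
            break                     # permitted: proceed to the lookup
    return fib_lookup(packet["dst"])
```

Reordering the pipeline (lookup before ACL, or ACL split across ingress and egress) changes which outcomes are even reachable, which is one way feature ordering can make operations exclusive of one another.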
With these features, you can (to a small degree) locally alter or preempt the outcome of
the forwarding lookup. For example:
• An access control li