How can this be? Well, the Border Gateway Protocol (BGP) never
promised that it would find the shortest route between any two sites;
it only tries to find some route. To make matters more complex, BGP’s
routes are heavily influenced by policy issues, such as who is paying
whom to carry their traffic. This often happens, for example, at peering
points between major backbone ISPs. In short, that the triangle inequality
does not hold in the Internet should not come as a surprise.
How do we exploit this observation? The first step is to realize that there
is a fundamental tradeoff between the scalability and optimality of a routing
algorithm. On the one hand, BGP scales to very large networks, but
often does not select the best possible route and is slow to adapt to network
outages. On the other hand, if you were only worried about finding
the best route among a handful of sites, you could do a much better job of
monitoring the quality of every path you might use, thereby allowing you
to select the best possible route at any moment in time.
An experimental overlay, called the Resilient Overlay Network (RON),
does exactly this. RON scales to only a few dozen nodes because it uses
an n×n strategy of closely monitoring (via active probes) three aspects
of path quality—latency, available bandwidth, and loss probability—
between every pair of sites. It is then able both to select the optimal route between any pair of nodes and to rapidly change routes should network
conditions change. Experience shows that RON is able to deliver modest
performance improvements to applications, but more importantly, it
recovers from network failures much more quickly. For example, during
one 64-hour period in 2001, an instance of RON running on 12 nodes
detected 32 outages lasting over 30 minutes, and it was able to recover
from all of them in less than 20 seconds on average. This experiment
also suggested that forwarding data through just one intermediate node
is usually sufficient to recover from Internet failures.

Since RON is not designed to be a scalable approach, it is not possible
to use RON to help random host A communicate with random host
B; A and B have to know ahead of time that they are likely to communicate
and then join the same RON. However, RON seems like a good idea
in certain settings, such as when connecting a few dozen corporate sites
spread across the Internet or allowing you and 50 of your friends to establish
your own private overlay for the sake of running some application.
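To make the n×n strategy concrete, here is a minimal sketch, in Python, of the kind of route selection such an overlay performs. It assumes each node maintains a matrix of probed latencies (bandwidth and loss, which RON also tracks, are omitted for brevity) and exploits the observation above that a single intermediate node is usually sufficient. The names and numbers are illustrative, not RON's actual code.

    def best_route(latency, src, dst):
        """Return (cost, path): the direct path or the cheapest path
        through a single intermediate node, whichever is better.

        latency[a][b] is the most recently probed latency from a to b,
        or float("inf") if probes from a to b are currently failing.
        """
        best_cost, best_path = latency[src][dst], [src, dst]  # direct Internet path
        for mid in latency:
            if mid in (src, dst):
                continue
            cost = latency[src][mid] + latency[mid][dst]      # one-hop overlay detour
            if cost < best_cost:
                best_cost, best_path = cost, [src, mid, dst]
        return best_cost, best_path

    # A toy latency matrix that violates the triangle inequality:
    # the direct A-to-B path is slower than relaying through C.
    latency = {
        "A": {"A": 0, "B": 100, "C": 20},
        "B": {"A": 100, "B": 0, "C": 30},
        "C": {"A": 20, "B": 30, "C": 0},
    }
    print(best_route(latency, "A", "B"))  # (50, ['A', 'C', 'B'])

Note how the toy matrix captures the situation described above: the direct A-to-B path costs 100, while relaying through C costs only 50, which is precisely the kind of shortcut RON is built to find.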
The real question, though, is what happens when everyone starts to run
their own RON. Does the overhead of millions of RONs aggressively probing
paths swamp the network, and does anyone see improved behavior
when many RONs compete for the same paths? These questions are still
unanswered.
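Some back-of-envelope arithmetic shows why the concern is plausible. Since every node probes every other node, one overlay's probe traffic grows as n(n-1); the sketch below assumes 64-byte probes sent every 10 seconds, which are illustrative guesses rather than RON's actual parameters.

    def probe_overhead_bps(n, probe_bytes=64, interval_s=10.0):
        """Aggregate probe traffic of one n-node overlay: each node
        probes every other node, giving n*(n-1) directed probe streams."""
        return n * (n - 1) * probe_bytes * 8 / interval_s

    one_ron = probe_overhead_bps(50)   # a 50-node RON
    many_rons = one_ron * 1_000_000    # a million such RONs
    print(f"{one_ron / 1e3:.0f} kbps per RON, {many_rons / 1e9:.0f} Gbps in aggregate")

Under these assumptions, a single 50-node overlay generates roughly 125 kbps of probe traffic in total, which is negligible, but a million such overlays would generate on the order of 125 Gbps; it is that aggregate, concentrated on popular paths, that makes the question nontrivial.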