The design of our NoCMsg layer avoids deadlocks in a generalized fashion. We utilize
a polling work loop that cycles between computation, sending, and receiving of data.
In this work loop, a buffered message is sent only if sufficient credits are available
in the output queue; otherwise, the credit check is repeated in the next iteration of
the work loop. When the head of a message opens a wormhole path, NoC links on the
path remain reserved until the full payload (encoded in the head) has been received,
which severely limits the number of concurrent paths. If two (or more) paths share
links or endpoints, then the later one will block at an input queue of a switch. As this
later sender submits more flits, it eventually experiences head-of-line blocking; the
blocked sender then stalls in the work loop and can no longer receive
flits, which could result in deadlocks through circular send/receive dependencies. To prevent
such blocking, we no longer send flits when the output queue at the sender is full. But
we still receive flits, which ensures that no deadlock can occur since receives reduce
backpressure.
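The send/receive discipline described above can be sketched as follows. This is a minimal Python simulation, not NoCMsg itself; the names `work_loop_step` and `QUEUE_DEPTH`, and the queue-based model of credits and links, are illustrative assumptions. The key invariant it demonstrates is that a full output queue stalls only the send step, never the receive step:

```python
from collections import deque

QUEUE_DEPTH = 4  # hypothetical output-queue depth; free slots model credits


def work_loop_step(pending, out_queue, in_link, received):
    """One iteration of a polling work loop in the style described above:
    send a buffered flit only if a credit (free output-queue slot) is
    available, but always drain the input side."""
    # 1. Send: forward one buffered flit only if the output queue has credit.
    #    Otherwise the credit check is simply retried on the next iteration.
    if pending and len(out_queue) < QUEUE_DEPTH:
        out_queue.append(pending.popleft())

    # 2. Receive: always drain incoming flits, so backpressure on the
    #    sender's path keeps decreasing even while its own sends are blocked.
    while in_link:
        received.append(in_link.popleft())


# Simulate a sender whose output queue fills up while a peer keeps sending.
pending = deque(range(6))   # flits this node wants to send
out_queue = deque()         # output queue toward the NoC (never drained here)
in_link = deque()           # flits arriving from the NoC
received = []

for step in range(6):
    in_link.append(f"peer-{step}")  # the peer injects one flit per iteration
    work_loop_step(pending, out_queue, in_link, received)

# Sends stall once the queue holds QUEUE_DEPTH flits, yet all 6 incoming
# flits were still received -- no send/receive dependency cycle can form.
print(len(out_queue), len(pending), len(received))
```

Because the output queue is deliberately never drained in this toy run, only the first `QUEUE_DEPTH` flits are sent; the remaining two stay buffered while every incoming flit is still consumed, which is exactly the property that breaks the circular wait.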