Power management for large last-level caches (LLCs) is important in chip multiprocessors (CMPs), as the
leakage power of LLCs accounts for a significant fraction of the limited on-chip power budget. Since not
all workloads running on CMPs need the entire cache, portions of a large, shared LLC can be disabled
to save energy. In this article, we explore different design choices, from circuit-level cache organization to
microarchitectural management policies, to propose a low-overhead runtime mechanism for energy reduction
in the large, shared LLC. We first introduce a slice-based cache organization that can shut down parts of
the shared LLC with minimal circuit overhead. Based on this slice-based organization, part of the shared
LLC can be turned off according to the spatial and temporal cache access behavior captured by low-overhead
sampling-based hardware. In order to eliminate the performance penalties caused by flushing data before
powering off a cache slice, we propose data migration policies to prevent the loss of useful data in the LLC.
Results show that our energy-efficient cache design (EECache) provides 14.1% energy savings with only 1.2%
performance degradation and incurs negligible hardware overhead compared to prior work.
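To make the runtime mechanism described above concrete, the following is a minimal, hypothetical sketch (in C) of a slice power-gating policy: sampled per-slice access counters decide which slices to turn off at the end of each interval, and live lines are migrated to the remaining powered-on slices before shutdown rather than being flushed. All structures, names, and thresholds (e.g., llc_slice_t, ACCESS_THRESHOLD) are illustrative assumptions, not the hardware proposed in the article.

/* Hypothetical sketch of interval-based LLC slice power gating with
 * migration before shutdown. Not the paper's actual hardware design. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SLICES       8
#define LINES_PER_SLICE  1024   /* assumed slice capacity for this sketch */
#define ACCESS_THRESHOLD 64     /* assumed "cold slice" cutoff per interval */

typedef struct {
    bool     powered_on;
    uint32_t sampled_accesses;  /* accesses observed by sampling hardware */
    uint32_t valid_lines;       /* live lines to migrate or write back */
} llc_slice_t;

static llc_slice_t slices[NUM_SLICES];

/* Move live lines from a victim slice into the other powered-on slices so
 * useful LLC data is not lost when the victim is powered off. */
static void migrate_lines(int victim)
{
    uint32_t remaining = slices[victim].valid_lines;
    for (int s = 0; s < NUM_SLICES && remaining > 0; s++) {
        if (s == victim || !slices[s].powered_on)
            continue;
        uint32_t room  = LINES_PER_SLICE - slices[s].valid_lines;
        uint32_t moved = remaining < room ? remaining : room;
        slices[s].valid_lines += moved;
        remaining             -= moved;
    }
    /* Lines that cannot be placed elsewhere would be written back to memory. */
    slices[victim].valid_lines = 0;
}

/* Invoked at the end of each sampling interval. */
static void power_manage_llc(void)
{
    int on = 0;
    for (int s = 0; s < NUM_SLICES; s++)
        on += slices[s].powered_on;

    for (int s = 0; s < NUM_SLICES; s++) {
        if (!slices[s].powered_on)
            continue;
        if (on > 1 && slices[s].sampled_accesses < ACCESS_THRESHOLD) {
            migrate_lines(s);
            slices[s].powered_on = false;
            on--;
            printf("slice %d powered off after migration\n", s);
        }
        slices[s].sampled_accesses = 0;  /* reset counter for next interval */
    }
}

int main(void)
{
    /* Example: all slices start on; slice 3 sees a cold access pattern. */
    for (int s = 0; s < NUM_SLICES; s++) {
        slices[s].powered_on       = true;
        slices[s].sampled_accesses = (s == 3) ? 10 : 500;
        slices[s].valid_lines      = 200;
    }
    power_manage_llc();
    return 0;
}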
