ABSTRACT
While set-associative caches incur fewer misses than direct-mapped
caches, they typically have slower hit times and higher
power consumption because multiple tag and data banks are probed
in parallel.
This paper presents the location cache structure,
which significantly reduces the power consumption of large set-associative
caches.
We propose a small cache, called the location
cache, that stores the way location of future cache references. If there
is a hit in the location cache, the supported cache is accessed as a
direct-mapped cache; otherwise, the supported cache is referenced
as a conventional set-associative cache.
The worst-case access latency of the location cache system is the
same as that of a conventional cache. The location cache is virtually
indexed, so operations on it can be performed in parallel with
the TLB address translation. These advantages make it ideal for L2
cache systems, where traditional way-prediction strategies perform
poorly.
We used the CACTI cache model to evaluate the power consumption
and access latency of the proposed cache architecture, and the
SimpleScalar CPU simulator to produce the final results. The results
show that the proposed location cache architecture is power efficient:
in the simulated cache configurations, up to 47% of cache
access energy and 25% of average cache access latency can be
saved.