2. OUR SOLUTION
We propose a new cache architecture called the location cache.
Figure 1 illustrates its structure. The location cache is a small,
virtually-indexed, direct-mapped cache that stores location information:
the way number, within a set, to which a memory reference maps. It works
in parallel with the TLB and the L1 cache.
On an L1 cache miss, the physical address translated by the TLB and the
cached way information of the reference are both presented to the L2
cache, which is then accessed as if it were direct-mapped. On a location
cache miss, the L2 cache is instead accessed as a conventional
set-associative cache. As opposed to
way-prediction information, the cached location is exact, not a prediction.
Thus, on a location cache hit, both time and power are saved, and even on
a location cache miss there is no extra delay penalty of the kind incurred
by way-prediction caches.