The second approach to the cache coherence problem uses signals passed among the caches whenever there is a write to a memory location. Consider a simple system with two processors sharing one memory bus. Suppose that P1 writes to memory location M, which was already cached in C1. As a result of the write, a new value for M is cached in C1. Cache C1 sends a "write-notification" signal, along with the memory address (and, possibly, the value written), to all other caches in the system. What does C2 do when it receives the write-notification signal? Certainly no action is required within C2 unless it also holds a copy of M, so C2 must determine whether M is cached within itself. This requires an associative search for address M. If the search succeeds, C2 must either update its copy of M or mark its previous copy invalid. If there are several caches in the system, the need for each one to make an associative search for every write occurring elsewhere in the system can become a huge burden. A designer could alleviate the burden by providing additional copies of the search logic just to handle the search requests generated by writes at other processors.
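The protocol described above can be sketched in software. The following is a minimal, hypothetical model (the class and method names are illustrative, not from the text) of two write-through caches on a shared bus: a write in one cache broadcasts a notification, and each other cache performs the associative search and, here, takes the invalidation option rather than updating its copy.

```python
class Bus:
    """Shared bus that delivers write notifications to all attached caches."""
    def __init__(self):
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def broadcast_write(self, writer, addr):
        # Every cache except the writer must snoop the notification.
        for c in self.caches:
            if c is not writer:
                c.snoop_write(addr)


class Cache:
    """A cache holding address -> (value, valid) lines, snooping on the bus."""
    def __init__(self, name, bus):
        self.name = name
        self.lines = {}            # addr -> (value, valid flag)
        self.bus = bus
        bus.attach(self)

    def read(self, addr, memory):
        line = self.lines.get(addr)
        if line is not None and line[1]:
            return line[0]         # hit on a valid cached copy
        value = memory[addr]       # miss (or invalidated copy): refetch
        self.lines[addr] = (value, True)
        return value

    def write(self, addr, value, memory):
        self.lines[addr] = (value, True)
        memory[addr] = value       # write-through, to keep the sketch simple
        self.bus.broadcast_write(self, addr)

    def snoop_write(self, addr):
        # The associative search: no action unless we hold a copy of addr.
        if addr in self.lines:
            value, _ = self.lines[addr]
            self.lines[addr] = (value, False)   # mark our copy invalid
```

For example, if C2 has cached location `0x10` and C1 then writes to it, C2's copy is marked invalid, and C2's next read refetches the new value from memory. In hardware the costly part is `snoop_write`: every cache must run this search for every write made anywhere else in the system, which is why a designer might duplicate the search logic (e.g., a second copy of the tag store) to service snoop requests.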