Caches are small but fast memories located close to the CPU. Every piece of data to be processed is first fetched from main memory into the cache and then sent to the CPU's registers. But how is data mapped from memory to the cache?
It can be done in the following ways:
- Full associativity: any entry from memory can be stored at any line in the cache. Fully associative caches are difficult to build, because a lookup must compare the requested address against every line in the cache, which makes them expensive and slow. On the other hand, this scheme minimises the cache miss rate.
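A minimal sketch of the idea (class and variable names here are hypothetical, for illustration only): any address can occupy any slot, so a hit requires checking every occupied slot, and eviction can pick any victim (least recently used below).

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache: any address may live in any slot."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.slots = OrderedDict()  # address -> data, ordered by recency

    def access(self, address, memory):
        """Return (data, hit?) for the given address."""
        if address in self.slots:            # hit: every slot was a candidate
            self.slots.move_to_end(address)  # mark as most recently used
            return self.slots[address], True
        data = memory[address]               # miss: fetch from main memory
        if len(self.slots) >= self.num_slots:
            self.slots.popitem(last=False)   # evict the least recently used slot
        self.slots[address] = data
        return data, False

# Toy "main memory" and a 4-slot cache.
memory = {addr: addr * 10 for addr in range(32)}
cache = FullyAssociativeCache(num_slots=4)
_, hit1 = cache.access(5, memory)   # miss: cache starts empty
_, hit2 = cache.access(5, memory)   # hit: address 5 could land in any slot
```

The cost shows up in the lookup: real hardware would need a comparator per line to check all tags in parallel, which is why large fully associative caches are impractical.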
- Direct mapping: data at a given memory address can be mapped to exactly one cache line, chosen by the low digits of the address. For example, with memory addresses 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, …, 1A, … and cache addresses 0, 1, 2, …, F, cache line 0 can only hold data from memory addresses ending in 0.
This method makes the cache cheap, simple and fast, but it increases the cache miss rate.
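The mapping above can be sketched in a few lines (a hypothetical helper, not real hardware): with 16 cache lines, the last hex digit of the address selects the line, so addresses like 0x10 and 0x20 compete for the same line, which is the source of conflict misses.

```python
NUM_LINES = 16  # cache addresses 0x0 .. 0xF, as in the example above

def cache_line(address):
    """Direct mapping: the low bits (last hex digit) pick the cache line."""
    return address % NUM_LINES

print(cache_line(0x10))  # 0  -> line 0, address ends in 0
print(cache_line(0x20))  # 0  -> same line: 0x10 and 0x20 conflict
print(cache_line(0x2A))  # 10 -> line 0xA, address ends in A
```

Because the line is computed directly from the address, a lookup needs only one tag comparison, which is exactly why direct-mapped caches are cheap and fast.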
- K-way set associativity: a hybrid of the two methods above. The cache is divided into sets of k lines each, and the address selects a set by direct mapping as explained above. This method decreases the cache miss rate because, unlike direct mapping where a memory address has only one possible slot in the cache, there are now k slots where the data from a particular address can go.
Typical CPU caches today are 8-way set associative.
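Putting the pieces together, here is a sketch of a k-way set-associative cache (class and names are hypothetical, illustrative only). The index bits pick a set, as in direct mapping, but the set holds k lines, so up to k conflicting addresses can coexist:

```python
class SetAssociativeCache:
    """Toy k-way set-associative cache with LRU eviction within each set."""

    def __init__(self, num_sets, k):
        self.num_sets = num_sets
        self.k = k
        self.sets = [[] for _ in range(num_sets)]  # each set: addresses in LRU order

    def access(self, address):
        """Return True on a hit, False on a miss (installing the address)."""
        s = self.sets[address % self.num_sets]  # direct-mapped set selection
        if address in s:
            s.remove(address)
            s.append(address)   # move to most-recently-used position
            return True
        if len(s) >= self.k:
            s.pop(0)            # evict the least recently used line in this set
        s.append(address)
        return False

# 0x10 and 0x20 both map to set 0. In a direct-mapped cache (k=1) they
# would evict each other; with k=2 both fit, so the second access of 0x10 hits.
cache = SetAssociativeCache(num_sets=16, k=2)
cache.access(0x10)
cache.access(0x20)
hit = cache.access(0x10)   # True: still resident alongside 0x20
```

Note that k=1 reduces this to direct mapping, and a single set holding every line reduces it to full associativity, which is why set associativity is the practical middle ground.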