Introduction
A cache replacement algorithm decides which cache blocks to evict when a cache reaches its capacity. Many replacement policies and cache configurations have been implemented over the years, and only a few have proven efficient. In this project we focus on understanding the existing norms by trying out different cache configurations and studying their performance. The analysis is carried out with the gem5 simulator, running benchmarks with varying workloads and reporting their performance metrics. We consider four major components that affect cache performance: cache associativity, memory hierarchy, cache replacement algorithm, and cache size. We also go one step further by trying out configurations that depart from the norm.
Research Analysis on Cache
While a great deal of research has been done on cache memories, some important questions about cache replacement policies applied to state-of-the-art workloads still lack complete answers. In this section we list well-known observations about replacement policies, along with related questions to which our study offers answers.
Memory Hierarchy and Cache Levels
The memory hierarchy is generally designed with smaller, faster caches near the processor and larger, slower caches near main memory, which yields better overall performance. In this project we experimented by swapping the L2 and L3 caches and found interesting behavior; the new configuration is the L1 cache followed by L3 and then L2. With this configuration we also tested making the L3 and L2 caches exclusive.
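The intuition behind the conventional ordering can be sketched with a simple average memory access time (AMAT) calculation. The latencies and miss rates below are hypothetical placeholders for illustration, not gem5 measurements:

```python
def amat(latencies, miss_rates, mem_latency):
    """Average memory access time for a multi-level hierarchy.

    latencies[i]  -- hit latency of cache level i (cycles)
    miss_rates[i] -- local miss rate of cache level i
    """
    # Work from memory back toward L1: each level's cost is its hit
    # latency plus (miss rate * cost of going one level further out).
    cost = mem_latency
    for lat, mr in zip(reversed(latencies), reversed(miss_rates)):
        cost = lat + mr * cost
    return cost

# Conventional order: small/fast L2 probed before large/slow L3.
conventional = amat([1, 10, 30], [0.05, 0.3, 0.5], 200)
# Swapped order: the slower, larger cache is probed first.
swapped = amat([1, 30, 10], [0.05, 0.5, 0.3], 200)
print(conventional, swapped)  # the conventional order is cheaper here
```

Under these illustrative numbers the conventional ordering wins, which is why swapping L2 and L3 is a deliberate break from the norm rather than an expected improvement.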
Replacement Algorithms
Cache replacement algorithms interact strongly with the access patterns of the program and with performance as a whole. To understand this better, we break the existing norm of using the same replacement algorithm at every level and instead try different replacement policies at different cache levels.
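How strongly a policy depends on the access pattern can be seen even in a toy simulation. The sketch below (plain Python, not gem5) counts misses for LRU and FIFO on the same address trace; the trace is a contrived example with one hot address:

```python
from collections import OrderedDict, deque

def misses_lru(trace, capacity):
    """Count misses for a fully-associative LRU cache over an address trace."""
    cache, misses = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)        # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = True
    return misses

def misses_fifo(trace, capacity):
    """Count misses for a FIFO cache over the same trace."""
    cache, order, misses = set(), deque(), 0
    for addr in trace:
        if addr not in cache:
            misses += 1
            if len(cache) >= capacity:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(addr)
            order.append(addr)
    return misses

# A trace that keeps reusing address 0: LRU keeps it resident, FIFO
# eventually evicts it because eviction ignores recency of use.
trace = [0, 1, 2, 0, 3, 0, 4, 0, 5, 0]
print(misses_lru(trace, 3), misses_fifo(trace, 3))  # prints 6 7
```

On a scan-heavy trace the ranking can flip, which is exactly why mixing policies across levels is worth measuring rather than assuming.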
Associativity
Higher associativity leads to fewer cache conflicts and lower miss rates, but it also increases hardware cost. Choosing the right associativity for each cache level is therefore a challenge, so we study the effect of varying it and report the configuration that performs well.
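A minimal sketch of how associativity removes conflict misses (the cache geometry and addresses below are illustrative, with LRU within each set):

```python
def misses_set_assoc(trace, num_sets, ways, block_size=64):
    """Simulate a set-associative cache; returns the miss count."""
    sets = [[] for _ in range(num_sets)]  # each set holds up to `ways` tags
    misses = 0
    for addr in trace:
        block = addr // block_size
        idx, tag = block % num_sets, block // num_sets
        way = sets[idx]
        if tag in way:
            way.remove(tag)
            way.append(tag)   # move to MRU position
        else:
            misses += 1
            if len(way) >= ways:
                way.pop(0)    # evict the LRU tag in this set
            way.append(tag)
    return misses

# Addresses 0 and 4096 map to the same set in both geometries below.
# Direct-mapped thrashes; 2-way associativity (same total capacity)
# eliminates the conflict entirely.
trace = [0, 4096, 0, 4096, 0, 4096]
print(misses_set_assoc(trace, num_sets=64, ways=1))  # 6 misses
print(misses_set_assoc(trace, num_sets=32, ways=2))  # 2 misses
```

The trade-off is that each extra way adds comparators and wider tag reads, which is the hardware cost mentioned above.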
Inclusive And Exclusive Cache
Multi-level caches can be designed in various ways depending on whether the contents of one cache level are also present in another. If all blocks in the higher-level cache are also present in the lower-level cache, the lower-level cache is said to be inclusive of the higher-level cache. If the lower-level cache contains only blocks that are not present in the higher-level cache, it is said to be exclusive of the higher-level cache.
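The two invariants can be made concrete with a toy L2/L3 pair. The fully-associative LRU organization and tiny capacities below are simplifications for illustration, not how gem5 models it:

```python
class TwoLevel:
    """Toy L2/L3 pair illustrating inclusive vs. exclusive behavior."""
    def __init__(self, l2_cap, l3_cap, inclusive):
        self.l2, self.l3 = [], []  # LRU order: index 0 is the victim
        self.l2_cap, self.l3_cap = l2_cap, l3_cap
        self.inclusive = inclusive

    def access(self, block):
        if block in self.l2:
            self.l2.remove(block)
            self.l2.append(block)       # L2 hit: refresh recency
            return
        if block in self.l3:
            self.l3.remove(block)
            if self.inclusive:
                self.l3.append(block)   # inclusive: L3 keeps its copy
            self._fill_l2(block)        # exclusive: block moves up, leaves L3
            return
        if self.inclusive:
            self._fill_l3(block)        # inclusive miss: fill both levels
        self._fill_l2(block)

    def _fill_l2(self, block):
        if len(self.l2) >= self.l2_cap:
            victim = self.l2.pop(0)
            if not self.inclusive and victim not in self.l3:
                self._fill_l3(victim)   # exclusive: L2 victim drops into L3
        self.l2.append(block)

    def _fill_l3(self, block):
        if len(self.l3) >= self.l3_cap:
            evicted = self.l3.pop(0)
            if self.inclusive and evicted in self.l2:
                self.l2.remove(evicted)  # inclusive: back-invalidate L2 copy
        self.l3.append(block)
```

Running any trace through both variants preserves the defining properties: in the inclusive cache every L2 block also sits in L3, while in the exclusive cache the two levels never overlap, so their capacities effectively add up.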
Tool Used
Gem5 Simulator, Google Charts, Parser, CS Virginia Tech Benchmark