<simple_cache_tool>.add-l1i-cache ["name"] [-no-issue] [line-size] sets [ways] [read-penalty] [read-miss-penalty] [write-penalty] [write-miss-penalty] [prefetch-additional] [-write-through] [-no-write-allocate] [-prefetch-adjacent] [-ip-read-prefetcher] [-ip-write-prefetcher]
Add a level 1 instruction cache to all connected processors. Each hardware thread in the same core will be connected to the same level 1 cache. The command also creates an extra namespace, cache[N], for each core, where the cache hierarchy will be created. The -no-issue flag means that the connection will not issue any instruction accesses to the cache. This can be useful if the instruction cache should instead be driven by another tool, for instance a branch predictor. Also, if the cache block for an instruction is the same as for the last access, no instruction cache issue will be done, thus modeling that several instructions can be read from the same cache block. name can be given to set a name for the cache object in the hierarchy. line-size is the cache line size (default 64 bytes), sets the number of sets/indices, and ways the number of ways (default 1).
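As a rough illustration of how the line-size and sets arguments shape the cache geometry, the sketch below (plain Python, not Simics code; the function name and defaults are made up for the example) splits an address into tag, set index, and line offset, assuming both arguments are powers of two:

```python
def decompose(addr, line_size=64, sets=64):
    """Split an address into (tag, set index, line offset).

    line_size and sets are assumed to be powers of two, mirroring the
    command's line-size and sets arguments.
    """
    offset = addr % line_size            # byte within the cache line
    index = (addr // line_size) % sets   # which set the line maps to
    tag = addr // (line_size * sets)     # identifies the line within the set
    return tag, index, offset

# Example: with the default 64-byte lines and 64 sets,
# address 0x12345 falls at offset 5 of set 13, tag 18.
tag, index, offset = decompose(0x12345)
```

The ways argument then determines how many lines with the same index (but different tags) can be resident at once.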
You can configure the cache to be a write-through cache by giving the -write-through flag; the default is a write-back cache.
To prevent the cache from allocating lines on writes, use the -no-write-allocate flag.
The read-penalty, read-miss-penalty, write-penalty, and write-miss-penalty arguments set the penalties in cycles for cache accesses and misses, respectively. The read/write penalty is added to the total penalty when accessing the cache (i.e., the cost of reaching the cache); a miss penalty is added when a miss occurs and there is no next-level cache.
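A minimal sketch of how these penalties could combine, assuming the behavior described above (the function and its defaults are illustrative, not the tool's actual implementation):

```python
def access_penalty(is_read, hit, has_next_level,
                   read_penalty=1, read_miss_penalty=10,
                   write_penalty=1, write_miss_penalty=10):
    """Cycles charged for one cache access.

    The read/write penalty is always paid (the cost of reaching the
    cache); the miss penalty is paid only when the access misses and
    there is no next-level cache to forward the miss to.
    """
    total = read_penalty if is_read else write_penalty
    if not hit and not has_next_level:
        total += read_miss_penalty if is_read else write_miss_penalty
    return total

# A read miss in a last-level cache pays both penalties: 1 + 10 cycles.
cycles = access_penalty(is_read=True, hit=False, has_next_level=False)
```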
If prefetch-additional is given, the cache will prefetch that many additional consecutive cache lines on a miss.
-prefetch-adjacent means that the cache will, on a miss, also prefetch the adjacent cache line, so the total fetch region is twice the cache line size, naturally aligned.
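The "naturally aligned" region can be computed as below (a plain-Python illustration; the function name is made up for the example). Note that the adjacent line may sit either before or after the missing line, depending on which half of the region the miss falls in:

```python
def adjacent_region(addr, line_size=64):
    """Return (start, end) of the naturally aligned 2 * line_size
    region containing addr, as fetched with -prefetch-adjacent."""
    region = 2 * line_size
    start = (addr // region) * region  # round down to region boundary
    return start, start + region

# A miss on the line at 0x1040 pairs with the line at 0x1000:
# the region is [0x1000, 0x1080).
region = adjacent_region(0x1040)
```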
-ip-read-prefetcher and -ip-write-prefetcher add a hardware instruction-pointer-based stride prefetcher for reads and writes, respectively. Write prefetching will issue read-for-ownership prefetch accesses to the cache, meaning that other caches holding those lines will be forced to flush them.
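To make the idea of an instruction-pointer-based stride prefetcher concrete, here is a simplified sketch: a table keyed by instruction pointer tracks the stride between consecutive data addresses and, once the stride repeats, predicts the next line. The table layout and confidence policy are illustrative only, not the tool's actual algorithm:

```python
class StridePrefetcher:
    """Toy IP-based stride prefetcher (illustrative, not Simics code)."""

    def __init__(self):
        # ip -> (last address, last stride, confidence counter)
        self.table = {}

    def access(self, ip, addr):
        """Record a data access from ip and return addresses to prefetch."""
        last, stride, conf = self.table.get(ip, (None, 0, 0))
        prefetches = []
        if last is not None:
            new_stride = addr - last
            if new_stride == stride and stride != 0:
                conf += 1
                if conf >= 2:  # stride confirmed twice: predict next access
                    prefetches.append(addr + stride)
            else:
                conf = 0
            stride = new_stride
        self.table[ip] = (addr, stride, conf)
        return prefetches

# A load at ip 0x400 striding through memory by 0x40 bytes will start
# triggering prefetches once the stride has been observed repeatedly.
p = StridePrefetcher()
for a in (0x1000, 0x1040, 0x1080, 0x10c0):
    hints = p.access(0x400, a)
```

With -ip-write-prefetcher, the predicted lines would additionally be requested for ownership, invalidating copies in other caches as described above.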
Provided By
simple-cache-tool

See Also
<simple_cache_tool>.add-l2-cache, <simple_cache_tool>.add-l3-cache, <simple_cache_tool>.add-l3-cache-slice, <simple_cache_tool>.list-caches