Let us have a more detailed look at simple_cache. It has the following features:
Transactions are handled one at a time; all operations are performed in order and at once, and a total stall time is returned. The transaction is not reissued afterwards. Here is a short description of how simple_cache handles a transaction:
If the transaction is an instruction fetch, the connection object will send the transaction to the instruction cache, unless the connection has been configured not to send any instruction fetches.
If the transaction is a data access, the connection object will send the transaction to the data cache.
If the transaction is an (instruction) prefetch, a prefetch transaction will be sent to the data cache.
If the transaction is uncacheable, the connection object will ignore it and not send the transaction to the cache.
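As a rough illustration of these routing rules, here is a minimal Python sketch of the dispatch decision. The function and parameter names are invented for this example; this is not the simple_cache implementation.

    # Illustrative sketch of the routing rules above; all names are invented.
    def route(is_instruction_fetch, is_prefetch, is_uncacheable,
              send_instruction_fetches=True):
        """Return which cache, if any, the connection object forwards
        the transaction to."""
        if is_uncacheable:
            return None                    # ignored, never reaches a cache
        if is_prefetch:
            return "data cache"            # (instruction) prefetch -> data cache
        if is_instruction_fetch:
            if not send_instruction_fetches:
                return None                # connection configured to drop fetches
            return "instruction cache"
        return "data cache"                # ordinary data access

    # Example: an instruction fetch on a normally configured connection.
    assert route(True, False, False) == "instruction cache"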
If the transaction is a read hit, simple_cache returns read_penalty cycles of penalty.
If the transaction is a read miss, simple_cache will use the LRU algorithm to determine the cache line to allocate. If the cache line selected for replacement holds modified data, a write-back transaction is initiated to the next level cache; in this case, a write_penalty from the next level cache is added. The new data is fetched from the next level, incurring read_penalty cycles of penalty added to the penalty returned by the next level. The read_miss_penalty is also added to the stall time.
The snoop_penalty is added when sending snoop transactions to the other caches (modeled by the directory objects). If another cache needs to empty a line, that penalty is also added.
The total penalty returned is the sum of read_penalty, plus the penalties associated with the write-back (if any), plus the penalties associated with the line fetch, plus the read_miss_penalty, plus the snoop penalty.
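The read-path accounting above can be summed up in a small sketch. The function below only mirrors the description in this section; its name and parameters are hypothetical and not part of the simple_cache API.

    # Illustrative arithmetic for a read transaction; all names are invented.
    def read_stall_time(read_penalty, read_miss_penalty, snoop_penalty,
                        hit, victim_dirty,
                        write_back_penalty, line_fetch_penalty,
                        snoop_empty_penalty):
        stall = read_penalty                 # paid on every read, hit or miss
        if hit:
            return stall                     # read hit: read_penalty only
        if victim_dirty:
            stall += write_back_penalty      # copy back the modified LRU victim
        stall += line_fetch_penalty          # penalty returned by the next level
        stall += read_miss_penalty           # extra cost of missing in this cache
        stall += snoop_penalty + snoop_empty_penalty  # snooping the other caches
        return stall

    # Example: miss with a dirty victim -> 1 + 3 + 10 + 5 + 2 + 0 = 21 cycles.
    print(read_stall_time(1, 5, 2, hit=False, victim_dirty=True,
                          write_back_penalty=3, line_fetch_penalty=10,
                          snoop_empty_penalty=0))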
If the transaction is a write hit, simple_cache returns write_penalty cycles of penalty.
If the transaction is a write miss, the write penalty is added; what happens then depends on whether the cache is write-allocate (penalties for reading the cache line first are added) or write-through, in which case the transaction is sent to the next level and the write_miss_penalty is added.
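A corresponding sketch of the write path, under the assumption (following the description above) that write_miss_penalty is charged when the write is forwarded to the next level; the names are again invented for illustration.

    # Illustrative arithmetic for a write transaction; all names are invented.
    def write_stall_time(write_penalty, write_miss_penalty, hit,
                         write_allocate, line_fill_penalty,
                         next_level_write_penalty):
        stall = write_penalty                # paid on every write, hit or miss
        if hit:
            return stall                     # write hit: write_penalty only
        if write_allocate:
            stall += line_fill_penalty       # read the cache line first
        else:
            # Write-through: send the write on and pay the miss cost here.
            stall += next_level_write_penalty + write_miss_penalty
        return stall

    # Example: write-through miss -> 1 + 4 + 6 = 11 cycles.
    print(write_stall_time(1, 6, hit=False, write_allocate=False,
                           line_fill_penalty=0, next_level_write_penalty=4))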
Instruction fetches and prefetches are handled like read transactions, except that the penalty for prefetches is not added to the stall time.
Note the use of the (read/write)_penalty and the (read/write)_miss_penalty: the former are added regardless of whether the access is a hit or a miss (the time to reach the cache), while the latter are the additional cost of a miss. The write penalties can usually be set to zero to model (unlimited) store buffers.
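To make the distinction concrete, here is a toy parameter set in plain Python (not Simics configuration syntax) showing how the base penalties and the miss penalties combine, with the write penalty set to zero to mimic an unlimited store buffer; the contribution of the next-level line fetch is left out.

    # Toy illustration of base penalties versus miss penalties; invented names.
    from dataclasses import dataclass

    @dataclass
    class CachePenalties:
        read_penalty: int = 1        # paid on every read, hit or miss
        write_penalty: int = 0       # zero: writes vanish into a store buffer
        read_miss_penalty: int = 10  # extra cost only when a read misses
        write_miss_penalty: int = 0

    p = CachePenalties()
    print("read hit:  ", p.read_penalty, "cycles")
    print("read miss: ", p.read_penalty + p.read_miss_penalty,
          "cycles plus the next-level fetch")
    print("write hit: ", p.write_penalty, "cycles")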