The document discusses cache memory principles and design. It begins with an overview of the memory hierarchy and how cache memory fits between the CPU and main memory. It then covers key elements of cache design, including cache addressing, size, mapping functions, and replacement algorithms. Mapping functions discussed include direct mapping, associative mapping, and set associative mapping. The document provides examples and diagrams to illustrate these cache mapping techniques.
Outline
• Computer Memory System Overview
  – Characteristics of Memory Systems
  – The Memory Hierarchy
• Cache Memory Principles
• Elements of Cache Design
  – Cache Addresses
  – Cache Size
  – Mapping Function
  – Replacement Algorithms
  – Write Policy
  – Line Size
  – Number of Caches
• Pentium 4 Cache Organization
Introduction
• The complex subject of computer memory is made more manageable if we classify memory systems according to their characteristics.
Key Characteristics of Computer Memory Systems (Table 4.1)
1. Location
• Refers to whether memory is internal or external to the computer
• Internal memory is often equated with main memory
• The processor requires its own local memory, in the form of registers
• Cache is another form of internal memory
• External memory consists of peripheral storage devices that are accessible to the processor via I/O controllers
2. Capacity
• Memory capacity is typically expressed in terms of bytes or words; in other words, it is the total number of bits that can be stored.
• Word
  – A group of bits in memory that represents instructions or data
  – The natural unit of organization
  – Common word lengths are 8, 16, and 32 bits
• Byte
  – The smallest unit of binary data is the bit; a collection of 8 bits is known as a byte (1 byte = 8 bits)
3. Unit of Transfer
• For internal memory (main memory), this is the number of bits read out of or written into memory at a time.
• The unit of transfer need not equal a word or an addressable unit.
• For external memory, data are often transferred in units much larger than a word; these are referred to as blocks.
4. Access Methods (1)
• Sequential access
  – Memory is organized into units of data called records
  – Access must be made in a specific linear sequence
  – Access time is variable
• Direct access
  – Involves a shared read-write mechanism
  – Individual blocks or records have a unique address based on physical location
  – Access time is variable
4. Access Methods (2)
• Random access
  – Each addressable location in memory has a unique, physically wired-in addressing mechanism
  – The time to access a given location is independent of the sequence of prior accesses and is constant
  – Any location can be selected at random and directly addressed and accessed
  – Main memory and some cache systems are random access
• Associative
  – A word is retrieved based on a portion of its contents rather than its address
  – Each location has its own addressing mechanism, and retrieval time is constant independent of location or prior access patterns
  – Cache memories may employ associative access
5. Capacity and Performance
• Capacity and performance are the two most important characteristics of memory. Three performance parameters are used:
• Access time (latency)
  – For random-access memory, it is the time it takes to perform a read or write operation
  – For non-random-access memory, it is the time it takes to position the read-write mechanism at the desired location
• Memory cycle time
  – Access time plus any additional time required before a second access can commence
  – Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
  – Concerned with the system bus, not the processor
• Transfer rate
  – The rate at which data can be transferred into or out of a memory unit
  – For random-access memory it is equal to 1/(cycle time)
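For non-random-access memory, the average time to read or write N bits is commonly expressed as follows (a standard textbook formulation; the symbols are assumed here rather than taken from the slides):

    T_N = T_A + N / R

where T_N is the average time to read or write N bits, T_A is the average access time, N is the number of bits, and R is the transfer rate in bits per second. For random-access memory, the simpler relation R = 1 / (cycle time) from the slide applies.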
6. Physical Types
• Semiconductor – RAM
• Magnetic surface memory – disk and tape
• Optical – CD and DVD
• Magneto-optical – notebook computers
7. Physical Characteristics
• Volatility
  – Volatile: information decays naturally or is lost when electrical power is switched off (example: most semiconductor memory, such as RAM)
  – Non-volatile: once recorded, information remains without deterioration until deliberately changed, and no electrical power is needed to retain it (example: magnetic-surface memories and some semiconductor memory, such as ROM)
• Erasable vs. non-erasable
  – Non-erasable memory cannot be altered, except by destroying the storage unit
  – Semiconductor memory of this type is known as read-only memory (ROM)
• Power consumption
Principle of Locality
• Principle of locality (or locality of reference): a program accesses a relatively small portion of the address space at any instant of time.
• Two forms: temporal locality and spatial locality.
(Figure: probability of reference across the address space, 0 to 2^n − 1)
Principle of Locality (2)
• Temporal locality (locality in time): keep the most recently accessed data items closer to the processor.
• Spatial locality (locality in space): move blocks consisting of contiguous words to the ‘upper’ levels. A block is the unit of transfer.
Principle of Locality (3)
• What programming constructs lead to the principle of locality?
  – Data: spatial locality – arrays; temporal locality – loop counters
  – Code: spatial locality – straight-line code (no branch/jump); temporal locality – loops
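A minimal C sketch of these constructs (illustrative only, not taken from the slides):

    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int a[N];
        int sum = 0;                    /* sum and i are reused on every
                                           iteration: temporal locality        */
        for (int i = 0; i < N; i++) {   /* the loop body itself gives the code
                                           stream temporal locality            */
            a[i] = i;                   /* consecutive array elements touched
                                           in order: spatial locality          */
            sum += a[i];
        }
        printf("%d\n", sum);
        return 0;
    }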
The Memory Hierarchy
• Design constraints on a computer’s memory can be summed up by three questions: how much, how fast, how expensive?
• There is a trade-off among capacity, access time, and cost:
  – Faster access time, greater cost per bit
  – Greater capacity, smaller cost per bit
  – Greater capacity, slower access time
• The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy
Memory Hierarchy – Basic Theoretical Diagram (top to bottom)
• Registers
• L1 cache
• L2 cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape
Memory Hierarchy – Simplified “Real Life” Diagram
Memory Hierarchy – Current
• Processor registers (inside the processor, alongside the control and datapath): ~0.5 ns, ~0.0005 MB
• L1 cache (on the processor): ~2 ns, ~0.05 MB, SRAM, ~$100/MB
• L2 cache: ~6 ns, 1–4 MB, SRAM, ~$30/MB
• Main memory: ~100 ns, 100–1000 MB, DRAM, ~$1/MB
• Secondary memory: ~10,000,000 ns, ~100,000 MB, disk, ~$0.05/MB
The 3 Parameters
• Three parameters change as we move down the hierarchy: access time, storage capacity, and cost.
• First, access time gets bigger; e.g. CPU registers can be accessed in a few nanoseconds.
• Second, storage capacity increases as we go downwards: CPU registers – around 128 bytes; caches – a few megabytes; main memory – thousands of megabytes; magnetic disks – a few gigabytes.
• Third, the number of bits per RM spent increases down the hierarchy, i.e. the cost per bit decreases.
The Bottom Line
• How much? – capacity
• How fast? – time is money
• How expensive? – cost per bit
So you want fast?
• It is possible to build a computer which uses only static RAM (see later)
• This would be very fast
• This would need no cache – what would you cache it with?
• But it would cost a very large amount
Cache Memory Principles
• Small amount of fast memory
• Sits between normal main memory and the CPU
• May be located on the CPU chip or in a module
Other Definitions of Cache
1. Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from the larger memory.
2. A cache is a small-capacity fast memory, too expensive to be used for the whole of RAM. It acts as an intermediate store between the CPU and main memory and is used to improve the overall speed of the computer, since it primarily keeps the most active portions of the program being used.
3. A special, high-speed memory unit.
4. A temporary, very fast, digital storage for a central processor.
Cache Operation – Overview
• CPU requests the contents of a memory location
• Check the cache for this data
• If present, get it from the cache (fast)
• If not present, read the required block from main memory into the cache
• Then deliver it from the cache to the CPU
• The cache includes tags to identify which block of main memory is in each cache slot
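A compact, runnable C sketch of this flow, using a tiny toy cache purely for illustration (all sizes and names are made up for this example; the placement rule used here is the simplest one, described later as direct mapping):

    #include <stdio.h>
    #include <string.h>

    #define LINES 4                       /* toy cache: 4 slots             */
    #define BLOCK 4                       /* 4 bytes per block/line         */

    static unsigned char memory[256];     /* toy "main memory"              */
    static unsigned char cache[LINES][BLOCK];
    static unsigned tag[LINES];           /* which memory block is cached   */
    static int valid[LINES];

    unsigned char read_byte(unsigned addr) {
        unsigned block = addr / BLOCK;    /* block number in main memory    */
        unsigned line  = block % LINES;   /* slot this block may occupy     */
        if (!valid[line] || tag[line] != block) {            /* miss        */
            memcpy(cache[line], &memory[block * BLOCK], BLOCK);
            tag[line] = block;
            valid[line] = 1;
        }
        return cache[line][addr % BLOCK]; /* deliver from cache to CPU      */
    }

    int main(void) {
        for (int i = 0; i < 256; i++) memory[i] = (unsigned char)i;
        printf("%d %d\n", read_byte(10), read_byte(11));  /* miss, then hit */
        return 0;
    }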
Elements of Cache Design (Table 4.2)
1. Cache Addresses
• Virtual memory
  – A facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available
  – When virtual memory is used, the address fields of machine instructions contain virtual addresses
  – For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory
2. Cache Size
• Cost
  – More cache is more expensive
• Speed
  – More cache is generally faster (up to a point)
  – Checking the cache for data takes time
Table 4.3 – Cache Sizes of Some Processors
a. Two values separated by a slash refer to instruction and data caches.
b. Both caches are instruction only; no data caches.
3. Mapping Function
• Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines
• Three techniques can be used:
  1. Direct
  2. Associative
  3. Set associative
3. Mapping Function (Running Example)
• Cache of 64 KBytes
• Cache block of 4 bytes
  – i.e. the cache has 16K (2^14) lines of 4 bytes each
• 16 MBytes of main memory
• 24-bit address (2^24 = 16M)
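Worked through with the slide’s numbers:

    number of cache lines   = 64 KBytes / 4 bytes = 2^16 / 2^2 = 2^14 = 16K lines
    number of memory blocks = 16 MBytes / 4 bytes = 2^24 / 2^2 = 2^22 = 4M blocks

So each cache line must be shared among 2^22 / 2^14 = 2^8 = 256 memory blocks.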
3. Mapping Function: Direct Mapping
• The simplest technique
• Each block of main memory maps to only one possible cache line
  – i.e. if a block is in the cache, it must be in one specific place
• The address is viewed in two parts
3. Mapping Function: Direct Mapping (Address Structure)
• The least significant w bits identify a unique word within a block
• The most significant s bits specify one memory block
• The MSBs (most significant bits) are further split into a cache line field of r bits and a tag of s − r bits (the most significant part)
3. Mapping Function: Direct Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = m = 2^r
• Size of tag = (s − r) bits
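For the running example (24-bit address, w = 2, r = 14, so the tag is s − r = 8 bits), a hedged C sketch of how an address would be split into its tag, line, and word fields (the sample address is arbitrary):

    #include <stdio.h>

    #define W 2                    /* word field: 4-byte lines            */
    #define R 14                   /* line field: 2^14 cache lines        */

    int main(void) {
        unsigned addr = 0x16339C;  /* arbitrary 24-bit example address    */
        unsigned word = addr & ((1u << W) - 1);         /* low w bits     */
        unsigned line = (addr >> W) & ((1u << R) - 1);  /* next r bits    */
        unsigned tag  = addr >> (W + R);                /* top s - r bits */
        printf("tag=%02X line=%04X word=%X\n", tag, line, word);
        return 0;
    }

A block is placed in the cache at index “line”; on a lookup, the tag stored for that line is compared with the tag field of the address.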
3. Mapping Function: Direct Mapping Pros & Cons
• Pros: simple and inexpensive to implement
• Cons: there is a fixed cache location for any given block
  – If a program repeatedly accesses two blocks that map to the same line, the blocks will be continually swapped in and out of the cache, causing the miss rate to be very high (thrashing)
  – For instance, with the field widths of the running example, addresses 0x16339C and 0xFF339C both map to line 0x0CE7 but carry different tags, so alternating accesses to them evict the line every time
3. Mapping Function: Fully Associative (Victim) Cache
• One way to reduce the conflict misses of a direct-mapped cache without affecting its fast access time is to remember what was discarded, in case it is needed again
• Since the discarded data has already been fetched, it can be used again at a small cost
• Such recycling is possible by using a victim cache – a small, fully associative cache
• Typical size is 4 to 16 cache lines
• It resides between the direct-mapped L1 cache and the next level of memory
3. Mapping Function: Fully Associative Cache Organization
3. Mapping Function: Associative Mapping
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a tag field and a word field
• The tag uniquely identifies a block of main memory
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s tag for a match
• Cache searching therefore gets expensive
3. Mapping Function: Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined (not a function of the address format)
• Size of tag = s bits
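A hedged software sketch of the tag search (real hardware compares all tags in parallel; the sizes and names here are made up for illustration). With the running example, the tag is simply the full 22-bit block number:

    #include <stdio.h>

    #define LINES 8                      /* tiny fully associative cache      */
    #define W 2                          /* 4-byte lines                      */

    static unsigned tag[LINES];
    static int valid[LINES];

    /* Return the cache line holding 'addr', or -1 on a miss. */
    int assoc_lookup(unsigned addr) {
        unsigned t = addr >> W;          /* tag = the entire block number     */
        for (int i = 0; i < LINES; i++)  /* hardware checks all lines at once */
            if (valid[i] && tag[i] == t)
                return i;
        return -1;
    }

    int main(void) {
        valid[3] = 1;
        tag[3] = 0x16339C >> W;          /* pretend this block was loaded     */
        printf("%d %d\n", assoc_lookup(0x16339C), assoc_lookup(0xFF339C));
        return 0;
    }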
3. Mapping Function: Set-Associative Mapping
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
• The cache consists of a number of sets
• Each set contains a number of lines
• A given block maps to any line in one particular set
  – Block B can be in any line of set i
• e.g. 2 lines per set
  – 2-way set-associative mapping
  – A given block can be in one of 2 lines, in only one set
3. Mapping Function: Set-Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines per set = k
• Number of sets = v = 2^d
• Number of lines in cache = m = kv = k × 2^d
• Size of cache = k × 2^(d+w) words or bytes
• Size of tag = (s − d) bits
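For a 2-way version of the running example (2^14 lines arranged as 2^13 sets, so d = 13 and the tag grows to s − d = 9 bits), a hedged sketch of the field split (sample address arbitrary):

    #include <stdio.h>

    #define W 2                    /* word field: 4-byte lines                 */
    #define D 13                   /* set field: 2^13 sets (2-way, 2^14 lines) */

    int main(void) {
        unsigned addr = 0x16339C;  /* arbitrary 24-bit example address         */
        unsigned word = addr & ((1u << W) - 1);
        unsigned set  = (addr >> W) & ((1u << D) - 1);  /* d set bits          */
        unsigned tag  = addr >> (W + D);                /* s - d tag bits      */
        printf("tag=%03X set=%04X word=%X\n", tag, set, word);
        return 0;
    }

Compared with direct mapping, one bit has moved from the line/set field into the tag; the block may now occupy either of the k = 2 lines in its set, so the stored tags of both lines are compared in parallel.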
3. Mapping Function: Varying Associativity Over Cache Size
4. Replacement Algorithms
• Once the cache has been filled, bringing a new block into the cache means one of the existing blocks must be replaced
• For direct mapping there is only one possible line for any particular block, so no choice is possible
• For the associative and set-associative techniques a replacement algorithm is needed
• To achieve high speed, the algorithm must be implemented in hardware
4. Replacement Algorithms: Associative & Set-Associative
• Hardware-implemented algorithms (for speed):
  1. Least recently used (LRU)
  2. First-in-first-out (FIFO)
  3. Least frequently used (LFU)
  4. Random
4. Replacement Algorithms: LRU
• Least recently used (LRU)
  – Most effective
  – Replace the block in the set that has been in the cache longest with no reference to it
  – Because of its simplicity of implementation, LRU is the most popular replacement algorithm
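A minimal, runnable C sketch of LRU bookkeeping for a single 4-way set, using a timestamp per line (real hardware typically uses cheaper approximations; all names here are illustrative):

    #include <stdio.h>

    #define K 4                           /* lines per set (4-way)            */

    static unsigned line_tag[K];
    static unsigned long last_used[K];    /* timestamp of most recent use     */
    static int valid[K];
    static unsigned long now;

    /* Access the line holding 'tag'; on a miss, evict the least recently
       used (or an empty) line.  Returns the line index that was used.        */
    int lru_access(unsigned tag) {
        int victim = 0;
        for (int i = 0; i < K; i++) {
            if (valid[i] && line_tag[i] == tag) {  /* hit: refresh timestamp   */
                last_used[i] = ++now;
                return i;
            }
            if (!valid[i] || last_used[i] < last_used[victim])
                victim = i;               /* remember oldest (or empty) line   */
        }
        line_tag[victim] = tag;           /* miss: replace the LRU line        */
        valid[victim] = 1;
        last_used[victim] = ++now;
        return victim;
    }

    int main(void) {
        unsigned trace[] = {1, 2, 3, 4, 1, 5};   /* tag 5 evicts tag 2, not 1  */
        for (int i = 0; i < 6; i++)
            printf("tag %u -> line %d\n", trace[i], lru_access(trace[i]));
        return 0;
    }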
4. Replacement Algorithms: FIFO
• First-in-first-out (FIFO)
  – Replace the block in the set that has been in the cache longest
  – Easily implemented as a round-robin or circular buffer technique
4. Replacement Algorithms: LFU
• Least frequently used (LFU)
  – Replace the block in the set that has experienced the fewest references
  – Could be implemented by associating a counter with each line
5. Write Policy
• When a block that is resident in the cache is to be replaced, there are two cases to consider:
  – If the old block in the cache has not been altered, it may be overwritten with the new block without first writing out the old block
  – If at least one write operation has been performed on a word in that line of the cache, main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block
• There are two problems to contend with:
  – More than one device may have access to main memory
  – A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache: if a word is altered in one cache, it could conceivably invalidate a word in other caches
5. Write Policy – In Other Words…
• Must not overwrite a cache block unless main memory is up to date
• Multiple CPUs may have individual caches
• I/O may address main memory directly
• The two methods of write policy are…
5. Write Policy: Write Through
• The simplest technique
• All write operations are made to main memory as well as to the cache
• The main disadvantage is that it generates substantial memory traffic and may create a bottleneck
5. Write Policy: Write Back
• Minimizes memory writes
• Updates are initially made only in the cache
• An update (dirty) bit for the cache slot is set when an update occurs
• When a block is to be replaced, it is written back to main memory only if the update bit is set
• Main memory (and any other cache) can therefore get out of sync with the updated cache
• Portions of main memory become invalid, and hence accesses by I/O modules can be allowed only through the cache
• This makes for complex circuitry and a potential bottleneck
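A hedged C sketch of the update-bit bookkeeping that distinguishes the two policies (a toy direct-mapped cache again; all sizes and names are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    #define LINES 4
    #define BLOCK 4

    static unsigned char memory[256];      /* toy "main memory"               */
    static unsigned char cache[LINES][BLOCK];
    static unsigned tag[LINES];
    static int valid[LINES], dirty[LINES];

    /* Write back: update only the cache and set the dirty (update) bit;
       the block reaches main memory only when the line is later evicted.     */
    void write_byte(unsigned addr, unsigned char value) {
        unsigned block = addr / BLOCK;
        unsigned line  = block % LINES;    /* direct mapped, for brevity      */
        if (!valid[line] || tag[line] != block) {
            if (valid[line] && dirty[line])        /* evict: flush old block  */
                memcpy(&memory[tag[line] * BLOCK], cache[line], BLOCK);
            memcpy(cache[line], &memory[block * BLOCK], BLOCK);
            tag[line] = block;
            valid[line] = 1;
            dirty[line] = 0;
        }
        cache[line][addr % BLOCK] = value;
        dirty[line] = 1;                   /* main memory is now stale        */
        /* Write through would instead also do  memory[addr] = value;
           on every write, and would not need the dirty bit at all.           */
    }

    int main(void) {
        write_byte(10, 0xAB);
        printf("memory[10] = %02X (still stale until the line is evicted)\n",
               (unsigned)memory[10]);
        return 0;
    }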
6. Line Size
• When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved
• As the block size increases, the hit ratio will at first increase because of the principle of locality: more useful data are brought into the cache
• The hit ratio will begin to decrease as the block becomes bigger still and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced
6. Line Size (2)
• Two specific effects come into play:
  – Larger blocks reduce the number of blocks that fit into the cache
  – As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future
7. Number of Caches
• When caches were originally introduced, the typical system had a single cache
• More recently, the use of multiple caches has become the norm
7. Number of Caches: Multilevel Caches
• As logic density has increased, it has become possible to have a cache on the same chip as the processor
• The on-chip cache reduces the processor’s external bus activity, speeds up execution time, and increases overall system performance
  – When the requested instruction or data is found in the on-chip cache, the bus access is eliminated
  – On-chip cache accesses complete appreciably faster than would even zero-wait-state bus cycles
  – During this period the bus is free to support other transfers
7. Number of Caches: Multilevel Caches (2)
• Two-level cache:
  – The internal cache is designated level 1 (L1)
  – The external cache is designated level 2 (L2)
• The potential savings due to the use of an L2 cache depend on the hit rates in both the L1 and L2 caches
• The use of multilevel caches complicates all of the design issues related to caches, including size, replacement algorithm, and write policy
7. Number of Caches: Unified Versus Split Caches
• It has become common to split the cache:
  – One cache dedicated to instructions
  – One cache dedicated to data
  – Both exist at the same level, typically as two L1 caches
• Advantages of a unified cache:
  – Higher hit rate, because it balances the load between instruction and data fetches automatically
  – Only one cache needs to be designed and implemented
7. Number of Caches: Unified Versus Split Caches (2)
• The trend is toward split caches at the L1 level and unified caches for higher levels
• Advantage of a split cache:
  – It eliminates cache contention between the instruction fetch/decode unit and the execution unit, which is important in pipelining
Pentium 4 Cache Organization
• The evolution of cache organization is seen clearly in the evolution of Intel microprocessors (Table 4.4)