
Commit 0f39966

1 parent 23b6b27 commit 0f39966

2 files changed, +2 -2 lines changed


book/data-locality.markdown

Lines changed: 1 addition & 1 deletion
@@ -139,7 +139,7 @@ It all boils down to something pretty simple: whenever the chip reads some memory

 <aside name="line">

-There's a key assumption here, though: one thread. If you are accessing nearby data on multiple threads, it's faster to have it on *different* cache lines. If two threads try to use data on the same cache line, both cores have to do some costly synchronization of their caches.
+There's a key assumption here, though: one thread. If you are modifying nearby data on multiple threads, it's faster to have it on *different* cache lines. If two threads try to tweak data on the same cache line, both cores have to do some costly synchronization of their caches.

 </aside>

html/data-locality.html

Lines changed: 1 addition & 1 deletion
@@ -164,7 +164,7 @@ <h3><a href="#wait,-data-is-performance" name="wait,-data-is-performance">Wait,
 <p>It all boils down to something pretty simple: whenever the chip reads some memory, it gets a whole cache line. The more you can use stuff in that <span name="line">cache line, the faster you go</span>. So the goal then is to <em>organize your data structures so that the things you&#x2019;re processing are next to each other in memory</em>.</p>
 <aside name="line">

-<p>There&#x2019;s a key assumption here, though: one thread. If you are accessing nearby data on multiple threads, it&#x2019;s faster to have it on <em>different</em> cache lines. If two threads try to use data on the same cache line, both cores have to do some costly synchronization of their caches.</p>
+<p>There&#x2019;s a key assumption here, though: one thread. If you are modifying nearby data on multiple threads, it&#x2019;s faster to have it on <em>different</em> cache lines. If two threads try to tweak data on the same cache line, both cores have to do some costly synchronization of their caches.</p>
 </aside>

 <p>In other words, if your code is crunching on <code>Thing</code> then <code>Another</code> then <code>Also</code>, you want them laid out in memory like this:</p>
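
A side note on the change itself: the slowdown the reworded sentence describes, where two threads writing to the same cache line force the cores to keep synchronizing it, is commonly called false sharing. Below is a minimal C++ sketch, not part of this commit, assuming a 64-byte cache line; the struct and function names are made up for illustration.

```cpp
#include <atomic>
#include <thread>

// Both counters typically end up on the same cache line, so two threads
// writing to them keep invalidating that line in each other's core cache
// (false sharing). Reading alone would not cause this ping-pong.
struct PackedCounters {
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

// Aligning each counter to its own (assumed) 64-byte cache line removes the
// contention: each core can keep its line to itself.
struct PaddedCounters {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

// Hypothetical workload: one thread hammers counter a, the other counter b.
template <typename Counters>
void hammer(Counters& counters) {
    std::thread t1([&] { for (int i = 0; i < 10000000; ++i) counters.a.fetch_add(1); });
    std::thread t2([&] { for (int i = 0; i < 10000000; ++i) counters.b.fetch_add(1); });
    t1.join();
    t2.join();
}

int main() {
    PackedCounters packed;   // threads fight over one shared cache line
    hammer(packed);
    PaddedCounters padded;   // threads write to different cache lines
    hammer(padded);          // typically noticeably faster on multicore hardware
}
```

This is also why the wording change from "accessing" to "modifying" matters: multiple readers of one cache line are cheap; it is concurrent writers that trigger the costly cache synchronization.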
