- [CppCon 2017: Fedor Pikus, C++ atomics, from basic to advanced. What do they really do?](https://www.youtube.com/watch?v=ZQFzMfHIxng): How atomics work, atomic (lock-free) vs mutex-based, dealing with issues.
`compare_exchange_weak()`: Does not guarantee that the operation succeeds, even if `expected` equals the current value. Such spurious failures can happen on machines without a single *compare-exchange* instruction (e.g., LL/SC-based RISC processors such as ARM). However, it is usually faster to execute, and the operation can simply be retried in a loop.
`compare_exchange_strong()`: Guaranteed to succeed whenever `expected` equals the current value (no spurious failures), but it may compile to more than one instruction (internally a retry loop on LL/SC machines).
- Compare-and-exchange operations are often used as basic building blocks of lock-free algorithms and data structures. These are based on retry loops like: `std::atomic<int> x{0}; int x0 = x; while(!x.compare_exchange_strong(x0, x0+1)) {}`. After completion `x0` holds a consistent value, even with multiple threads. Notice how the CAS loop enables lock-free multithreading.
- Compare-and-exchange is so important because it is a universal primitive: it atomically combines a read, a comparison, and a conditional write, which is enough to implement essentially any other atomic read-modify-write operation and most lock-free data structures.
**Relevant Atomic Types**:
[atomic_flag](https://en.cppreference.com/w/cpp/atomic/atomic_flag): Atomic boolean type guaranteed to be lock-free. All other atomic types provide `is_lock_free()`, whose result depends on the platform. Its API is limited compared to `std::atomic<bool>` (essentially only test-and-set and clear before C++20).
- Lock-Free: Some thread makes progress with every step.
- Wait-Free: Every thread makes progress regardless of the others.
**Memory Reclaim Mechanisms**: TODO. NEEDS CHECKING AND REWRITE.
**Thread Counting**: Keep a count of the threads currently inside the data structure. While the count is greater than 1, critical operations (e.g., deleting a node) are added to a list of pending operations. When the count drops to 1, the last thread takes care of the pending operations.
**Hazard Pointers**: Maintain a global list of *hazard pointers*, each entry holding a thread id and a pointer address. Before accessing a node that another thread might delete, a thread publishes that node's address as its hazard pointer. Before deleting a node, a thread checks the list: if the address appears there, some thread is still using the node, so instead of deleting it the deleter stores it on a pending list and reclaims it later, once no hazard pointer references it.
**Reference Counting**: There are many implementations. In this one there are two counters per node, one external and one internal. The external counter is incremented every time the pointer is read; when a reader has finished using the node, it decrements the internal counter. The sum of the two counters is the total number of outstanding references to the node; when it reaches zero, the node can be deleted.
- As the number of processors increases, so does *contention* on a single shared queue. In this scenario, cache ping-pong can be a great time sink. One way to avoid the ping-pong is to give each thread its own work queue; a thread then takes work from the global queue only when its local queue is empty.
- Work Stealing: Idle threads can be implemented to steal work from threads whose queues are full. This is handled by a specialized *work-stealing queue*, which lets the owning thread push and pop at the front while other threads steal from the back.