High-level functions for doing parallel programming with Rcpp. For example, the `parallelFor` function can be used to convert the work of a standard serial "for" loop into a parallel one, and the `parallelReduce` function can be used for accumulating aggregates or other values.
The high level interface enables safe and robust parallel programming without direct manipulation of operating system threads. On Windows, OS X, and Linux systems the underlying implementation is based on [Intel TBB](https://www.threadingbuildingblocks.org/) (Threading Building Blocks). On other platforms a less-performant fallback implementation based on the [TinyThread](http://tinythreadpp.bitsnbites.eu/) library is used.
### Examples
Here are links to some examples that illustrate using RcppParallel. Performance benchmarks were executed on a 2.6GHz Haswell MacBook Pro with 4 cores (8 with hyperthreading).
[Parallel Matrix Transform](http://gallery.rcpp.org/articles/parallel-matrix-transform/) --- Demonstrates using `parallelFor` to transform a matrix (take the square root of each element) in parallel. In this example the parallel version performs about 2.5x faster than the serial version.
[Parallel Vector Sum](http://gallery.rcpp.org/articles/parallel-vector-sum/) --- Demonstrates using `parallelReduce` to take the sum of a vector in parallel. In this example the parallel version performs 4.5x faster than the serial version.
[Parallel Distance Matrix](http://gallery.rcpp.org/articles/parallel-distance-matrix/) --- Demonstrates using `parallelFor` to compute pairwise distances for each row in an input data matrix. In this example the parallel version performs 5.5x faster than the serial version.
[Parallel Inner Product](http://gallery.rcpp.org/articles/parallel-inner-product/) --- Demonstrates using `parallelReduce` to compute the inner product of two vectors in parallel. In this example the parallel version performs 2.5x faster than the serial version.
Note that the benchmark times above are for the TBB back-end. Platforms that fall back to the TinyThread implementation will be roughly 30-50% slower as a result of less sophisticated thread scheduling.
### Usage
You can install the RcppParallel package from CRAN as follows:
```s
install.packages("RcppParallel")
```
#### sourceCpp
You can use the RcppParallel library from within a standalone C++ source file as follows:
```cpp
// [[Rcpp::depends(RcppParallel)]]
#include <RcppParallel.h>
```
#### Packages
If you want to use RcppParallel from within an R package, add the following to your DESCRIPTION file:
```yaml
Imports: RcppParallel
LinkingTo: RcppParallel
```
And the following to your NAMESPACE file:
```s
import(RcppParallel)
```
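Packages linking against RcppParallel also typically need to pull in the library's linker flags from their `src/Makevars`. The package exposes a helper, `RcppParallel::RcppParallelLibs()`, for this purpose; the line below is a sketch of the usual form (check the current package documentation for the exact incantation on your platform):

```
PKG_LIBS += $(shell "${R_HOME}/bin/Rscript" -e "RcppParallel::RcppParallelLibs()")
```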
For additional documentation on using RcppParallel see the package website at http://rcppcore.github.io/RcppParallel/.