@@ -27,24 +27,24 @@ either [`wrk`][wrk] or [`autocannon`][autocannon].

 `Autocannon` is a Node.js script that can be installed using
 `npm install -g autocannon`. It will use the Node.js executable that is in the
-path. Hence if you want to compare two HTTP benchmark runs, make sure that the
+path. In order to compare two HTTP benchmark runs, make sure that the
 Node.js version in the path is not altered.
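+
+For example, a quick sanity check before each run could look like this (a
+minimal sketch; the version shown is illustrative):
+
+```console
+$ node --version
+v8.0.0
+```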

-`wrk` may be available through your preferred package manager. If not, you can
-easily build it [from source][wrk] via `make`.
+`wrk` may be available through one of the common package managers. If not,
+it can be easily built [from source][wrk] via `make`.
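+
+A minimal sketch of a source build (assuming `git` and a C toolchain are
+available; the clone URL is the upstream [wrk][] repository):
+
+```console
+$ git clone https://github.com/wg/wrk.git
+$ cd wrk
+$ make
+```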

 By default, `wrk` will be used as the benchmarker. If it is not available,
-`autocannon` will be used in its place. When creating an HTTP benchmark, you can
-specify which benchmarker should be used by providing it as an argument:
+`autocannon` will be used in its place. When creating an HTTP benchmark, the
+benchmarker to be used can be specified by providing it as an argument:

 `node benchmark/run.js --set benchmarker=autocannon http`

 `node benchmark/http/simple.js benchmarker=autocannon`

 ### Benchmark Analysis Requirements

-To analyze the results, `R` should be installed. Use your package manager or
-download it from https://www.r-project.org/.
+To analyze the results, `R` should be installed. Use one of the available
+package managers or download it from https://www.r-project.org/.
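+
+For example, on Debian-based systems this might be (the package name may
+differ on other platforms):
+
+```console
+$ sudo apt-get install r-base
+```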

 The R packages `ggplot2` and `plyr` are also used and can be installed using
 the R REPL.
@@ -55,8 +55,8 @@ install.packages("ggplot2")
 install.packages("plyr")
 ```

-In the event you get a message that you need to select a CRAN mirror first, you
-can specify a mirror by adding in the repo parameter.
+If a message states that a CRAN mirror must be selected first, specify a
+mirror by adding the `repo` parameter.

 If we used the "http://cran.us.r-project.org" mirror, it could look something
 like this:
@@ -65,7 +65,7 @@ like this:
 install.packages("ggplot2", repo="http://cran.us.r-project.org")
 ```

-Of course, use the mirror that suits your location.
+Of course, use an appropriate mirror based on location.
 A list of mirrors is [located here](https://cran.r-project.org/mirrors.html).

 ## Running benchmarks
@@ -98,7 +98,7 @@ process. This ensures that benchmark results aren't affected by the execution
 order due to V8 optimizations. **The last number is the rate of operations
 measured in ops/sec (higher is better).**

-Furthermore you can specify a subset of the configurations, by setting them in
+Furthermore, a subset of the configurations can be specified by setting them in
 the process arguments:

 ```console
@@ -179,9 +179,9 @@ In the output, _improvement_ is the relative improvement of the new version,
 hopefully this is positive. _confidence_ tells if there is enough
 statistical evidence to validate the _improvement_. If there is enough evidence
 then there will be at least one star (`*`), more stars are just better. **However
-if there are no stars, then you shouldn't make any conclusions based on the
-_improvement_.** Sometimes this is fine, for example if you are expecting there
-to be no improvements, then there shouldn't be any stars.
+if there are no stars, then don't make any conclusions based on the
+_improvement_.** Sometimes this is fine, for example if no improvements are
+expected, then there shouldn't be any stars.

 **A word of caution:** Statistics is not a foolproof tool. If a benchmark shows
 a statistically significant difference, there is a 5% risk that this
@@ -198,9 +198,9 @@ same for both versions. The confidence field will show a star if the p-value
 is less than `0.05`._

 The `compare.R` tool can also produce a box plot by using the `--plot filename`
-option. In this case there are 48 different benchmark combinations, thus you
-may want to filter the csv file. This can be done while benchmarking using the
-`--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
+option. In this case there are 48 different benchmark combinations, so it may
+be necessary to filter the csv file. This can be done while benchmarking using
+the `--set` parameter (e.g. `--set encoding=ascii`) or by filtering results
 afterwards using tools such as `sed` or `grep`. In the `sed` case be sure to
 keep the first line since that contains the header information.

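+For example, filtering the results for a single encoding and then producing
+the plot could look like the following sketch, where `compare.csv` is a
+hypothetical results file and the `sed` invocation keeps the csv header line:
+
+```console
+$ sed -n -e '1p' -e '/encoding=ascii/p' compare.csv > compare-ascii.csv
+$ cat compare-ascii.csv | Rscript benchmark/compare.R --plot compare-plot.png
+```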
@@ -295,7 +295,7 @@ chunk encoding mean confidence.interval
 ### Basics of a benchmark

 All benchmarks use the `require('../common.js')` module. This contains the
-`createBenchmark(main, configs[, options])` method which will setup your
+`createBenchmark(main, configs[, options])` method which will set up the
 benchmark.

 The arguments of `createBenchmark` are:
@@ -312,20 +312,20 @@ The arguments of `createBenchmark` are:
 `createBenchmark` returns a `bench` object, which is used for timing
 the runtime of the benchmark. Run `bench.start()` after the initialization
 and `bench.end(n)` when the benchmark is done. `n` is the number of operations
-you performed in the benchmark.
+performed in the benchmark.
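+
+In its simplest form this pattern looks like the following sketch, where `n`
+would come from the benchmark configuration:
+
+```js
+bench.start();
+for (let i = 0; i < n; i++) {
+  // perform the operation being measured
+}
+bench.end(n);
+```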

 The benchmark script will be run twice:

 The first pass will configure the benchmark with the combination of
 parameters specified in `configs`, and WILL NOT run the `main` function.
 In this pass, no flags except the ones directly passed via commands
-that you run the benchmarks with will be used.
+when running the benchmarks will be used.

 In the second pass, the `main` function will be run, and the process
 will be launched with:

-* The flags you've passed into `createBenchmark` (the third argument)
-* The flags in the command that you run this benchmark with
+* The flags passed into `createBenchmark` (the third argument)
+* The flags in the command used to run the benchmark

 Beware that any code outside the `main` function will be run twice
 in different processes. This could be troublesome if the code
@@ -346,7 +346,7 @@ const configs = {
 };

 const options = {
-  // Add --expose-internals if you want to require internal modules in main
+  // Add --expose-internals in order to require internal modules in main
   flags: ['--zero-fill-buffers']
 };

@@ -357,9 +357,9 @@ const bench = common.createBenchmark(main, configs, options);
 // in different processes, with different command line arguments.

 function main(conf) {
-  // You will only get the flags that you have passed to createBenchmark
-  // earlier when main is run. If you want to benchmark the internal modules,
-  // require them here. For example:
+  // Only the flags that have been passed to createBenchmark earlier
+  // will be in effect when main is run.
+  // In order to benchmark the internal modules, require them here. For example:
   // const URL = require('internal/url').URL

   // Start the timer