This repository was archived by the owner on Dec 3, 2022. It is now read-only.

Conversation

@jridgewell
Contributor

@jridgewell jridgewell commented Jul 3, 2019

This gives a ~10x speedup. This makes sense because you're creating a duplicate array (the typed array), which has to allocate, iterate the segment, and clamp each value into the typed array. For a 4-pack segment, that's an extra 32 bytes of memory allocation (plus the typed array's overhead).
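To illustrate the difference being discussed (a hedged sketch, not the library's actual code; the function names are illustrative), compare returning the segment as a plain array versus copying it into a freshly allocated `Int32Array`:

```javascript
// Illustrative sketch of the two approaches compared in this PR.
// Names are hypothetical; the real decode logic lives in the library.

function decodeToArray(segment) {
  // Proposal: push values straight into a regular array.
  const out = [];
  for (let i = 0; i < segment.length; i++) out.push(segment[i]);
  return out;
}

function decodeToTypedArray(segment) {
  // Current: allocate a fresh Int32Array per segment, then copy
  // (and clamp) each value into it -- an extra allocation plus the
  // typed array's own object overhead, for every segment decoded.
  const out = new Int32Array(segment.length);
  for (let i = 0; i < segment.length; i++) out[i] = segment[i];
  return out;
}
```

Both produce the same values; the typed-array path just pays for an extra allocation and copy on every segment.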

Browser: https://jsbench.github.io/#1c14a94a45bf9675ce4612026187c256

Or, if you want to test on multiple node versions, try copying https://gist.github.com/jridgewell/0e89c5ea9428071cc4f9486c9ca82761 into the root directory.

```bash
$ npx node@12 benchmark.js
v12.5.0
decode (current) x 2,863 ops/sec ±5.96% (62 runs sampled)
decode (proposal) x 8,514 ops/sec ±4.78% (65 runs sampled)
Fastest is decode (proposal)

$ npx node@10 benchmark.js
v10.16.0
decode (current) x 706 ops/sec ±8.63% (62 runs sampled)
decode (proposal) x 8,345 ops/sec ±6.41% (53 runs sampled)
Fastest is decode (proposal)

$ npx node@8 benchmark.js
v8.16.0
decode (current) x 2,440 ops/sec ±7.05% (64 runs sampled)
decode (proposal) x 7,037 ops/sec ±3.70% (55 runs sampled)
Fastest is decode (proposal)

$ npx node@6 benchmark.js
v6.17.1
decode (current) x 588 ops/sec ±8.61% (51 runs sampled)
decode (proposal) x 5,848 ops/sec ±7.55% (56 runs sampled)
Fastest is decode (proposal)
```
@Rich-Harris
Owner

Huh! This makes sense, and yet... the reason they're typed arrays in the first place is that they apparently gave a meaningful speed bump: #74 (they were later changed from Int8Array to Int32Array because of predictable overflow issues)

Any ideas what could explain the original result?

@jridgewell
Contributor Author

jridgewell commented Jul 4, 2019

Without seeing the benchmark setup, I have no idea.

@Rich-Harris Rich-Harris merged commit 312d481 into Rich-Harris:master Jul 4, 2019
@jridgewell jridgewell deleted the regular-arrays branch July 4, 2019 00:24
@Rich-Harris
Owner

Ha, fair enough. Can't fault the logic. Will cut a release as soon as the Amtrak Wifi lets me. Thanks!

@jridgewell jridgewell mentioned this pull request Jul 9, 2019