Transduction benchmarks

01 transform: increment integer
03 transforms: Transduce with `map`+`filter`+`remove`, 1X
06 transforms: Transduce with `map`+`filter`+`remove`, 2X
12 transforms: Transduce with `map`+`filter`+`remove`, 4X

How do Brokvolli's transduce variants perform compared to Clojure's reduce, et al.?

Observations: The single-threaded variants perform similarly, while the multi-threaded variants are faster for large vectors and heavier computations.

Two of Brokvolli's goals are

  1. Bring transduce's performance benefits to reduce-kv-style processing.
  2. Extend fold's multi-threading performance benefits to both transduce and transduce-kv.
So let's run some benchmarks to see if we can objectively measure any performance improvements.
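For orientation, the two clojure.core baselines that these goals refer to can be sketched with core functions alone (no Brokvolli API is shown here):

```clojure
;; clojure.core baselines referenced by the goals above.
;; transduce fuses a transform stack into a single reduction pass:
(transduce (map inc) + 0.0 [1.0 2.0 3.0])
;; => 9.0

;; reduce-kv threads the index alongside each element of a vector,
;; but accepts no transducer -- the gap that goal 1 targets:
(reduce-kv (fn [acc _i x] (+ acc x)) 0.0 [1.0 2.0 3.0])
;; => 6.0
```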

We'll define our benchmarks here. For each function, we'll test vectors increasing in length from one element to one‑million elements, by powers of ten. We'll arrange four different tests:

  1. Increment each element, where each element is a pre-generated random floating-point number.
  2. Increment, filter, and remove, 1× (three total operations).
  3. Increment, filter, and remove, 2× (six total operations).
  4. Increment, filter, and remove, 4× (twelve total operations).
For each test, we'll use the Criterium benchmarking library to measure the execution times of sixty repetitions of each condition. Benchmarks were run on three explicitly-pinned cores of my rusty desktop computer.
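The exact `vecs` fixture isn't shown in this section; a hypothetical reconstruction consistent with the setup above (lengths 10^0 through 10^6, pre-generated random doubles) might look like:

```clojure
;; Hypothetical reconstruction of the `vecs` fixture: a map from
;; length n to a pre-generated vector of n random doubles.
(def vecs
  (into {}
        (for [p (range 7)                      ; 10^0 .. 10^6
              :let [n (long (Math/pow 10 p))]]
          [n (vec (repeatedly n rand))])))

(count (vecs 1000))
;; => 1000
```

Pre-generating the vectors keeps random-number generation out of the measured code paths.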

The functions under examination are `reduce`, `reduce-kv`, `transduce`, `transduce-kv`, `fold`, `multi/transduce`, and `multi/transduce-kv`.

Overall, we observe that execution times increase with vector length. The results are indistinguishable when the vector contains one‑hundred or fewer elements. When the vectors grow longer and the transform stack is shallow (i.e., one or three transforms), the single-threaded functions tend to perform slightly faster. However, when the vector lengths approach one‑million elements and the stack involves more transforms, the multi-threaded variants (fold, multi/transduce, and multi/transduce-kv) offer improvements that scale roughly with the number of processors.

Brokvolli's multi-threaded transduce and transduce-kv may offer performance benefits over their single-threaded counterparts for large collection sizes and deep transformer stacks. For smaller collections or shallow transformer stacks, the single-threaded variants perform better, and present a simpler interface.

01 transform: increment integer

This test performs one operation per element: incrementing the number. Since we're only doing one operation, we can straightforwardly compare to reduce and reduce-kv as well.
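The benchmark's actual `tactics-*` functions aren't shown here, but the single-transform condition can be expressed three ways over the same input:

```clojure
;; The one-operation condition, written against each baseline.
(def v [0.25 0.5 0.75])

;; reduce: increment inside the reducing function
(reduce (fn [acc x] (conj acc (inc x))) [] v)
;; => [1.25 1.5 1.75]

;; reduce-kv: same work, with the index threaded through (unused here)
(reduce-kv (fn [acc _i x] (conj acc (inc x))) [] v)

;; transduce: the increment lives in the transform stack instead
(transduce (map inc) conj [] v)
```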

Execution times increase with vector length, with the single-threaded functions performing best for all vector lengths. With only one operation per element, the overhead of multi-threading machinery results in slower times.

The lines aren't perfectly straight because Clojure requires some non-zero time to reduce over an empty or nearly-empty vector. On this machine, that non-zero time is about 120 microseconds.

(fn [n] ((tactics-1 (project-version-lein)) (vecs n)))

Benchmark measurements for expression `(fn [n] ((tactics-1 (project-version-lein)) (vecs n)))`, execution time versus `n`, comparing different versions.

03 transforms: Transduce with `map`+`filter`+`remove`, 1X

This test involves three operations per element: incrementing the number, filtering (retaining) if the result is less than a specified cutoff, then removing if the result is greater than a different specified cutoff.
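One increment/filter/remove cycle composes into a single transducer; the cutoff values below are illustrative, not the benchmark's actual cutoffs:

```clojure
;; One three-operation cycle as a composed transducer.
;; Cutoffs (4 and 2) are illustrative only.
(def xf-3
  (comp (map inc)            ; 1. increment
        (filter #(< % 4))    ; 2. retain results below one cutoff
        (remove #(> % 2))))  ; 3. drop results above a different cutoff

(into [] xf-3 [0 1 2 3 4])
;; => [1 2]
```

Because the three steps are fused, each element flows through the whole stack in one pass with no intermediate sequences.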

With a transform stack of this size, all functions performed roughly similarly.

(fn [n] ((tactics-2 (project-version-lein)) (vecs n)))

Benchmark measurements for expression `(fn [n] ((tactics-2 (project-version-lein)) (vecs n)))`, execution time versus `n`, comparing different versions.

06 transforms: Transduce with `map`+`filter`+`remove`, 2X

This test is similar to the previous, except the three operations were performed twice (with different filter/remove cutoffs), for a total of six operations per element. With this deeper transform stack, the multi-threaded variants begin to offer faster performance (i.e., lower time) on vectors longer than one thousand elements.
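Parameterizing one cycle by its cutoffs makes the deeper stacks easy to build by composition (cutoff values again illustrative); note that composed transducers apply left-to-right:

```clojure
;; One increment/filter/remove cycle, parameterized by its cutoffs.
(defn cycle-xf [lo hi]
  (comp (map inc)
        (filter #(< % hi))
        (remove #(> % lo))))

;; Two cycles with different cutoffs: the six-operation stack.
(def xf-6 (comp (cycle-xf 2 4) (cycle-xf 3 5)))

(into [] xf-6 [0 1 2 3 4])
;; => [2 3]
```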

(fn [n] ((tactics-3 (project-version-lein)) (vecs n)))

Benchmark measurements for expression `(fn [n] ((tactics-3 (project-version-lein)) (vecs n)))`, execution time versus `n`, comparing different versions.

12 transforms: Transduce with `map`+`filter`+`remove`, 4X

Analogous to the previous examples, this test performs twelve operations per element (four cycles of increment, filter, and remove). The multi-threaded variants (fold, multi/transduce, and multi/transduce-kv) demonstrate noticeably improved performance (i.e., lower time) for vectors containing more than one thousand elements.

Note the log-log scale, which may visually obscure the performance deltas. Click 'Show details' to see a numeric chart of the measurements. For example, the multi-threaded variants handle the one-million element vectors about twice as fast as the single-threaded variants.
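Of the multi-threaded variants, `fold` comes from clojure.core.reducers: it splits a vector into chunks, reduces the chunks in parallel on a fork/join pool, and combines the partial results. A minimal usage sketch:

```clojure
(require '[clojure.core.reducers :as r])

;; r/fold reduces chunks of the vector in parallel, then combines the
;; partial results. With a single function argument, + serves as both
;; the reducing and the combining function; r/map supplies the transform.
(r/fold + (r/map inc (vec (range 1000))))
;; => 500500
```

This chunk-and-combine strategy is why the parallel payoff only appears once the per-chunk work outweighs the coordination overhead.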

(fn [n] ((tactics-4 (project-version-lein)) (vecs n)))

Benchmark measurements for expression `(fn [n] ((tactics-4 (project-version-lein)) (vecs n)))`, execution time versus `n`, comparing different versions.