One of the most exciting features of Java 16 is the Vector API (JEP 338), which makes it possible to take advantage of the SIMD instructions available on your CPU and thereby significantly improve performance.
When reading an example from the JEP documentation, I was somewhat shocked to see that a simple scalar computation
```java
void scalarComputation(float[] a, float[] b, float[] c) {
    for (int i = 0; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
    }
}
```
has to be rewritten as the hardly readable
```java
static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

void vectorComputation(float[] a, float[] b, float[] c, VectorSpecies<Float> species) {
    int i = 0;
    int upperBound = species.loopBound(a.length);
    for (; i < upperBound; i += species.length()) {
        //FloatVector va, vb, vc;
        var va = FloatVector.fromArray(species, a, i);
        var vb = FloatVector.fromArray(species, b, i);
        var vc = va.mul(va)
                   .add(vb.mul(vb))
                   .neg();
        vc.intoArray(c, i);
    }
    for (; i < a.length; i++) {
        c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
    }
}

vectorComputation(a, b, c, SPECIES);
```
to get the desired vectorized assembly:
```asm
  0.43%    / │  0x0000000113d43890: vmovdqu 0x10(%r8,%rbx,4),%ymm0
  7.38%    │ │  0x0000000113d43897: vmovdqu 0x10(%r10,%rbx,4),%ymm1
  8.70%    │ │  0x0000000113d4389e: vmulps  %ymm0,%ymm0,%ymm0
  5.60%    │ │  0x0000000113d438a2: vmulps  %ymm1,%ymm1,%ymm1
 13.16%    │ │  0x0000000113d438a6: vaddps  %ymm0,%ymm1,%ymm0
 21.86%    │ │  0x0000000113d438aa: vxorps  -0x7ad76b2(%rip),%ymm0,%ymm0
  7.66%    │ │  0x0000000113d438b2: vmovdqu %ymm0,0x10(%r9,%rbx,4)
 26.20%    │ │  0x0000000113d438b9: add     $0x8,%ebx
  6.44%    │ │  0x0000000113d438bc: cmp     %r11d,%ebx
           \ │  0x0000000113d438bf: jl      0x0000000113d43890
```
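One practical detail the snippet glosses over: in Java 16 the Vector API ships as an incubator module, so the code needs the jdk.incubator.vector imports and the module has to be added explicitly at compile and run time. Below is a minimal, self-contained sketch of how one might wire it up; the class name, array size, and warm-up loop are my own assumptions, not part of the JEP.

```java
// A minimal sketch, not the JEP's code: class name, array size, and the
// warm-up loop are assumptions made for illustration.
//
// Compile and run with the incubator module enabled:
//   javac --add-modules jdk.incubator.vector VectorDemo.java
//   java  --add-modules jdk.incubator.vector VectorDemo
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorDemo {

    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void vectorComputation(float[] a, float[] b, float[] c, VectorSpecies<Float> species) {
        int i = 0;
        int upperBound = species.loopBound(a.length);
        for (; i < upperBound; i += species.length()) {
            var va = FloatVector.fromArray(species, a, i);
            var vb = FloatVector.fromArray(species, b, i);
            var vc = va.mul(va).add(vb.mul(vb)).neg();
            vc.intoArray(c, i);
        }
        // scalar tail for the leftover elements
        for (; i < a.length; i++) {
            c[i] = (a[i] * a[i] + b[i] * b[i]) * -1.0f;
        }
    }

    public static void main(String[] args) {
        float[] a = new float[1 << 20];
        float[] b = new float[1 << 20];
        float[] c = new float[1 << 20];
        for (int i = 0; i < a.length; i++) {
            a[i] = i * 0.5f;
            b[i] = i * 0.25f;
        }
        // Run the hot loop often enough for the JIT to compile and vectorize it.
        for (int iter = 0; iter < 10_000; iter++) {
            vectorComputation(a, b, c, SPECIES);
        }
        System.out.println(c[42]);
    }
}
```

To actually see the generated machine code you would still need something like JMH's perfasm profiler or -XX:+PrintAssembly with the hsdis library, which is presumably where a listing like the one above comes from.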
"Phew, great that I don't use Java", thought I and went on to see what would Go do in such case. To my big disappointment, Go does not seem to support SIMD intrinsics and generates non-vectorized assembly :(
Convinced that Clang would not disappoint me, I checked the assembly of the equivalent C loop in Compiler Explorer with the highest optimization level and noticed that even though it does a lot of useful optimizations, including loop unrolling, it still uses only 128-bit XMM registers:
```asm
...
.LBB0_6:                                # =>This Inner Loop Header: Depth=1
        movups  xmm1, xmmword ptr [rsi + 4*rax]
        movups  xmm2, xmmword ptr [rsi + 4*rax + 16]
        mulps   xmm1, xmm1
        mulps   xmm2, xmm2
        movups  xmm3, xmmword ptr [rdx + 4*rax]
        movups  xmm4, xmmword ptr [rdx + 4*rax + 16]
        mulps   xmm3, xmm3
        addps   xmm3, xmm1
        mulps   xmm4, xmm4
        addps   xmm4, xmm2
        xorps   xmm3, xmm0
        xorps   xmm4, xmm0
        movups  xmmword ptr [rcx + 4*rax], xmm3
        movups  xmmword ptr [rcx + 4*rax + 16], xmm4
        add     rax, 8
        cmp     rdi, rax
        jne     .LBB0_6
        cmp     rdi, r8
        je      .LBB0_13
...
```
but easily switches to 512-bit ZMM registers when AVX-512 Foundation support is requested via the -mavx512f flag:
```asm
...
.LBB0_8:                                # =>This Inner Loop Header: Depth=1
        vmovups zmm1, zmmword ptr [rsi + 4*rdi]
        vmovups zmm2, zmmword ptr [rsi + 4*rdi + 64]
        vmovups zmm3, zmmword ptr [rsi + 4*rdi + 128]
        vmovups zmm4, zmmword ptr [rsi + 4*rdi + 192]
        vmulps  zmm1, zmm1, zmm1
        vmulps  zmm2, zmm2, zmm2
        vmulps  zmm3, zmm3, zmm3
        vmulps  zmm4, zmm4, zmm4
        vmovups zmm5, zmmword ptr [rdx + 4*rdi]
        vmovups zmm6, zmmword ptr [rdx + 4*rdi + 64]
        vmovups zmm7, zmmword ptr [rdx + 4*rdi + 128]
        vmovups zmm8, zmmword ptr [rdx + 4*rdi + 192]
        vmulps  zmm5, zmm5, zmm5
        vaddps  zmm1, zmm1, zmm5
        vmulps  zmm5, zmm6, zmm6
        vaddps  zmm2, zmm2, zmm5
        vmulps  zmm5, zmm7, zmm7
        vaddps  zmm3, zmm3, zmm5
        vmulps  zmm5, zmm8, zmm8
        vaddps  zmm4, zmm4, zmm5
        vpxord  zmm1, zmm1, zmm0
        vpxord  zmm2, zmm2, zmm0
        vpxord  zmm3, zmm3, zmm0
        vpxord  zmm4, zmm4, zmm0
        vmovdqu64 zmmword ptr [rcx + 4*rdi], zmm1
        vmovdqu64 zmmword ptr [rcx + 4*rdi + 64], zmm2
        vmovdqu64 zmmword ptr [rcx + 4*rdi + 128], zmm3
        vmovdqu64 zmmword ptr [rcx + 4*rdi + 192], zmm4
        add     rdi, 64
        cmp     rax, rdi
        jne     .LBB0_8
        cmp     rax, r8
        je      .LBB0_19
        test    r8b, 56
        je      .LBB0_14
...
```
AVX-512 support was first introduced by Intel with the Xeon Phi Knights Landing processors in 2016 and reached mainstream server CPUs with Skylake-SP in 2017, so it's quite likely that in 2021 your servers have it.
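If you're curious whether your JVM will actually use 512-bit vectors, the preferred species reports its lane count and bit width at runtime. A quick sketch (again an incubator module in Java 16, so the same --add-modules jdk.incubator.vector flag applies):

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class SpeciesInfo {
    public static void main(String[] args) {
        VectorSpecies<Float> species = FloatVector.SPECIES_PREFERRED;
        // Typically prints "8 lanes, 256 bits" on an AVX2 machine
        // and "16 lanes, 512 bits" where the JVM uses AVX-512.
        System.out.println(species.length() + " lanes, " + species.vectorBitSize() + " bits");
    }
}
```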
Moral of the story?
Don't leave performance on the table - know your hardware and how to take full advantage of it.