
Conversation

@RKSimon RKSimon commented Sep 25, 2025

Since #159321 we now get actual warnings when we're missing coverage

@RKSimon RKSimon enabled auto-merge (squash) September 25, 2025 08:44
llvmbot commented Sep 25, 2025

@llvm/pr-subscribers-backend-x86

Author: Simon Pilgrim (RKSimon)

Changes

Since #159321 we now get actual warnings when we're missing coverage


Patch is 114.07 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/160662.diff

3 Files Affected:

  • (modified) llvm/test/CodeGen/X86/masked_store_trunc_ssat.ll (+1140-70)
  • (modified) llvm/test/CodeGen/X86/masked_store_trunc_usat.ll (+1143-68)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-usat.ll (+1-1)
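For context on the warnings this patch silences: since llvm#159321, FileCheck diagnoses check prefixes that are declared on a RUN line but never matched by any check line in the test body. The Python sketch below is a rough, hypothetical illustration of that coverage check — `unused_prefixes` is an invented helper for this example, not FileCheck's actual implementation:

```python
import re

def unused_prefixes(test_text: str) -> set:
    """Return check prefixes named on RUN lines that never appear
    before a check directive in the test body (rough sketch only,
    not FileCheck's real algorithm)."""
    # Prefixes declared via --check-prefix=FOO or --check-prefixes=FOO,BAR
    declared = set()
    for m in re.finditer(r"--check-prefix(?:es)?=([A-Za-z0-9,_-]+)", test_text):
        declared.update(m.group(1).split(","))
    # Prefixes actually used, e.g. "; AVX512-LABEL:", "; AVX512-NEXT:", "; AVX512:"
    used = {m.group(1)
            for m in re.finditer(r";\s*([A-Za-z0-9]+)(?:-[A-Z]+)*:", test_text)}
    return declared - used

sample = """
; RUN: llc < %s -mattr=avx512f | FileCheck %s --check-prefixes=AVX512,AVX512F
; AVX512-LABEL: foo:
; AVX512-NEXT: retq
"""
print(unused_prefixes(sample))  # → {'AVX512F'}
```

This is why the diff below shares a common `AVX512` prefix between the `avx512f` and `avx512bw` runs: once the bodies are identical, a shared prefix keeps every declared prefix exercised.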
diff --git a/llvm/test/CodeGen/X86/masked_store_trunc_ssat.ll b/llvm/test/CodeGen/X86/masked_store_trunc_ssat.ll
index c950ce64e8883..18d394e1281b4 100644
--- a/llvm/test/CodeGen/X86/masked_store_trunc_ssat.ll
+++ b/llvm/test/CodeGen/X86/masked_store_trunc_ssat.ll
@@ -1,11 +1,11 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=sse2 | FileCheck %s --check-prefix=SSE2
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=sse4.2 | FileCheck %s --check-prefix=SSE4
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=sse2 | FileCheck %s --check-prefixes=SSE2
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=sse4.2 | FileCheck %s --check-prefixes=SSE4
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx | FileCheck %s --check-prefixes=AVX,AVX1
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx2 | FileCheck %s --check-prefixes=AVX,AVX2
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512f | FileCheck %s --check-prefix=AVX512F
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512vl | FileCheck %s --check-prefix=AVX512VL
-; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512bw | FileCheck %s --check-prefix=AVX512BW
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512f | FileCheck %s --check-prefixes=AVX512,AVX512F
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512vl | FileCheck %s --check-prefixes=AVX512VL,AVX512FVL
+; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512bw | FileCheck %s --check-prefixes=AVX512,AVX512BW
 ; RUN: llc < %s -mtriple=x86_64-unknown-unknown -mattr=avx512vl,avx512bw | FileCheck %s --check-prefixes=AVX512VL,AVX512BWVL
 
 define void @truncstore_v8i64_v8i32(<8 x i64> %x, ptr %p, <8 x i32> %mask) {
@@ -340,15 +340,15 @@ define void @truncstore_v8i64_v8i32(<8 x i64> %x, ptr %p, <8 x i32> %mask) {
 ; AVX2-NEXT: vzeroupper
 ; AVX2-NEXT: retq
 ;
-; AVX512F-LABEL: truncstore_v8i64_v8i32:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: # kill: def $ymm1 killed $ymm1 def $zmm1
-; AVX512F-NEXT: vptestmd %zmm1, %zmm1, %k1
-; AVX512F-NEXT: vpminsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
-; AVX512F-NEXT: vpmaxsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
-; AVX512F-NEXT: vpmovqd %zmm0, (%rdi) {%k1}
-; AVX512F-NEXT: vzeroupper
-; AVX512F-NEXT: retq
+; AVX512-LABEL: truncstore_v8i64_v8i32:
+; AVX512: # %bb.0:
+; AVX512-NEXT: # kill: def $ymm1 killed $ymm1 def $zmm1
+; AVX512-NEXT: vptestmd %zmm1, %zmm1, %k1
+; AVX512-NEXT: vpminsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
+; AVX512-NEXT: vpmaxsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
+; AVX512-NEXT: vpmovqd %zmm0, (%rdi) {%k1}
+; AVX512-NEXT: vzeroupper
+; AVX512-NEXT: retq
 ;
 ; AVX512VL-LABEL: truncstore_v8i64_v8i32:
 ; AVX512VL: # %bb.0:
@@ -358,16 +358,6 @@ define void @truncstore_v8i64_v8i32(<8 x i64> %x, ptr %p, <8 x i32> %mask) {
 ; AVX512VL-NEXT: vpmovqd %zmm0, (%rdi) {%k1}
 ; AVX512VL-NEXT: vzeroupper
 ; AVX512VL-NEXT: retq
-;
-; AVX512BW-LABEL: truncstore_v8i64_v8i32:
-; AVX512BW: # %bb.0:
-; AVX512BW-NEXT: # kill: def $ymm1 killed $ymm1 def $zmm1
-; AVX512BW-NEXT: vptestmd %zmm1, %zmm1, %k1
-; AVX512BW-NEXT: vpminsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
-; AVX512BW-NEXT: vpmaxsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to8}, %zmm0, %zmm0
-; AVX512BW-NEXT: vpmovqd %zmm0, (%rdi) {%k1}
-; AVX512BW-NEXT: vzeroupper
-; AVX512BW-NEXT: retq
 %a = icmp ne <8 x i32> %mask, zeroinitializer
 %b = icmp slt <8 x i64> %x, <i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647>
 %c = select <8 x i1> %b, <8 x i64> %x, <8 x i64> <i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647>
@@ -897,6 +887,70 @@ define void @truncstore_v8i64_v8i16(<8 x i64> %x, ptr %p, <8 x i32> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v8i64_v8i16:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmd %ymm1, %ymm1, %k0
+; AVX512FVL-NEXT: vpmovsqw %zmm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB1_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB1_3
+; AVX512FVL-NEXT: .LBB1_4: # %else2
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: jne .LBB1_5
+; AVX512FVL-NEXT: .LBB1_6: # %else4
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: jne .LBB1_7
+; AVX512FVL-NEXT: .LBB1_8: # %else6
+; AVX512FVL-NEXT: testb $16, %al
+; AVX512FVL-NEXT: jne .LBB1_9
+; AVX512FVL-NEXT: .LBB1_10: # %else8
+; AVX512FVL-NEXT: testb $32, %al
+; AVX512FVL-NEXT: jne .LBB1_11
+; AVX512FVL-NEXT: .LBB1_12: # %else10
+; AVX512FVL-NEXT: testb $64, %al
+; AVX512FVL-NEXT: jne .LBB1_13
+; AVX512FVL-NEXT: .LBB1_14: # %else12
+; AVX512FVL-NEXT: testb $-128, %al
+; AVX512FVL-NEXT: jne .LBB1_15
+; AVX512FVL-NEXT: .LBB1_16: # %else14
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB1_1: # %cond.store
+; AVX512FVL-NEXT: vpextrw $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB1_4
+; AVX512FVL-NEXT: .LBB1_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrw $1, %xmm0, 2(%rdi)
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: je .LBB1_6
+; AVX512FVL-NEXT: .LBB1_5: # %cond.store3
+; AVX512FVL-NEXT: vpextrw $2, %xmm0, 4(%rdi)
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: je .LBB1_8
+; AVX512FVL-NEXT: .LBB1_7: # %cond.store5
+; AVX512FVL-NEXT: vpextrw $3, %xmm0, 6(%rdi)
+; AVX512FVL-NEXT: testb $16, %al
+; AVX512FVL-NEXT: je .LBB1_10
+; AVX512FVL-NEXT: .LBB1_9: # %cond.store7
+; AVX512FVL-NEXT: vpextrw $4, %xmm0, 8(%rdi)
+; AVX512FVL-NEXT: testb $32, %al
+; AVX512FVL-NEXT: je .LBB1_12
+; AVX512FVL-NEXT: .LBB1_11: # %cond.store9
+; AVX512FVL-NEXT: vpextrw $5, %xmm0, 10(%rdi)
+; AVX512FVL-NEXT: testb $64, %al
+; AVX512FVL-NEXT: je .LBB1_14
+; AVX512FVL-NEXT: .LBB1_13: # %cond.store11
+; AVX512FVL-NEXT: vpextrw $6, %xmm0, 12(%rdi)
+; AVX512FVL-NEXT: testb $-128, %al
+; AVX512FVL-NEXT: je .LBB1_16
+; AVX512FVL-NEXT: .LBB1_15: # %cond.store13
+; AVX512FVL-NEXT: vpextrw $7, %xmm0, 14(%rdi)
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v8i64_v8i16:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $ymm1 killed $ymm1 def $zmm1
@@ -1441,6 +1495,70 @@ define void @truncstore_v8i64_v8i8(<8 x i64> %x, ptr %p, <8 x i32> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v8i64_v8i8:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmd %ymm1, %ymm1, %k0
+; AVX512FVL-NEXT: vpmovsqb %zmm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB2_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB2_3
+; AVX512FVL-NEXT: .LBB2_4: # %else2
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: jne .LBB2_5
+; AVX512FVL-NEXT: .LBB2_6: # %else4
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: jne .LBB2_7
+; AVX512FVL-NEXT: .LBB2_8: # %else6
+; AVX512FVL-NEXT: testb $16, %al
+; AVX512FVL-NEXT: jne .LBB2_9
+; AVX512FVL-NEXT: .LBB2_10: # %else8
+; AVX512FVL-NEXT: testb $32, %al
+; AVX512FVL-NEXT: jne .LBB2_11
+; AVX512FVL-NEXT: .LBB2_12: # %else10
+; AVX512FVL-NEXT: testb $64, %al
+; AVX512FVL-NEXT: jne .LBB2_13
+; AVX512FVL-NEXT: .LBB2_14: # %else12
+; AVX512FVL-NEXT: testb $-128, %al
+; AVX512FVL-NEXT: jne .LBB2_15
+; AVX512FVL-NEXT: .LBB2_16: # %else14
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB2_1: # %cond.store
+; AVX512FVL-NEXT: vpextrb $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB2_4
+; AVX512FVL-NEXT: .LBB2_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrb $1, %xmm0, 1(%rdi)
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: je .LBB2_6
+; AVX512FVL-NEXT: .LBB2_5: # %cond.store3
+; AVX512FVL-NEXT: vpextrb $2, %xmm0, 2(%rdi)
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: je .LBB2_8
+; AVX512FVL-NEXT: .LBB2_7: # %cond.store5
+; AVX512FVL-NEXT: vpextrb $3, %xmm0, 3(%rdi)
+; AVX512FVL-NEXT: testb $16, %al
+; AVX512FVL-NEXT: je .LBB2_10
+; AVX512FVL-NEXT: .LBB2_9: # %cond.store7
+; AVX512FVL-NEXT: vpextrb $4, %xmm0, 4(%rdi)
+; AVX512FVL-NEXT: testb $32, %al
+; AVX512FVL-NEXT: je .LBB2_12
+; AVX512FVL-NEXT: .LBB2_11: # %cond.store9
+; AVX512FVL-NEXT: vpextrb $5, %xmm0, 5(%rdi)
+; AVX512FVL-NEXT: testb $64, %al
+; AVX512FVL-NEXT: je .LBB2_14
+; AVX512FVL-NEXT: .LBB2_13: # %cond.store11
+; AVX512FVL-NEXT: vpextrb $6, %xmm0, 6(%rdi)
+; AVX512FVL-NEXT: testb $-128, %al
+; AVX512FVL-NEXT: je .LBB2_16
+; AVX512FVL-NEXT: .LBB2_15: # %cond.store13
+; AVX512FVL-NEXT: vpextrb $7, %xmm0, 7(%rdi)
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v8i64_v8i8:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $ymm1 killed $ymm1 def $zmm1
@@ -1658,17 +1776,17 @@ define void @truncstore_v4i64_v4i32(<4 x i64> %x, ptr %p, <4 x i32> %mask) {
 ; AVX2-NEXT: vzeroupper
 ; AVX2-NEXT: retq
 ;
-; AVX512F-LABEL: truncstore_v4i64_v4i32:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
-; AVX512F-NEXT: # kill: def $ymm0 killed $ymm0 def $zmm0
-; AVX512F-NEXT: vptestmd %zmm1, %zmm1, %k0
-; AVX512F-NEXT: kshiftlw $12, %k0, %k0
-; AVX512F-NEXT: kshiftrw $12, %k0, %k1
-; AVX512F-NEXT: vpmovsqd %zmm0, %ymm0
-; AVX512F-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
-; AVX512F-NEXT: vzeroupper
-; AVX512F-NEXT: retq
+; AVX512-LABEL: truncstore_v4i64_v4i32:
+; AVX512: # %bb.0:
+; AVX512-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
+; AVX512-NEXT: # kill: def $ymm0 killed $ymm0 def $zmm0
+; AVX512-NEXT: vptestmd %zmm1, %zmm1, %k0
+; AVX512-NEXT: kshiftlw $12, %k0, %k0
+; AVX512-NEXT: kshiftrw $12, %k0, %k1
+; AVX512-NEXT: vpmovsqd %zmm0, %ymm0
+; AVX512-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
+; AVX512-NEXT: vzeroupper
+; AVX512-NEXT: retq
 ;
 ; AVX512VL-LABEL: truncstore_v4i64_v4i32:
 ; AVX512VL: # %bb.0:
@@ -1678,18 +1796,6 @@ define void @truncstore_v4i64_v4i32(<4 x i64> %x, ptr %p, <4 x i32> %mask) {
 ; AVX512VL-NEXT: vpmovqd %ymm0, (%rdi) {%k1}
 ; AVX512VL-NEXT: vzeroupper
 ; AVX512VL-NEXT: retq
-;
-; AVX512BW-LABEL: truncstore_v4i64_v4i32:
-; AVX512BW: # %bb.0:
-; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
-; AVX512BW-NEXT: # kill: def $ymm0 killed $ymm0 def $zmm0
-; AVX512BW-NEXT: vptestmd %zmm1, %zmm1, %k0
-; AVX512BW-NEXT: kshiftlw $12, %k0, %k0
-; AVX512BW-NEXT: kshiftrw $12, %k0, %k1
-; AVX512BW-NEXT: vpmovsqd %zmm0, %ymm0
-; AVX512BW-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
-; AVX512BW-NEXT: vzeroupper
-; AVX512BW-NEXT: retq
 %a = icmp ne <4 x i32> %mask, zeroinitializer
 %b = icmp slt <4 x i64> %x, <i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647>
 %c = select <4 x i1> %b, <4 x i64> %x, <4 x i64> <i64 2147483647, i64 2147483647, i64 2147483647, i64 2147483647>
@@ -1984,6 +2090,42 @@ define void @truncstore_v4i64_v4i16(<4 x i64> %x, ptr %p, <4 x i32> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v4i64_v4i16:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmd %xmm1, %xmm1, %k0
+; AVX512FVL-NEXT: vpmovsqw %ymm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB4_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB4_3
+; AVX512FVL-NEXT: .LBB4_4: # %else2
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: jne .LBB4_5
+; AVX512FVL-NEXT: .LBB4_6: # %else4
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: jne .LBB4_7
+; AVX512FVL-NEXT: .LBB4_8: # %else6
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB4_1: # %cond.store
+; AVX512FVL-NEXT: vpextrw $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB4_4
+; AVX512FVL-NEXT: .LBB4_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrw $1, %xmm0, 2(%rdi)
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: je .LBB4_6
+; AVX512FVL-NEXT: .LBB4_5: # %cond.store3
+; AVX512FVL-NEXT: vpextrw $2, %xmm0, 4(%rdi)
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: je .LBB4_8
+; AVX512FVL-NEXT: .LBB4_7: # %cond.store5
+; AVX512FVL-NEXT: vpextrw $3, %xmm0, 6(%rdi)
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v4i64_v4i16:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
@@ -2302,6 +2444,42 @@ define void @truncstore_v4i64_v4i8(<4 x i64> %x, ptr %p, <4 x i32> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v4i64_v4i8:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmd %xmm1, %xmm1, %k0
+; AVX512FVL-NEXT: vpmovsqb %ymm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB5_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB5_3
+; AVX512FVL-NEXT: .LBB5_4: # %else2
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: jne .LBB5_5
+; AVX512FVL-NEXT: .LBB5_6: # %else4
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: jne .LBB5_7
+; AVX512FVL-NEXT: .LBB5_8: # %else6
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB5_1: # %cond.store
+; AVX512FVL-NEXT: vpextrb $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB5_4
+; AVX512FVL-NEXT: .LBB5_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrb $1, %xmm0, 1(%rdi)
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: je .LBB5_6
+; AVX512FVL-NEXT: .LBB5_5: # %cond.store3
+; AVX512FVL-NEXT: vpextrb $2, %xmm0, 2(%rdi)
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: je .LBB5_8
+; AVX512FVL-NEXT: .LBB5_7: # %cond.store5
+; AVX512FVL-NEXT: vpextrb $3, %xmm0, 3(%rdi)
+; AVX512FVL-NEXT: vzeroupper
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v4i64_v4i8:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
@@ -2451,17 +2629,17 @@ define void @truncstore_v2i64_v2i32(<2 x i64> %x, ptr %p, <2 x i64> %mask) {
 ; AVX2-NEXT: vpmaskmovd %xmm0, %xmm1, (%rdi)
 ; AVX2-NEXT: retq
 ;
-; AVX512F-LABEL: truncstore_v2i64_v2i32:
-; AVX512F: # %bb.0:
-; AVX512F-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
-; AVX512F-NEXT: # kill: def $xmm0 killed $xmm0 def $zmm0
-; AVX512F-NEXT: vptestmq %zmm1, %zmm1, %k0
-; AVX512F-NEXT: kshiftlw $14, %k0, %k0
-; AVX512F-NEXT: kshiftrw $14, %k0, %k1
-; AVX512F-NEXT: vpmovsqd %zmm0, %ymm0
-; AVX512F-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
-; AVX512F-NEXT: vzeroupper
-; AVX512F-NEXT: retq
+; AVX512-LABEL: truncstore_v2i64_v2i32:
+; AVX512: # %bb.0:
+; AVX512-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
+; AVX512-NEXT: # kill: def $xmm0 killed $xmm0 def $zmm0
+; AVX512-NEXT: vptestmq %zmm1, %zmm1, %k0
+; AVX512-NEXT: kshiftlw $14, %k0, %k0
+; AVX512-NEXT: kshiftrw $14, %k0, %k1
+; AVX512-NEXT: vpmovsqd %zmm0, %ymm0
+; AVX512-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
+; AVX512-NEXT: vzeroupper
+; AVX512-NEXT: retq
 ;
 ; AVX512VL-LABEL: truncstore_v2i64_v2i32:
 ; AVX512VL: # %bb.0:
@@ -2470,18 +2648,6 @@ define void @truncstore_v2i64_v2i32(<2 x i64> %x, ptr %p, <2 x i64> %mask) {
 ; AVX512VL-NEXT: vpmaxsq {{\.?LCPI[0-9]+_[0-9]+}}(%rip){1to2}, %xmm0, %xmm0
 ; AVX512VL-NEXT: vpmovqd %xmm0, (%rdi) {%k1}
 ; AVX512VL-NEXT: retq
-;
-; AVX512BW-LABEL: truncstore_v2i64_v2i32:
-; AVX512BW: # %bb.0:
-; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
-; AVX512BW-NEXT: # kill: def $xmm0 killed $xmm0 def $zmm0
-; AVX512BW-NEXT: vptestmq %zmm1, %zmm1, %k0
-; AVX512BW-NEXT: kshiftlw $14, %k0, %k0
-; AVX512BW-NEXT: kshiftrw $14, %k0, %k1
-; AVX512BW-NEXT: vpmovsqd %zmm0, %ymm0
-; AVX512BW-NEXT: vmovdqu32 %zmm0, (%rdi) {%k1}
-; AVX512BW-NEXT: vzeroupper
-; AVX512BW-NEXT: retq
 %a = icmp ne <2 x i64> %mask, zeroinitializer
 %b = icmp slt <2 x i64> %x, <i64 2147483647, i64 2147483647>
 %c = select <2 x i1> %b, <2 x i64> %x, <2 x i64> <i64 2147483647, i64 2147483647>
@@ -2631,6 +2797,26 @@ define void @truncstore_v2i64_v2i16(<2 x i64> %x, ptr %p, <2 x i64> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v2i64_v2i16:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmq %xmm1, %xmm1, %k0
+; AVX512FVL-NEXT: vpmovsqw %xmm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB7_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB7_3
+; AVX512FVL-NEXT: .LBB7_4: # %else2
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB7_1: # %cond.store
+; AVX512FVL-NEXT: vpextrw $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB7_4
+; AVX512FVL-NEXT: .LBB7_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrw $1, %xmm0, 2(%rdi)
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v2i64_v2i16:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
@@ -2797,6 +2983,26 @@ define void @truncstore_v2i64_v2i8(<2 x i64> %x, ptr %p, <2 x i64> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v2i64_v2i8:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmq %xmm1, %xmm1, %k0
+; AVX512FVL-NEXT: vpmovsqb %xmm0, %xmm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB8_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB8_3
+; AVX512FVL-NEXT: .LBB8_4: # %else2
+; AVX512FVL-NEXT: retq
+; AVX512FVL-NEXT: .LBB8_1: # %cond.store
+; AVX512FVL-NEXT: vpextrb $0, %xmm0, (%rdi)
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: je .LBB8_4
+; AVX512FVL-NEXT: .LBB8_3: # %cond.store1
+; AVX512FVL-NEXT: vpextrb $1, %xmm0, 1(%rdi)
+; AVX512FVL-NEXT: retq
+;
 ; AVX512BW-LABEL: truncstore_v2i64_v2i8:
 ; AVX512BW: # %bb.0:
 ; AVX512BW-NEXT: # kill: def $xmm1 killed $xmm1 def $zmm1
@@ -3478,6 +3684,126 @@ define void @truncstore_v16i32_v16i16(<16 x i32> %x, ptr %p, <16 x i32> %mask) {
 ; AVX512F-NEXT: vzeroupper
 ; AVX512F-NEXT: retq
 ;
+; AVX512FVL-LABEL: truncstore_v16i32_v16i16:
+; AVX512FVL: # %bb.0:
+; AVX512FVL-NEXT: vptestmd %zmm1, %zmm1, %k0
+; AVX512FVL-NEXT: vpmovsdw %zmm0, %ymm0
+; AVX512FVL-NEXT: kmovw %k0, %eax
+; AVX512FVL-NEXT: testb $1, %al
+; AVX512FVL-NEXT: jne .LBB9_1
+; AVX512FVL-NEXT: # %bb.2: # %else
+; AVX512FVL-NEXT: testb $2, %al
+; AVX512FVL-NEXT: jne .LBB9_3
+; AVX512FVL-NEXT: .LBB9_4: # %else2
+; AVX512FVL-NEXT: testb $4, %al
+; AVX512FVL-NEXT: jne .LBB9_5
+; AVX512FVL-NEXT: .LBB9_6: # %else4
+; AVX512FVL-NEXT: testb $8, %al
+; AVX512FVL-NEXT: jne .LBB9_7
+; AVX512FVL-NEXT: .LBB9_8: # %else6
+; AVX512FVL-NEXT: testb $16, %al
+; AVX512FVL-NEXT: jne .LBB9_9
+; AVX512FVL-NEXT: .LBB9_10: # %else8
+; AVX512FVL-NEXT: testb $32, %al
+; AVX512FVL-NEXT: jne .LBB9_11
+; AVX512FVL-NEXT: .LBB9_12: # %else10
+; AVX512FVL-NEXT: testb $64, %al
+; AVX512FVL-NEXT: jne .LBB9_13
+; AVX512FVL-NEXT: .LBB9_14: # %else12
+; AVX512FVL-NEXT: testb %al, %al
+; AVX512FVL-NEXT: jns .LBB9_16
+; AVX512FVL-NEXT: .LBB9_15: # %cond.store13
+; AVX512FVL-NEXT: vpextrw $7, %xmm0, 14(%rdi)
+; AVX512FVL-NEXT: .LBB9_16: # %else14
+; AVX512FVL-NEXT: testl $256, %eax # imm = 0x100
+; AVX512FVL-NEXT: vextracti128 $1, %ymm0, %xmm0
+; AVX512FVL-NEXT: jne .LBB9_17
+; # %bb.1... [truncated]
@RKSimon RKSimon merged commit 28a8dfb into llvm:main Sep 25, 2025
11 checks passed
@RKSimon RKSimon deleted the x86-trunc-tests-missing-prefixes branch September 25, 2025 09:46
ckoparkar added a commit to ckoparkar/llvm-project that referenced this pull request Sep 25, 2025
* main: (502 commits)
  GlobalISel: Adjust insert point when expanding G_[SU]DIVREM (llvm#160683)
  [LV] Add coverage for fixing-up scalar resume values (llvm#160492)
  AMDGPU: Convert wave_any test to use update_mc_test_checks
  [LV] Add partial reduction tests multiplying extend with constants.
  Revert "[MLIR] Implement remark emitting policies in MLIR" (llvm#160681)
  [NFC][InstSimplify] Refactor fminmax-folds.ll test (llvm#160504)
  [LoongArch] Pre-commit tests for [x]vldi instructions with special constant splats (llvm#159228)
  [BOLT] Fix dwarf5-dwoid-no-dwoname.s (llvm#160676)
  [lldb][test] Refactor and expand TestMemoryRegionDirtyPages.py (llvm#156035)
  [gn build] Port 833d5f0
  AMDGPU: Ensure both wavesize features are not set (llvm#159234)
  [LoopInterchange] Bail out when finding a dependency with all `*` elements (llvm#149049)
  [libc++] Avoid constructing additional objects when using map::at (llvm#157866)
  [lldb][test] Make hex prefix optional in DWARF union types test
  [X86] Add missing prefixes to trunc-sat tests (llvm#160662)
  [AMDGPU] Fix vector legalization for bf16 valu ops (llvm#158439)
  [LoongArch][NFC] Pre-commit tests for `[x]vadda.{b/h/w/d}`
  [mlir][tosa] Relax constraint on matmul verifier requiring equal operand types (llvm#155799)
  [clang][Sema] Accept gnu format attributes (llvm#160255)
  [LoongArch][NFC] Add tests for element extraction from binary add operation (llvm#159725)
  ...
mahesh-attarde pushed a commit to mahesh-attarde/llvm-project that referenced this pull request Oct 3, 2025
Since llvm#159321 we now get actual warnings when we're missing coverage
