[AArch64][GlobalISel] SIMD fpcvt codegen for rounding nodes #165546
Open
Lukacma wants to merge 2 commits into llvm:main from Lukacma:cvt-round
Conversation
@llvm/pr-subscribers-backend-aarch64 @llvm/pr-subscribers-llvm-globalisel

Author: None (Lukacma)

Changes

This is a follow-up patch to #157680, which allows SIMD fpcvt instructions to be generated from l/llround and l/llrint nodes.

Patch is 23.23 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/165546.diff

6 Files Affected:
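Before the file-by-file diff, a minimal illustration of the codegen this enables (a sketch distilled from the new tests in the patch; the function name is illustrative, not part of the change): when an lround/llround result is immediately bitcast back to a floating-point type, the FPRCVT patterns below let it be selected to a single FCVTAS that writes the SIMD&FP register file directly.

define double @lround_to_fpr(float %x) {
; With +fprcvt this can now select a single "fcvtas d0, s0"; without
; these patterns the i64 result would be produced in a GPR (e.g.
; "fcvtas x8, s0") and moved back into the FP/SIMD file with an fmov.
  %v = call i64 @llvm.lround.i64.f32(float %x)
  %bc = bitcast i64 %v to double
  ret double %bc
}
declare i64 @llvm.lround.i64.f32(float)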
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index b9e299ef37454..f765c6e037176 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -6799,6 +6799,79 @@ defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, ftrunc, "F
 defm : FPToIntegerPats<fp_to_sint, fp_to_sint_sat, fp_to_sint_sat_gi, fround, "FCVTAS">;
 defm : FPToIntegerPats<fp_to_uint, fp_to_uint_sat, fp_to_uint_sat_gi, fround, "FCVTAU">;
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lround f32:$Rn)),
+          (FCVTASDSr f32:$Rn)>;
+def : Pat<(i64 (any_llround f32:$Rn)),
+          (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(i64 (any_lround f64:$Rn)),
+          (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(i64 (any_llround f64:$Rn)),
+          (FCVTASv1i64 f64:$Rn)>;
+
+let Predicates = [HasFPRCVT] in {
+  def : Pat<(f32 (bitconvert (i32 (any_lround f16:$Rn)))),
+            (FCVTASSHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_lround f16:$Rn)))),
+            (FCVTASDHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_llround f16:$Rn)))),
+            (FCVTASDHr f16:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_lround f32:$Rn)))),
+            (FCVTASDSr f32:$Rn)>;
+  def : Pat<(f32 (bitconvert (i32 (any_lround f64:$Rn)))),
+            (FCVTASSDr f64:$Rn)>;
+  def : Pat<(f64 (bitconvert (i64 (any_llround f32:$Rn)))),
+            (FCVTASDSr f32:$Rn)>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lround f32:$Rn)))),
+          (FCVTASv1i32 f32:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_lround f64:$Rn)))),
+          (FCVTASv1i64 f64:$Rn)>;
+def : Pat<(f64 (bitconvert (i64 (any_llround f64:$Rn)))),
+          (FCVTASv1i64 f64:$Rn)>;
+
+// For global-isel we can use register classes to determine
+// which FCVT instruction to use.
+let Predicates = [HasFPRCVT] in {
+def : Pat<(i64 (any_lrint f16:$Rn)),
+          (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_llrint f16:$Rn)),
+          (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+def : Pat<(i64 (any_lrint f32:$Rn)),
+          (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+def : Pat<(i64 (any_llrint f32:$Rn)),
+          (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(i64 (any_lrint f64:$Rn)),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(i64 (any_llrint f64:$Rn)),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+let Predicates = [HasFPRCVT] in {
+  def : Pat<(f32 (bitconvert (i32 (any_lrint f16:$Rn)))),
+            (FCVTZSSHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_lrint f16:$Rn)))),
+            (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_llrint f16:$Rn)))),
+            (FCVTZSDHr (FRINTXHr f16:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_lrint f32:$Rn)))),
+            (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+  def : Pat<(f32 (bitconvert (i32 (any_lrint f64:$Rn)))),
+            (FCVTZSSDr (FRINTXDr f64:$Rn))>;
+  def : Pat<(f64 (bitconvert (i64 (any_llrint f32:$Rn)))),
+            (FCVTZSDSr (FRINTXSr f32:$Rn))>;
+}
+def : Pat<(f32 (bitconvert (i32 (any_lrint f32:$Rn)))),
+          (FCVTZSv1i32 (FRINTXSr f32:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_lrint f64:$Rn)))),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+def : Pat<(f64 (bitconvert (i64 (any_llrint f64:$Rn)))),
+          (FCVTZSv1i64 (FRINTXDr f64:$Rn))>;
+
+
 // f16 -> s16 conversions
 let Predicates = [HasFullFP16] in {
 def : Pat<(i16(fp_to_sint_sat_gi f16:$Rn)), (FCVTZSv1f16 f16:$Rn)>;
diff --git a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
index 6d2d70511e894..8bd982898b8d6 100644
--- a/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
+++ b/llvm/lib/Target/AArch64/GISel/AArch64RegisterBankInfo.cpp
@@ -858,7 +858,11 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
   case TargetOpcode::G_FPTOSI_SAT:
   case TargetOpcode::G_FPTOUI_SAT:
   case TargetOpcode::G_FPTOSI:
-  case TargetOpcode::G_FPTOUI: {
+  case TargetOpcode::G_FPTOUI:
+  case TargetOpcode::G_INTRINSIC_LRINT:
+  case TargetOpcode::G_INTRINSIC_LLRINT:
+  case TargetOpcode::G_LROUND:
+  case TargetOpcode::G_LLROUND: {
     LLT DstType = MRI.getType(MI.getOperand(0).getReg());
     if (DstType.isVector())
       break;
@@ -879,12 +883,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
     OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
     break;
   }
-  case TargetOpcode::G_INTRINSIC_LRINT:
-  case TargetOpcode::G_INTRINSIC_LLRINT:
-    if (MRI.getType(MI.getOperand(0).getReg()).isVector())
-      break;
-    OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
-    break;
   case TargetOpcode::G_FCMP: {
     // If the result is a vector, it must use a FPR.
     AArch64GenRegisterBankInfo::PartialMappingIdx Idx0 =
@@ -1224,12 +1222,6 @@ AArch64RegisterBankInfo::getInstrMapping(const MachineInstr &MI) const {
     }
     break;
   }
-  case TargetOpcode::G_LROUND:
-  case TargetOpcode::G_LLROUND: {
-    // Source is always floating point and destination is always integer.
-    OpRegBankIdx = {PMI_FirstGPR, PMI_FirstFPR};
-    break;
-  }
   }
 
   // Finally construct the computed mapping.
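A note on the AArch64RegisterBankInfo.cpp hunk above: G_LROUND/G_LLROUND and G_INTRINSIC_LRINT/G_INTRINSIC_LLRINT previously pinned their destination to the GPR bank unconditionally. Folding them into the shared FP-to-integer case means the scalar destination can now be assigned to the FPR bank when the result stays on the FP side, which is what the updated register-bank tests below check and what allows the new FPR-writing patterns to match.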
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
index 420c7cfb07b74..16100f01017a6 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-llround.mir
@@ -14,7 +14,7 @@ body: |
     ; CHECK: liveins: $d0
     ; CHECK-NEXT: {{ $}}
     ; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
-    ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND %fpr(s64)
+    ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND %fpr(s64)
     ; CHECK-NEXT: $d0 = COPY %llround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body: |
     ; CHECK-NEXT: {{ $}}
     ; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
-    ; CHECK-NEXT: %llround:gpr(s64) = G_LLROUND [[COPY]](s64)
+    ; CHECK-NEXT: %llround:fpr(s64) = G_LLROUND [[COPY]](s64)
     ; CHECK-NEXT: $d0 = COPY %llround(s64)
     ; CHECK-NEXT: RET_ReallyLR implicit $s0
     %gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
index 775c6ca773c68..5cb93f7c4646d 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/regbank-lround.mir
@@ -14,7 +14,7 @@ body: |
    ; CHECK: liveins: $d0
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: %fpr:fpr(s64) = COPY $d0
-   ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND %fpr(s64)
+   ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND %fpr(s64)
    ; CHECK-NEXT: $d0 = COPY %lround(s64)
    ; CHECK-NEXT: RET_ReallyLR implicit $s0
    %fpr:_(s64) = COPY $d0
@@ -35,7 +35,7 @@ body: |
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: %gpr:gpr(s64) = COPY $x0
    ; CHECK-NEXT: [[COPY:%[0-9]+]]:fpr(s64) = COPY %gpr(s64)
-   ; CHECK-NEXT: %lround:gpr(s64) = G_LROUND [[COPY]](s64)
+   ; CHECK-NEXT: %lround:fpr(s64) = G_LROUND [[COPY]](s64)
    ; CHECK-NEXT: $d0 = COPY %lround(s64)
    ; CHECK-NEXT: RET_ReallyLR implicit $s0
    %gpr:_(s64) = COPY $x0
diff --git a/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
new file mode 100644
index 0000000000000..000ff64131ccf
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/arm64-cvt-simd-round-rint.ll
@@ -0,0 +1,428 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -mattr=+fprcvt,+fullfp16 | FileCheck %s --check-prefixes=CHECK,CHECK-SD
+; RUN: llc < %s -mtriple aarch64-unknown-unknown -global-isel -global-isel-abort=2 -mattr=+fprcvt,+fullfp16 2>&1 | FileCheck %s --check-prefixes=CHECK,CHECK-GI
+
+; CHECK-GI: warning: Instruction selection used fallback path for lround_i32_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i32_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lround_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llround_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i32_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for lrint_i64_f64_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f16_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f32_simd_exp
+; CHECK-GI-NEXT: warning: Instruction selection used fallback path for llrint_i64_f64_simd_exp
+
+;
+; (L/LL)Round
+;
+
+define float @lround_i32_f16_simd(half %x) {
+; CHECK-LABEL: lround_i32_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, h0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lround.i32.f16(half %x)
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lround_i64_f16_simd(half %x) {
+; CHECK-LABEL: lround_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lround.i64.f16(half %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lround_i64_f32_simd(float %x) {
+; CHECK-LABEL: lround_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lround.i64.f32(float %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lround_i32_f64_simd(double %x) {
+; CHECK-LABEL: lround_i32_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, d0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lround.i32.f64(double %x)
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lround_i32_f32_simd(float %x) {
+; CHECK-LABEL: lround_i32_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, s0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lround.i32.f32(float %x)
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lround_i64_f64_simd(double %x) {
+; CHECK-LABEL: lround_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lround.i64.f64(double %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f16_simd(half %x) {
+; CHECK-LABEL: llround_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llround.i64.f16(half %x)
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llround_i64_f32_simd(float %x) {
+; CHECK-LABEL: llround_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llround.i64.f32(float %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f64_simd(double %x) {
+; CHECK-LABEL: llround_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llround.i64.f64(double %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+
+;
+; (L/LL)Round experimental
+;
+
+define float @lround_i32_f16_simd_exp(half %x) {
+; CHECK-LABEL: lround_i32_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, h0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f16(half %x, metadata !"fpexcept.strict")
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lround_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: lround_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f16(half %x, metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lround_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: lround_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f32(float %x, metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lround_i32_f64_simd_exp(double %x) {
+; CHECK-LABEL: lround_i32_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, d0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f64(double %x, metadata !"fpexcept.strict")
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lround_i32_f32_simd_exp(float %x) {
+; CHECK-LABEL: lround_i32_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas s0, s0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.experimental.constrained.lround.i32.f32(float %x, metadata !"fpexcept.strict")
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lround_i64_f64_simd_exp(double %x) {
+; CHECK-LABEL: lround_i64_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.lround.i64.f64(double %x, metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: llround_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f16(half %x, metadata !"fpexcept.strict")
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llround_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: llround_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f32(float %x, metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llround_i64_f64_simd_exp(double %x) {
+; CHECK-LABEL: llround_i64_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: fcvtas d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.llround.i64.f64(double %x, metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+;
+; (L/LL)Rint
+;
+
+define float @lrint_i32_f16_simd(half %x) {
+; CHECK-LABEL: lrint_i32_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs s0, h0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lrint.i32.f16(half %x)
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lrint_i64_f16_simd(half %x) {
+; CHECK-LABEL: lrint_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lrint.i64.f16(half %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lrint_i64_f32_simd(float %x) {
+; CHECK-LABEL: lrint_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lrint.i64.f32(float %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lrint_i32_f64_simd(double %x) {
+; CHECK-LABEL: lrint_i32_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs s0, d0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lrint.i32.f64(double %x)
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define float @lrint_i32_f32_simd(float %x) {
+; CHECK-LABEL: lrint_i32_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs s0, s0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.lrint.i32.f32(float %x)
+  %bc = bitcast i32 %val to float
+  ret float %bc
+}
+
+define double @lrint_i64_f64_simd(double %x) {
+; CHECK-LABEL: lrint_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.lrint.i64.f64(double %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llrint_i64_f16_simd(half %x) {
+; CHECK-LABEL: llrint_i64_f16_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llrint.i64.f16(half %x)
+  %sum = bitcast i64 %val to double
+  ret double %sum
+}
+
+define double @llrint_i64_f32_simd(float %x) {
+; CHECK-LABEL: llrint_i64_f32_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llrint.i64.f32(float %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @llrint_i64_f64_simd(double %x) {
+; CHECK-LABEL: llrint_i64_f64_simd:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs d0, d0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.llrint.i64.f64(double %x)
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+;
+; (L/LL)Rint experimental
+;
+
+define float @lrint_i32_f16_simd_exp(half %x) {
+; CHECK-LABEL: lrint_i32_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs s0, h0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.experimental.constrained.lrint.i32.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %sum = bitcast i32 %val to float
+  ret float %sum
+}
+
+define double @lrint_i64_f16_simd_exp(half %x) {
+; CHECK-LABEL: lrint_i64_f16_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx h0, h0
+; CHECK-NEXT: fcvtzs d0, h0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.lrint.i64.f16(half %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define double @lrint_i64_f32_simd_exp(float %x) {
+; CHECK-LABEL: lrint_i64_f32_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx s0, s0
+; CHECK-NEXT: fcvtzs d0, s0
+; CHECK-NEXT: ret
+  %val = call i64 @llvm.experimental.constrained.lrint.i64.f32(float %x, metadata !"round.tonearest", metadata !"fpexcept.strict")
+  %bc = bitcast i64 %val to double
+  ret double %bc
+}
+
+define float @lrint_i32_f64_simd_exp(double %x) {
+; CHECK-LABEL: lrint_i32_f64_simd_exp:
+; CHECK: // %bb.0:
+; CHECK-NEXT: frintx d0, d0
+; CHECK-NEXT: fcvtzs s0, d0
+; CHECK-NEXT: ret
+  %val = call i32 @llvm.experimental.constrained.lrint.i32.f64(double %x, ...
[truncated]
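A note on the CHECK-GI warning block at the top of the new test file: with -global-isel-abort=2, llc falls back to SelectionDAG instead of aborting whenever GlobalISel fails to select a function, printing the "Instruction selection used fallback path" diagnostic on stderr; the 2>&1 in the RUN line is what routes those warnings into FileCheck. The functions listed there are therefore the cases GlobalISel does not yet handle, while the shared CHECK lines still verify the SelectionDAG output for them.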