
Conversation

@andrewor14 (Contributor)
Summary: Today we hit this error with fp32 inputs + bias:

RuntimeError: Bias is not supported when module weight is in fp32 (out_dtype=Float32). Please use bfloat16 or float16 weights, or remove the bias from the linear layer. 

This is thrown by `NVFP4DynamicActivationNVFP4WeightConfig`, which is guarding against this underlying `_scaled_mm` error:

RuntimeError: Bias is not supported when out_dtype is set to Float32 

This commit works around the error by adding the bias separately after the matmul in this case, similar to what the float8 path does.

Test Plan:

pytest test/prototype/mx_formats/test_inference_workflow.py -k test_inference_workflow_nvfp4 
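For clarity, the workaround pattern can be sketched like this. This is a pure-Python illustration with made-up helper names (`fused_mm`, `mm_with_bias_workaround`) standing in for the real code, which operates on quantized tensors via `torch._scaled_mm`:

```python
# A minimal sketch of the workaround (hypothetical names, not the actual
# torchao code): when out_dtype is fp32, the fused matmul cannot accept a
# bias, so run the unbiased matmul and add the bias afterwards, similar
# to the float8 path.

def fused_mm(a, b, bias):
    """Toy stand-in for _scaled_mm on 1-D vectors: elementwise multiply."""
    out = [x * y for x, y in zip(a, b)]
    if bias is not None:
        out = [o + z for o, z in zip(out, bias)]
    return out

def mm_with_bias_workaround(a, b, bias, out_is_fp32):
    """Fuse the bias only when the kernel supports it for this out_dtype."""
    if out_is_fp32 and bias is not None:
        # _scaled_mm rejects bias when out_dtype=Float32, so run the
        # unbiased matmul and add the (unquantized) bias separately.
        out = fused_mm(a, b, None)
        return [o + z for o, z in zip(out, bias)]
    return fused_mm(a, b, bias)
```

Either branch produces the same numerical result; the split only exists to satisfy the kernel's fp32 bias restriction.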
@pytorch-bot

pytorch-bot bot commented Dec 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3525

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 83c92e3 with merge base 486fe0d:

BROKEN TRUNK - The following job failed but was already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the `CLA Signed` label Dec 22, 2025
@andrewor14 andrewor14 requested review from drisspg and vkuzo December 22, 2025 16:15
@andrewor14 added the `topic: improvement` label Dec 22, 2025
# since bias is not quantized
should_add_bias_separately = (scale_result is not None) and (bias is not None)
#
# (2) RuntimeError: Bias is not supported when out_dtype is set to Float32
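The guard in the snippet above amounts to a simple predicate. The sketch below is illustrative only; the names mirror the snippet, and the real code computes `scale_result` from the NVFP4 quantization path:

```python
def should_add_bias_separately(scale_result, bias):
    # Since the bias is not quantized, it is added outside the scaled
    # matmul whenever both a scaled result and a bias are present.
    return scale_result is not None and bias is not None
```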
Contributor:
Okay this is what I thought would happen

Contributor Author:
Yeah, but this only happens if `per_tensor_scale=None` (by default it is not), so users generally won't run into the `_scaled_mm` error. Either way, this PR fixes that case.

Contributor:
Can you add a note somewhere on the conversion/casting code path documenting these cases?

