
torchao

Open In Colab: Torchao Demo

torchao is a PyTorch architecture optimization library with support for custom high-performance data types, quantization, and sparsity. It is composable with native PyTorch features such as torch.compile for even faster inference and training.

See the table below for additional torchao features.

Feature Description
Quantization Aware Training (QAT) Train quantized models with minimal accuracy loss (see QAT README)
Float8 Training High-throughput training with float8 formats (see torchtitan and Accelerate docs)
Sparsity Support Semi-structured (2:4) sparsity for faster inference (see Accelerating Neural Network Training with Semi-Structured (2:4) Sparsity blog post)
Optimizer Quantization Reduce optimizer state memory with 4- and 8-bit variants of Adam
KV Cache Quantization Enables long context inference with lower memory (see KV Cache Quantization)
Custom Kernels Use your own torch.compile-compatible ops
FSDP2 Composable with FSDP2 for training

Refer to the torchao README.md for more details about the library.

torchao supports the quantization techniques below.

  • A16W8 Float8 Dynamic Quantization
  • A16W8 Float8 WeightOnly Quantization
  • A8W8 Int8 Dynamic Quantization
  • A16W8 Int8 Weight Only Quantization
  • A16W4 Int4 Weight Only Quantization
  • A16W4 Int4 Weight Only Quantization + 2:4 Sparsity
  • Autoquantization

torchao also supports module-level configuration: you specify a dictionary mapping each module's fully qualified name to its quantization config. This lets you skip quantizing certain layers and use different quantization configs for different modules, as shown in the sketch below and in the Per Module Quantization section.
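
For orientation, here is a minimal sketch of that dictionary shape, using the FqnToConfig API covered in detail in the Per Module Quantization section below (the layer name and group_size are illustrative):

from transformers import TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig, FqnToConfig

# "_default" applies to all remaining linear layers; mapping a module's
# fully qualified name to None skips quantization for that module
quant_config = FqnToConfig({
    "_default": Int4WeightOnlyConfig(group_size=128),
    "model.layers.0.self_attn.q_proj": None,
})
quantization_config = TorchAoConfig(quant_type=quant_config)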

Check the table below to see if your hardware is compatible.

Component Compatibility
CUDA Versions ✅ cu118, cu126, cu128
XPU Versions ✅ PyTorch 2.8+
CPU ✅ set device_map="cpu" (see examples below)
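
As a small optional sketch (not part of the official examples), you could pick device_map at runtime with standard PyTorch availability checks:

import torch

# Use the accelerator when one is present; otherwise fall back to CPU,
# matching the compatibility table above
if torch.cuda.is_available() or (hasattr(torch, "xpu") and torch.xpu.is_available()):
    device_map = "auto"
else:
    device_map = "cpu"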

Install torchao from PyPI or the PyTorch index with the following commands.

PyPI
PyTorch Index
# Update 🤗 Transformers to the latest version, as the example scripts below use the new auto compilation
# Stable release from PyPI, which defaults to CUDA 12.6
pip install --upgrade torchao transformers

If your torchao version is below 0.10.0, you need to upgrade it; refer to the deprecation notice below for more details.
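
A small optional check, assuming torchao exposes __version__ in the usual way, to catch too-old installs before running the examples:

import torchao
from packaging import version

# Versions below 0.10.0 only support the deprecated string-based API
assert version.parse(torchao.__version__) >= version.parse("0.10.0"), (
    "Please upgrade torchao: pip install --upgrade torchao"
)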

Quantization examples

TorchAO provides a variety of quantization configurations. Each configuration can be further customized with parameters such as group_size, scheme, and layout to optimize for specific hardware and model architectures.

For a complete list of available configurations, see the quantization API documentation.

You can either choose the quantization type and settings manually or let torchao select them automatically.

Create a TorchAoConfig and specify the quantization type and group_size of the weights to quantize (for int8 weight only and int4 weight only). Set the cache_implementation to "static" to automatically torch.compile the forward method.

We'll show examples of recommended quantization methods for different hardware, such as H100 GPUs, A100 GPUs, Intel XPUs, and CPUs.

H100 GPU

float8-dynamic-and-weight-only
int4-weight-only
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Float8DynamicActivationFloat8WeightConfig, Float8WeightOnlyConfig

quant_config = Float8DynamicActivationFloat8WeightConfig()
# or float8 weight only quantization
# quant_config = Float8WeightOnlyConfig()
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
int4-weight-only-24sparse
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int4WeightOnlyConfig
from torchao.dtypes import MarlinSparseLayout

quant_config = Int4WeightOnlyConfig(layout=MarlinSparseLayout())
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model with sparsity. A sparse checkpoint is needed to accelerate without accuracy loss
quantized_model = AutoModelForCausalLM.from_pretrained(
    "RedHatAI/Sparse-Llama-3.1-8B-2of4",
    dtype=torch.float16,
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("RedHatAI/Sparse-Llama-3.1-8B-2of4")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

A100 GPU

int8-dynamic-and-weight-only
int4-weight-only
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int8DynamicActivationInt8WeightConfig, Int8WeightOnlyConfig

quant_config = Int8DynamicActivationInt8WeightConfig()
# or int8 weight only quantization
# quant_config = Int8WeightOnlyConfig()
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
int4-weight-only-24sparse
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int4WeightOnlyConfig
from torchao.dtypes import MarlinSparseLayout

quant_config = Int4WeightOnlyConfig(layout=MarlinSparseLayout())
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model with sparsity. A sparse checkpoint is needed to accelerate without accuracy loss
quantized_model = AutoModelForCausalLM.from_pretrained(
    "RedHatAI/Sparse-Llama-3.1-8B-2of4",
    dtype=torch.float16,
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("RedHatAI/Sparse-Llama-3.1-8B-2of4")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

Intel XPU

int8-dynamic-and-weight-only
int4-weight-only
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int8DynamicActivationInt8WeightConfig, Int8WeightOnlyConfig

quant_config = Int8DynamicActivationInt8WeightConfig()
# or int8 weight only quantization
# quant_config = Int8WeightOnlyConfig()
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

CPU

int8-dynamic-and-weight-only
int4-weight-only
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int8DynamicActivationInt8WeightConfig, Int8WeightOnlyConfig

quant_config = Int8DynamicActivationInt8WeightConfig()
# quant_config = Int8WeightOnlyConfig()
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="cpu",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt")

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

Per Module Quantization

1. Skip quantization for certain layers

With FqnToConfig we can specify a default configuration for all layers while skipping quantization for certain layers.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig, FqnToConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"

config = Int4WeightOnlyConfig(group_size=128)
# set default to int4 (for linears), and skip quantizing `model.layers.0.self_attn.q_proj`
quant_config = FqnToConfig({"_default": config, "model.layers.0.self_attn.q_proj": None})
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
# lm_head is not quantized and model.layers.0.self_attn.q_proj is not quantized
print("quantized model:", quantized_model)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt").to(quantized_model.device.type)
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

2. Quantizing different layers with different quantization configs (no regex)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import (
    Int4WeightOnlyConfig,
    FqnToConfig,
    Int8DynamicActivationInt4WeightConfig,
    IntxWeightOnlyConfig,
    PerAxis,
    MappingType,
)

model_id = "facebook/opt-125m"

weight_dtype = torch.int8
granularity = PerAxis(0)
mapping_type = MappingType.ASYMMETRIC
embedding_config = IntxWeightOnlyConfig(
    weight_dtype=weight_dtype,
    granularity=granularity,
    mapping_type=mapping_type,
)
linear_config = Int8DynamicActivationInt4WeightConfig(group_size=128)
quant_config = FqnToConfig({
    "_default": linear_config,
    "model.decoder.embed_tokens": embedding_config,
    "model.decoder.embed_positions": None,
})
# set `include_embedding` to True in order to include embedding in quantization
# when `include_embedding` is True, we'll remove input embedding from `modules_not_to_convert` as well
quantization_config = TorchAoConfig(quant_type=quant_config, include_embedding=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cpu",
    dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
print("quantized model:", quantized_model)
# make sure embedding is quantized
print("embed_tokens weight:", quantized_model.model.decoder.embed_tokens.weight)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt").to("cpu")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128, cache_implementation="static")
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

3. Quantizing different layers with different quantization configs (with regex)

We can also use a regex to specify the config for all modules whose fully qualified name matches it. Every regex key should start with re:; for example, re:layers\..*\.gate_proj matches layers such as layers.0.gate_proj. See the torchao documentation for details.

import logging

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig
from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    Int4WeightOnlyConfig,
    IntxWeightOnlyConfig,
    PerRow,
    PerAxis,
    FqnToConfig,
    Float8Tensor,
    Int4TilePackedTo4dTensor,
    IntxUnpackedToInt8Tensor,
)

# Configure logging to see warnings and debug information
logging.basicConfig(level=logging.INFO, format="%(name)s - %(levelname)s - %(message)s")
# Enable specific loggers that might contain the serialization warnings
logging.getLogger("transformers").setLevel(logging.INFO)
logging.getLogger("torchao").setLevel(logging.INFO)
logging.getLogger("safetensors").setLevel(logging.INFO)
logging.getLogger("huggingface_hub").setLevel(logging.INFO)

model_id = "facebook/opt-125m"

float8dyn = Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
int4wo = Int4WeightOnlyConfig(int4_packing_format="tile_packed_to_4d")
intxwo = IntxWeightOnlyConfig(weight_dtype=torch.int8, granularity=PerAxis(0))

qconfig_dict = {
    # highest priority
    "model.decoder.layers.3.self_attn.q_proj": int4wo,
    "model.decoder.layers.3.self_attn.k_proj": int4wo,
    "model.decoder.layers.3.self_attn.v_proj": int4wo,
    # vllm
    "model.decoder.layers.3.self_attn.qkv_proj": int4wo,
    "re:model\.decoder\.layers\..+\.self_attn\.q_proj": float8dyn,
    "re:model\.decoder\.layers\..+\.self_attn\.k_proj": float8dyn,
    "re:model\.decoder\.layers\..+\.self_attn\.v_proj": float8dyn,
    # this should not take effect and we'll fall back to _default
    # since it's not a full match (missing the `j` at the end)
    "re:model\.decoder\.layers\..+\.self_attn\.out_pro": float8dyn,
    # vllm
    "re:model\.decoder\.layers\..+\.self_attn\.qkv_proj": float8dyn,
    "_default": intxwo,
}
quant_config = FqnToConfig(qconfig_dict)
quantization_config = TorchAoConfig(quant_type=quant_config)
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
print("quantized model:", quantized_model)
tokenizer = AutoTokenizer.from_pretrained(model_id)

for i in range(12):
    if i == 3:
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.q_proj.weight, Int4TilePackedTo4dTensor)
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.k_proj.weight, Int4TilePackedTo4dTensor)
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.v_proj.weight, Int4TilePackedTo4dTensor)
    else:
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.q_proj.weight, Float8Tensor)
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.k_proj.weight, Float8Tensor)
        assert isinstance(quantized_model.model.decoder.layers[i].self_attn.v_proj.weight, Float8Tensor)
    assert isinstance(quantized_model.model.decoder.layers[i].self_attn.out_proj.weight, IntxUnpackedToInt8Tensor)

# Manual Testing
prompt = "What are we having for dinner?"
print("Prompt:", prompt)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# setting temperature to 0 to make sure result deterministic
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128, temperature=0)
correct_output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", correct_output_text[0][len(prompt):])

# Save the quantized model, then load it back from the saved checkpoint
save_to = "opt-125m-fqn-quantized"
quantized_model.save_pretrained(save_to)
reloaded_model = AutoModelForCausalLM.from_pretrained(
    save_to,
    device_map="cuda:0",
    dtype=torch.bfloat16,
    # quantization_config=quantization_config,
)
generated_ids = reloaded_model.generate(**inputs, max_new_tokens=128, temperature=0)
output_text = tokenizer.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
assert correct_output_text == output_text

Autoquant

If you want to automatically choose a quantization type for quantizable layers (nn.Linear), you can use the autoquant API.

The autoquant API automatically chooses a quantization type by micro-benchmarking on input type and shape and compiling a single linear layer.

Note: autoquant is for GPU only right now.

Create a TorchAoConfig and set the quantization type to "autoquant". Set cache_implementation to "static" to automatically torch.compile the forward method. Finally, call finalize_autoquant on the quantized model to finalize the quantization and log the input shapes.

import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer

quantization_config = TorchAoConfig("autoquant", min_sqnr=None)
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="auto",
    quantization_config=quantization_config
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(quantized_model.device.type)

# auto-compile the quantized model with `cache_implementation="static"` to get speed up
output = quantized_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")
# explicitly call `finalize_autoquant` (may be refactored and removed in the future)
quantized_model.finalize_autoquant()
print(tokenizer.decode(output[0], skip_special_tokens=True))

Serialization

Saving the quantized model with save_pretrained (in safetensors format) is only supported for torchao >= v0.15. For any version below, it is only possible to manually save as unsafe .bin checkpoints with torch.save.

save-locally
push-to-huggingface-hub
# torchao >= 0.15
output_dir = "llama3-8b-int4wo-128"
quantized_model.save_pretrained(output_dir)
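
The push-to-huggingface-hub option is not reproduced above; a minimal sketch using the standard push_to_hub API (the repository name is a placeholder):

# torchao >= 0.15
# the repository name below is a placeholder; replace it with your own
quantized_model.push_to_hub("your-username/llama3-8b-int4wo-128")
tokenizer.push_to_hub("your-username/llama3-8b-int4wo-128")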

Loading quantized models

Loading a quantized model depends on the quantization scheme. For quantization schemes like int8 and float8, you can quantize the model on any device and load it on any other device. The example below demonstrates quantizing a model on the CPU and then loading it on CUDA or XPU.

import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int8WeightOnlyConfig

quant_config = Int8WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="cpu",
    quantization_config=quantization_config
)

# save the quantized model
output_dir = "llama-3.1-8b-torchao-int8"
quantized_model.save_pretrained(output_dir)

# reload the quantized model
reloaded_model = AutoModelForCausalLM.from_pretrained(
    output_dir,
    device_map="auto",
    dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to(reloaded_model.device.type)

output = reloaded_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))

For int4, the model can only be loaded on the same device it was quantized on because the layout is specific to the device. The example below demonstrates quantizing and loading a model on the CPU.

import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int4WeightOnlyConfig
from torchao.dtypes import Int4CPULayout

quant_config = Int4WeightOnlyConfig(group_size=128, layout=Int4CPULayout())
quantization_config = TorchAoConfig(quant_type=quant_config)

# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    device_map="cpu",
    quantization_config=quantization_config
)

# save the quantized model
output_dir = "llama-3.1-8b-torchao-int4-cpu"
quantized_model.save_pretrained(output_dir)

# reload the quantized model
reloaded_model = AutoModelForCausalLM.from_pretrained(
    output_dir,
    device_map="cpu",
    dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt")

output = reloaded_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))

⚠️ Deprecation Notice

Starting with version 0.10.0, the string-based API for quantization configuration (e.g., TorchAoConfig("int4_weight_only", group_size=128)) is deprecated and will be removed in a future release.

Please use the new AOBaseConfig-based approach instead:

# Old way (deprecated)
quantization_config = TorchAoConfig("int4_weight_only", group_size=128)

# New way (recommended)
from torchao.quantization import Int4WeightOnlyConfig
quant_config = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_config)

The new API offers greater flexibility, better type safety, and access to the full range of features available in torchao.

Migration Guide

Here’s how to migrate from common string identifiers to their AOBaseConfig equivalents:

Old String API New AOBaseConfig API
"int4_weight_only" Int4WeightOnlyConfig()
"int8_weight_only" Int8WeightOnlyConfig()
"int8_dynamic_activation_int8_weight" Int8DynamicActivationInt8WeightConfig()

All configuration objects accept parameters for customization (e.g., group_size, scheme, layout).
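
For example, a minimal sketch of customizing one of these configs (the group_size value is illustrative):

from transformers import TorchAoConfig
from torchao.quantization import Int4WeightOnlyConfig

# smaller groups generally preserve more accuracy at the cost of extra metadata
quant_config = Int4WeightOnlyConfig(group_size=64)
quantization_config = TorchAoConfig(quant_type=quant_config)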

Resources

For a better sense of expected performance, view the benchmarks for various models with CUDA and XPU backends. You can also run the code below to benchmark a model yourself.

from typing import Callable

import torch
from torch._inductor.utils import do_bench_using_profiling
from transformers import AutoModelForCausalLM

def benchmark_fn(func: Callable, *args, **kwargs) -> float:
    """Thin wrapper around do_bench_using_profiling"""
    no_args = lambda: func(*args, **kwargs)
    time = do_bench_using_profiling(no_args)
    return time * 1e3

# assumes `quantized_model` and `input_ids` from the quantization examples above
model_name = "meta-llama/Llama-3.1-8B-Instruct"
MAX_NEW_TOKENS = 1000
print("int4wo-128 model:", benchmark_fn(quantized_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))

bf16_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", dtype=torch.bfloat16)
output = bf16_model.generate(**input_ids, max_new_tokens=10, cache_implementation="static")  # auto-compile
print("bf16 model:", benchmark_fn(bf16_model.generate, **input_ids, max_new_tokens=MAX_NEW_TOKENS, cache_implementation="static"))

For best performance, use the recommended settings by calling torchao.quantization.utils.recommended_inductor_config_setter().
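
A minimal sketch of applying it, called once before loading and generating:

from torchao.quantization.utils import recommended_inductor_config_setter

# applies torchao's recommended torch._inductor settings for quantized inference
recommended_inductor_config_setter()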

Refer to Other Available Quantization Techniques for more examples and documentation.

Issues

If you encounter any issues with the Transformers integration, please open an issue on the Transformers repository. For issues directly related to torchao, please open an issue on the torchao repository.
