
Commit f0b2125

Update src/transformers/training_args.py

Authored by shimizust and ByronHsu
Co-authored-by: Byron Hsu <byronhsu1230@gmail.com>

1 parent 29b13a9 commit f0b2125

File tree

1 file changed: +1 −1 lines changed

src/transformers/training_args.py

Lines changed: 1 addition & 1 deletion
@@ -793,7 +793,7 @@ class TrainingArguments:
             Whether to run recursively gather object in a nested list/tuple/dictionary of objects from all devices. This should only be enabled if users are not just returning tensors, and this is actively discouraged by PyTorch.

         use_liger (`bool`, *optional*, defaults to `False`):
-            Whether enable [Liger](https://github.com/linkedin/Liger-Kernel) (Linkedin GPU Efficient Runtime) Kernel for LLM model training.
+            Whether enable [Liger](https://github.com/linkedin/Liger-Kernel) Kernel for LLM model training.
             It can effectively increase multi-GPU training throughput by ~20% and reduces memory usage by ~60%, works out of the box with
             flash attention, PyTorch FSDP, and Microsoft DeepSpeed. Currently, it supports llama, mistral, mixtral and gemma models.
     """
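The docstring above describes a boolean flag with a `False` default on `TrainingArguments`. A minimal sketch of that shape, using a hypothetical stand-in dataclass (the real class lives in `src/transformers/training_args.py` and carries many more fields):

```python
from dataclasses import dataclass


# Hypothetical stand-in mirroring only the flag documented in this commit;
# the real class is transformers.TrainingArguments.
@dataclass
class TrainingArgsSketch:
    # Whether to enable the Liger kernel for LLM model training;
    # defaults to False, as the docstring states.
    use_liger: bool = False


# Opting in to the Liger kernel is just flipping the flag at construction.
args = TrainingArgsSketch(use_liger=True)
print(args.use_liger)  # → True
```

With the real library, the same opt-in would be `TrainingArguments(..., use_liger=True)`; the Trainer then patches the supported model architectures (llama, mistral, mixtral, gemma per the docstring) with Liger's fused kernels.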
