This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit ffa8f3c

added finetuning example for gemma-2b on ARC. (#1328)
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
1 parent ae91bf8 commit ffa8f3c

File tree

  • intel_extension_for_transformers

2 files changed: +31 −4 lines changed

intel_extension_for_transformers/llm/finetuning/finetuning.py

Lines changed: 2 additions & 4 deletions

```diff
@@ -61,6 +61,7 @@
     is_deepspeed_available,
 )
 from intel_extension_for_transformers.utils.device_utils import is_hpu_available
+from intel_extension_for_transformers.neural_chat.utils.common import get_device_type


 if is_bitsandbytes_available():
@@ -76,10 +77,7 @@ def __init__(self, finetuning_config: BaseFinetuningConfig):
             finetuning_config.finetune_args
         )
         if finetuning_config.finetune_args.device == "auto":
-            if torch.cuda.is_available():
-                finetuning_config.finetune_args.device = "cuda"
-            else:
-                finetuning_config.finetune_args.device = "cpu"
+            finetuning_config.finetune_args.device = get_device_type()
         if finetuning_config.finetune_args.device == "cpu":
             Arguments = type(finetuning_config.training_args)
             training_args = {
```
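The change above replaces a CUDA-or-CPU branch with a call to `get_device_type()`, so Intel GPUs can also be selected automatically. A minimal sketch of what such a helper might do, assuming Intel GPUs are exposed through PyTorch's XPU backend once intel-extension-for-pytorch is installed (the function name `detect_device_type` and the probe order here are illustrative, not the library's actual implementation):

```python
# Hypothetical sketch of device auto-detection in the spirit of get_device_type();
# the real helper lives in intel_extension_for_transformers.neural_chat.utils.common.
import importlib.util


def detect_device_type() -> str:
    # Probe for torch lazily so the sketch also runs where torch is absent.
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch

    # NVIDIA GPUs take precedence when available.
    if torch.cuda.is_available():
        return "cuda"
    # Intel GPUs (e.g. Arc) appear as the "xpu" backend when
    # intel-extension-for-pytorch is installed.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"


print(detect_device_type())
```

With a helper like this, `device == "auto"` resolves to a concrete backend string that the rest of the fine-tuning setup can branch on.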

intel_extension_for_transformers/neural_chat/examples/finetuning/instruction/README.md

Lines changed: 29 additions & 0 deletions

````diff
@@ -625,6 +625,35 @@ For finetuning on SPR, add `--bf16` argument will speedup the finetuning process
 You could also indicate `--peft` to switch peft method in P-tuning, Prefix tuning, Prompt tuning, LLama Adapter, LoRA,
 see https://github.com/huggingface/peft. Note for MPT, only LoRA is supported.

+## Fine-tuning on Intel Arc GPUs
+
+### 1. Single Card Fine-tuning
+
+Follow the installation guidance in [intel-extension-for-pytorch](https://github.com/intel/intel-extension-for-pytorch) to install intel-extension-for-pytorch for GPU.
+
+For `google/gemma-2b`, use the below command line for finetuning on the Alpaca dataset.
+
+```bash
+python finetune_clm.py \
+    --model_name_or_path "google/gemma-2b" \
+    --train_file "/path/to/alpaca_data.json" \
+    --dataset_concatenation \
+    --per_device_train_batch_size 2 \
+    --per_device_eval_batch_size 2 \
+    --gradient_accumulation_steps 4 \
+    --evaluation_strategy "no" \
+    --save_strategy "steps" \
+    --save_steps 2000 \
+    --save_total_limit 1 \
+    --learning_rate 1e-4 \
+    --do_train \
+    --num_train_epochs 3 \
+    --overwrite_output_dir \
+    --log_level info \
+    --output_dir ./gemma-2b_peft_finetuned_model \
+    --peft lora \
+    --gradient_checkpointing True
+```

 # Evaluation Metrics
````

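The batch-related flags in the example command combine multiplicatively: with a per-device batch of 2 and 4 gradient-accumulation steps on a single Arc card, each optimizer step sees an effective batch of 8 samples. A quick sanity check of that arithmetic:

```python
# Effective global batch size implied by the example command on one Arc card.
per_device_train_batch_size = 2
gradient_accumulation_steps = 4
num_devices = 1  # single-card fine-tuning

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # 2 * 4 * 1 = 8
```

Raising `--gradient_accumulation_steps` is the usual way to grow the effective batch without exceeding the card's memory, at the cost of fewer optimizer steps per epoch.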