This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit 10af3ca

remove hard print (#158)
Signed-off-by: Lv, Kaokao <kaokao.lv@intel.com>
1 parent b650d4b commit 10af3ca

File tree

1 file changed (0 additions, 4 deletions)

workflows/chatbot/fine_tuning/instruction_tuning_pipeline/finetune_clm.py

Lines changed: 0 additions & 4 deletions
@@ -582,8 +582,6 @@ def concatenate_data(dataset, max_seq_length):
         tokenized_datasets_, data_args.max_seq_length
     )
 
-    print(tokenized_datasets)
-
     if training_args.do_eval:
         if "test" not in tokenized_datasets:
             logger.info('Splitting train dataset in train and validation according to `eval_dataset_size`')
@@ -594,8 +592,6 @@ def concatenate_data(dataset, max_seq_length):
         if data_args.max_eval_samples is not None:
             eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
 
-        print(eval_dataset[0])
-
     if training_args.do_train:
         if "train" not in tokenized_datasets:
            raise ValueError("--do_train requires a train dataset")
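
The change simply drops two leftover debug print calls from the tokenization and evaluation setup. If that inspection output is still wanted during development, a minimal alternative sketch (not part of this commit) is to route it through the logger already used in finetune_clm.py at DEBUG level, so it stays silent in normal runs. Only the names visible in the diff context (tokenized_datasets, eval_dataset, logger) are taken from the source; the helper below is hypothetical.

    import logging

    # Assumed stand-in for the module-level logger finetune_clm.py already uses
    # (the diff context above calls logger.info(...)).
    logger = logging.getLogger(__name__)

    def log_dataset_preview(tokenized_datasets, eval_dataset=None):
        """Hypothetical helper: log dataset details at DEBUG level instead of printing them."""
        # Emitted only when the logging level is set to DEBUG.
        logger.debug("Tokenized datasets: %s", tokenized_datasets)
        if eval_dataset is not None and len(eval_dataset) > 0:
            logger.debug("First eval example: %s", eval_dataset[0])

Turning the output back on is then a one-liner, e.g. logging.getLogger(__name__).setLevel(logging.DEBUG), rather than re-adding print statements to the pipeline.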

0 commit comments
