
Conversation

@huiyan2021
Contributor

Type of Change

Add support for Qwen-7B-Chat

Description

  1. Add support for Qwen-7B-Chat
  2. Modify deploy_chatbot_on_xpu.ipynb again, since the modification I made before appears not to have been merged

Expected Behavior & Potential Risk

Qwen-7B-Chat can be run correctly

How has this PR been tested?

This PR can be tested in build_chatbot_on_spr.ipynb and build_chatbot_on_xpu.ipynb after the environment is prepared, using the snippet below:

from intel_extension_for_transformers.neural_chat import build_chatbot
from intel_extension_for_transformers.neural_chat.config import PipelineConfig

config = PipelineConfig(model_name_or_path="Qwen/Qwen-7B-Chat")
chatbot = build_chatbot(config)
response = chatbot.predict("千问是什么?")  # "What is Qianwen?"
print(response)

Dependency Change?

see requirements_cpu.txt and requirements_xpu.txt
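
For reference, the Qwen-related additions are probably along the lines of the excerpt below. This is a hypothetical sketch based on the packages Qwen-7B-Chat's remote modeling code typically requires; the actual pinned entries in requirements_cpu.txt and requirements_xpu.txt in this PR are authoritative.

# Hypothetical excerpt of the Qwen-related requirement lines (actual files may differ)
einops
tiktoken
transformers_stream_generator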

@hshen14
Contributor

hshen14 commented Oct 12, 2023

@huiyan2021 please add a UT to improve the coverage of your newly-added Qwen model code.
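
For reference, a minimal UT could look like the sketch below. The file and class names are placeholders, it assumes the test environment can download or locate the Qwen/Qwen-7B-Chat weights, and it reuses only the build_chatbot / PipelineConfig API shown in the PR description.

# Hypothetical test file, e.g. tests/test_qwen_chatbot.py
import unittest

from intel_extension_for_transformers.neural_chat import build_chatbot
from intel_extension_for_transformers.neural_chat.config import PipelineConfig

class TestQwenChatbot(unittest.TestCase):
    def test_qwen_7b_chat_predict(self):
        # Build the chatbot pipeline for Qwen-7B-Chat, as in the PR description.
        config = PipelineConfig(model_name_or_path="Qwen/Qwen-7B-Chat")
        chatbot = build_chatbot(config)
        self.assertIsNotNone(chatbot)

        # "千问是什么?" means "What is Qianwen?"; expect a non-empty response.
        response = chatbot.predict("千问是什么?")
        self.assertIsNotNone(response)
        self.assertNotEqual(response, "")

if __name__ == "__main__":
    unittest.main()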

@huiyan2021
Contributor Author

@huiyan2021 please add a UT to improve the coverage of your newly-added Qwen model code.

@lvliang-intel will help on this, thanks!

