
Conversation

@davidxia (Contributor) commented May 13, 2025

We can't run the script with `--help` to see the help message without connecting to a running vLLM server. This change allows us to do that.

We also improve the error message when the script cannot get a list of models
from vLLM.

The improved error message:

RuntimeError: Failed to get the list of models from the vLLM server at http://localhost:8000/v1 with API key EMPTY. Check 1. the server is running 2. the server URL is correct 3. the API key is correct
before:

```
$ python examples/online_serving/openai_chat_completion_client_for_multimodal.py
INFO 05-13 16:12:29 [__init__.py:248] Automatically detected platform cpu.
Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 989, in _request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 926, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 954, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1027, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 235, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/examples/online_serving/openai_chat_completion_client_for_multimodal.py", line 34, in <module>
    models = client.models.list()
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/resources/models.py", line 91, in list
    return self._get_api_list(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1325, in get_api_list
    return self._request_api_list(model, page, opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1176, in _request_api_list
    return self.request(page, options, stream=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 949, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1013, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1091, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1013, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1091, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1023, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
```
after:

```
$ python examples/online_serving/openai_chat_completion_client_for_multimodal.py
INFO 05-13 16:09:22 [__init__.py:248] Automatically detected platform cpu.
Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 989, in _request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 926, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 954, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1027, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 235, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/examples/online_serving/openai_chat_completion_client_for_multimodal.py", line 314, in get_model
    models: SyncPage[Model] = client.models.list()
                              ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/resources/models.py", line 91, in list
    return self._get_api_list(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1325, in get_api_list
    return self._request_api_list(model, page, opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1176, in _request_api_list
    return self.request(page, options, stream=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 949, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1013, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1091, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1013, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1091, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1023, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/examples/online_serving/openai_chat_completion_client_for_multimodal.py", line 350, in <module>
    main(args)
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/examples/online_serving/openai_chat_completion_client_for_multimodal.py", line 344, in main
    model = get_model()
            ^^^^^^^^^^^
  File "/home/dxia/src/github.com/vllm-project/vllm-upstream/examples/online_serving/openai_chat_completion_client_for_multimodal.py", line 316, in get_model
    raise RuntimeError(
RuntimeError: Failed to get the list of models from the vLLM server at http://localhost:8000/v1 with API key EMPTY. Check 1. the server is running 2. the server URL is correct 3. the API key is correct
```
@github-actions bot commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the documentation Improvements or additions to documentation label May 13, 2025
@davidxia davidxia marked this pull request as ready for review May 13, 2025 16:47
@ywang96 (Member) left a comment

Left a comment - PTAL!

@davidxia (Contributor, Author) left a comment

> Can we move this to a separate util file

sgtm, I moved to model_utils.py. Lmk if that name works.
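Following up on the move, a hedged sketch of what the relocated helper in model_utils.py might look like; the function name and exact signature are assumptions from this thread, not the merged code:

```python
# model_utils.py -- hedged sketch of the relocated helper; the actual name
# and signature in the merged PR may differ.
from openai import APIConnectionError, OpenAI


def get_first_model(client: OpenAI) -> str:
    """Return the ID of the first model the server reports."""
    try:
        models = client.models.list()
    except APIConnectionError as e:
        raise RuntimeError(
            "Failed to get the list of models from the vLLM server at "
            f"{client.base_url} with API key {client.api_key}. Check "
            "1. the server is running "
            "2. the server URL is correct "
            "3. the API key is correct"
        ) from e
    return models.data[0].id
```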

Comment on lines +15 to +16

@davidxia (Contributor, Author): I had to add `--trust-remote-code`, otherwise the model wasn't downloaded.
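For context, `--trust-remote-code` is passed when launching the server. A sketch of such an invocation, with a placeholder model name (the actual model pinned in the PR's docs isn't shown in this thread):

```bash
# Placeholder model; --trust-remote-code lets Hugging Face download and run
# custom code shipped in the model repo, which some multimodal models require.
vllm serve <model-that-ships-custom-code> --trust-remote-code
```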

@davidxia (Contributor, Author): add cmd to run so the user doesn't have to figure it out

@davidxia davidxia force-pushed the patch31 branch 2 times, most recently from e368de4 to d94df82, May 14, 2025 03:01
@davidxia (Contributor, Author) commented

Are the test failures related to my changes?

@DarkLight1337 (Member) commented

> Are the test failures related to my changes?

No, merging from main should fix the issue
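That is, bring the branch up to date with the latest main; a typical sequence, assuming the vllm-project remote is named upstream:

```bash
git fetch upstream
git merge upstream/main   # bring in the CI fixes from main
git push origin patch31
```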

We can't run the script with `--help` to see the help message without connecting to a running vLLM server. This change allows us to do that. We also improve the error message when the script cannot get a list of models from vLLM.

Signed-off-by: David Xia <david@davidxia.com>
@ywang96 ywang96 added the ready ONLY add when PR is ready to merge/full CI is needed label May 16, 2025
@ywang96 ywang96 enabled auto-merge (squash) May 16, 2025 05:07
@ywang96 ywang96 merged commit 5c04bb8 into vllm-project:main May 16, 2025
49 checks passed
@davidxia davidxia deleted the patch31 branch May 16, 2025 06:32
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
Signed-off-by: David Xia <david@davidxia.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>

Labels

documentation: Improvements or additions to documentation
ready: ONLY add when PR is ready to merge/full CI is needed

3 participants