Your current environment
The output of `python collect_env.py`
PyTorch version: 2.6.0.dev20241122+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.2.0 24292 26466ce804ac523b398608f17388eb6d605a3f09)
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9474F 48-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 4113.2808
CPU min MHz: 1500.0000
BogoMIPS: 7189.53
Virtualization: AMD-V
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 96 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Versions of relevant libraries:
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.9.1
[pip3] pynvml==11.5.3
[pip3] pytorch-triton-rocm==3.1.0+cf34004b8a
[pip3] pyzmq==26.2.0
[pip3] torch==2.6.0.dev20241122+rocm6.2
[pip3] torchvision==0.20.0.dev20241206+rocm6.2
[pip3] transformers==4.46.0
[pip3] triton==3.0.0
[conda] No relevant packages
ROCM Version: 6.2.41133-dd7f95766
Neuron SDK Version: N/A
vLLM Version: 0.6.6.dev82+g720b10fd
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
============================ ROCm System Management Interface ============================
================================ Weight between two GPUs =================================
GPU0
GPU0 0
================================= Hops between two GPUs ==================================
GPU0
GPU0 0
=============================== Link Type between two GPUs ===============================
GPU0
GPU0 0
======================================= Numa Nodes =======================================
GPU[0] : (Topology) Numa Node: 0
GPU[0] : (Topology) Numa Affinity: 0
================================== End of ROCm SMI Log ===================================
PYTORCH_TUNABLEOP_TUNING=0
PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda
PYTORCH_TUNABLEOP_ENABLED=0
PYTORCH_TEST_WITH_ROCM=1
PYTORCH_ROCM_ARCH=gfx90a;gfx942
MAX_JOBS=32
LD_LIBRARY_PATH=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/cv2/../../lib64:/opt/ompi/lib:/opt/rocm/lib:/usr/local/lib::/opt/rocm/lib/:/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib:
VLLM_USE_TRITON_FLASH_ATTN=0
PYTORCH_TUNABLEOP_FILENAME=/app/tuned_gemm_csv/afo_tune_device_%d_full.csv
VLLM_WORKER_MULTIPROC_METHOD=spawn
CUDA_MODULE_LOADING=LAZY
Model Input Dumps
No response
🐛 Describe the bug
Running `vllm serve mistralai/mistral-123B-instruct --host 0.0.0.0 --port 8000 --enable-chunked-prefill False --max-seq-len-to-capture 16384 --num-scheduler-steps 10` fails with an exception after the model weights have loaded.
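The failure is raised during weight post-processing at load time, so it should be reproducible without the server as well. A minimal offline sketch (the local checkpoint path is a placeholder for any compressed-tensors FP8_DYNAMIC checkpoint):

```python
# Hypothetical repro; the checkpoint path is a placeholder.
from vllm import LLM

# Constructing the engine is enough to reach
# process_weights_after_loading(), where the AttributeError is raised;
# no generate() call is needed.
llm = LLM(model="/models/mistral-123b-fp8-dynamic",
          max_seq_len_to_capture=16384)
```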
Model llm-compressor recipe:

DEFAULT_stage:
  DEFAULT_modifiers:
    QuantizationModifier:
      ignore: [lm_head]
      targets: [Linear]
      scheme: FP8_DYNAMIC
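For reference, a checkpoint with this recipe is typically produced via llm-compressor's one-shot flow. A minimal sketch, assuming the documented `QuantizationModifier`/`oneshot` API (model ID and output path are placeholders):

```python
# Sketch: produce an FP8_DYNAMIC compressed-tensors checkpoint.
# FP8_DYNAMIC stores static weight scales only; activation scales are
# computed per token at runtime, so no input_scale tensors are serialized.
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

recipe = QuantizationModifier(
    targets="Linear",       # quantize every Linear layer...
    scheme="FP8_DYNAMIC",
    ignore=["lm_head"],     # ...except the output head
)

# FP8_DYNAMIC needs no calibration data.
oneshot(
    model="mistralai/mistral-123B-instruct",
    recipe=recipe,
    output_dir="/models/mistral-123b-fp8-dynamic",
)
```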
Traceback:

ERROR 12-27 00:09:08 engine.py:366] 'QKVParallelLinear' object has no attribute 'input_scale'
ERROR 12-27 00:09:08 engine.py:366] Traceback (most recent call last):
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 12-27 00:09:08 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 12-27 00:09:08 engine.py:366]     return cls(ipc_path=ipc_path,
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 12-27 00:09:08 engine.py:366]     self.engine = LLMEngine(*args, **kwargs)
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 12-27 00:09:08 engine.py:366]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/executor/executor_base.py", line 36, in __init__
ERROR 12-27 00:09:08 engine.py:366]     self._init_executor()
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/executor/gpu_executor.py", line 35, in _init_executor
ERROR 12-27 00:09:08 engine.py:366]     self.driver_worker.load_model()
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/worker/worker.py", line 155, in load_model
ERROR 12-27 00:09:08 engine.py:366]     self.model_runner.load_model()
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/worker/multi_step_model_runner.py", line 649, in load_model
ERROR 12-27 00:09:08 engine.py:366]     self._base_model_runner.load_model()
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/worker/model_runner.py", line 1096, in load_model
ERROR 12-27 00:09:08 engine.py:366]     self.model = get_model(vllm_config=self.vllm_config)
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 12-27 00:09:08 engine.py:366]     return loader.load_model(vllm_config=vllm_config)
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/model_executor/model_loader/loader.py", line 386, in load_model
ERROR 12-27 00:09:08 engine.py:366]     quant_method.process_weights_after_loading(module)
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 475, in process_weights_after_loading
ERROR 12-27 00:09:08 engine.py:366]     layer.scheme.process_weights_after_loading(layer)
ERROR 12-27 00:09:08 engine.py:366]   File "/workspace/vllm/vllm/model_executor/layers/quantization/compressed_tensors/schemes/compressed_tensors_w8a8_fp8.py", line 64, in process_weights_after_loading
ERROR 12-27 00:09:08 engine.py:366]     input_scale=layer.input_scale)
ERROR 12-27 00:09:08 engine.py:366]   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1935, in __getattr__
ERROR 12-27 00:09:08 engine.py:366]     raise AttributeError(
ERROR 12-27 00:09:08 engine.py:366] AttributeError: 'QKVParallelLinear' object has no attribute 'input_scale'
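The root cause appears to be that an FP8_DYNAMIC checkpoint registers no `input_scale` parameter on the linear layers (activation scales are computed dynamically at runtime), while `compressed_tensors_w8a8_fp8.py` reads `layer.input_scale` unconditionally; `torch.nn.Module.__getattr__` then raises for the unknown name. A self-contained sketch of the mechanism and a defensive fallback (`FakeQKVParallelLinear` is a hypothetical stand-in, not vLLM's class):

```python
import torch
import torch.nn as nn

class FakeQKVParallelLinear(nn.Module):
    """Stand-in for vLLM's QKVParallelLinear under FP8_DYNAMIC: only
    weight and weight_scale are registered, no input_scale."""
    def __init__(self) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.empty(8, 8), requires_grad=False)
        self.weight_scale = nn.Parameter(torch.ones(8, 1), requires_grad=False)

layer = FakeQKVParallelLinear()

# nn.Module.__getattr__ raises AttributeError for names that are neither
# parameters, buffers, nor submodules -- exactly the crash above.
try:
    _ = layer.input_scale
except AttributeError as e:
    print(e)

# A guarded read (a hypothetical fix, not necessarily the upstream patch)
# degrades to None, signalling "compute activation scales at runtime":
input_scale = getattr(layer, "input_scale", None)
print(input_scale)  # None
```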
Compressed Tensors version:

Name: compressed-tensors
Version: 0.8.1
Summary: Library for utilization of compressed safetensors of neural network models
Home-page: https://github.com/neuralmagic/compressed-tensors
Author: Neuralmagic, Inc.
Author-email: support@neuralmagic.com
License: Apache 2.0
Location: /opt/conda/envs/py_3.9/lib/python3.9/site-packages
Requires: pydantic, torch, transformers
Required-by: llmcompressor, vllm

Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.