• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only): Tesla T4
I'm using the Docker image nvcr.io/nvidia/tritonserver:22.01-py3.
I do not see the model loaded. Could you tell me which folder it has to be placed in?
I0218 01:50:30.783690 1 server.cc:589]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0218 01:50:30.783794 1 tritonserver.cc:1865]
+----------------------------------+---------
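For reference, Triton only scans the directory given by --model-repository, so the models folder on the host has to be mounted into the container and passed on the command line. Something like the following (the host path is illustrative):

docker run --gpus=1 --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.01-py3 \
  tritonserver --model-repository=/models

Each model then goes in its own subfolder of /models, with numbered version directories inside.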
It's working now.
Could someone tell me which Python file from the client examples should be used for video inferencing?
root@ip-172-31-11-102:/workspace/install/python# ls
ensemble_image_client.py
grpc_client.py
grpc_explicit_byte_content_client.py
grpc_explicit_int8_content_client.py
grpc_explicit_int_content_client.py
grpc_image_client.py
image_client.py
memory_growth_test.py
reuse_infer_objects_client.py
simple_grpc_async_infer_client.py
simple_grpc_cudashm_client.py
simple_grpc_custom_repeat.py
simple_grpc_health_metadata.py
simple_grpc_infer_client.py
simple_grpc_model_control.py
simple_grpc_sequence_stream_infer_client.py
simple_grpc_sequence_sync_infer_client.py
simple_grpc_shm_client.py
simple_grpc_shm_string_client.py
simple_grpc_string_infer_client.py
simple_http_async_infer_client.py
simple_http_cudashm_client.py
simple_http_health_metadata.py
simple_http_infer_client.py
simple_http_model_control.py
simple_http_sequence_sync_infer_client.py
simple_http_shm_client.py
simple_http_shm_string_client.py
simple_http_string_infer_client.py
tritonclient-2.18.0-py3-none-any.whl
tritonclient-2.18.0-py3-none-manylinux1_x86_64.whl
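None of these files is a dedicated video client; they all send individual tensors. A common approach is to decode frames yourself (for example with OpenCV) and push each frame through the gRPC client API, much as image_client.py does for still images. A minimal sketch under assumptions of my own: a hypothetical model called video_model with one FP32 input named INPUT (NCHW, 224x224) and one output named OUTPUT; adjust names, shape, and preprocessing to your model's config.pbtxt.

import cv2
import numpy as np
import tritonclient.grpc as grpcclient

# Hypothetical names -- replace with the values from your model's config.pbtxt.
MODEL_NAME = "video_model"
INPUT_NAME = "INPUT"
OUTPUT_NAME = "OUTPUT"

client = grpcclient.InferenceServerClient(url="localhost:8001")

cap = cv2.VideoCapture("sample.mp4")  # any video source OpenCV can open
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of stream

    # Resize and normalize to the model's expected input, then NCHW + batch dim.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]

    infer_input = grpcclient.InferInput(INPUT_NAME, list(blob.shape), "FP32")
    infer_input.set_data_from_numpy(blob)

    result = client.infer(model_name=MODEL_NAME, inputs=[infer_input])
    print(result.as_numpy(OUTPUT_NAME).shape)  # per-frame model output

cap.release()

For higher throughput, the async variants in the same directory (e.g. simple_grpc_async_infer_client.py) show how to overlap decoding with in-flight requests.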
Could someone tell me how 'model.plan' has to be configured? I have a TLT model, and I have to create the model repository structure.
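For reference, a TensorRT engine in Triton lives in a numbered version folder under the repository root, next to a config.pbtxt. A sketch of the layout, assuming the repository is mounted at /models:

/models
└── Helmet_model
    ├── config.pbtxt
    └── 1
        └── model.plan

and a minimal config.pbtxt to go with it; the tensor names and dims below are placeholders of mine, not values from this thread, and must match what the TLT network was exported with:

name: "Helmet_model"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input_1"              # placeholder: use the exported input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 544, 960 ]        # placeholder dims
  }
]
output [
  {
    name: "output_bbox/BiasAdd"  # placeholder: use the exported output tensor name
    data_type: TYPE_FP32
    dims: [ 16, 34, 60 ]         # placeholder dims
  }
]

The TLT .etlt export is converted to a TensorRT engine first (e.g. with tao-converter), and the resulting engine file is renamed to model.plan.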
I get the below error:
I0219 17:15:07.642644 1 server.cc:589]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0219 17:15:07.642597 1 server.cc:546]
+-------------+--------------------------------------------------------------------------+--------+
| Backend     | Path                                                                     | Config |
+-------------+--------------------------------------------------------------------------+--------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}     |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {}     |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}     |
| openvino    | /opt/tritonserver/backends/openvino_2021_2/libtriton_openvino_2021_2.so | {}     |
+-------------+--------------------------------------------------------------------------+--------+

I0219 17:15:07.642644 1 server.cc:589]
+---------------+---------+---------------------------------------------------------+
| Model         | Version | Status                                                  |
+---------------+---------+---------------------------------------------------------+
| Helmet_model  | 1       | UNAVAILABLE: Internal: unable to create TensorRT engine |
| densenet_onnx | 1       | READY                                                   |
+---------------+---------+---------------------------------------------------------+

I0219 17:15:07.642756 1 tritonserver.cc:1865]
+----------------------------------+-----------------------------------------------------------------+
| Option                           | Value                                                           |
+----------------------------------+-----------------------------------------------------------------+
| server_id                        | triton                                                          |
| server_version                   | 2.18.0                                                          |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics |
| model_repository_path[0]         | /models                                                         |
| model_control_mode               | MODE_NONE                                                       |
| strict_model_config              | 1                                                               |
| rate_limit                       | OFF                                                             |
| pinned_memory_pool_byte_size     | 268435456                                                       |
| cuda_memory_pool_byte_size{0}    | 67108864                                                        |
| response_cache_byte_size         | 0                                                               |
| min_supported_compute_capability | 6.0                                                             |
| strict_readiness                 | 1                                                               |
| exit_timeout                     | 30                                                              |
+----------------------------------+-----------------------------------------------------------------+
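The "UNAVAILABLE: Internal: unable to create TensorRT engine" status for Helmet_model typically means the plan file was serialized with a different TensorRT version than the one inside the container: TensorRT engines are not portable across versions, and the thread header mentions TensorRT 7.1 while the 22.01 Triton image ships a newer TensorRT 8.x. One likely fix is to regenerate model.plan with tao-converter in an environment that matches the server's TensorRT. An illustrative invocation; the key, tensor names, and dims are placeholders for a detectnet_v2-style TLT model, not values from this thread:

tao-converter helmet_model.etlt \
  -k <encryption_key> \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -e model.plan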
mchi February 23, 2022, 1:06am
Hi @h9945394143, sorry for the delay! Will check and reply ASAP.
Morganh February 23, 2022, 1:08am
Moving this topic to the TAO forum.