No Bounding Boxes with trtexec-Generated FP32/INT8 Engine in DeepStream (YOLOv5)

Hi,

I’m integrating a YOLOv5 model with DeepStream and facing an issue when using an engine file generated from an ONNX model via trtexec.

Below is the relevant configuration used with the nvinfer plugin:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=best.onnx
model-engine-file=model_int8_384_640_calib.engine
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=./nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

The custom parser is based on the implementation from this repository:
GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

When I let DeepStream generate the engine file directly from the .cfg and .wts files, detections and bounding boxes appear correctly. However, that workflow doesn't let me generate the calib.table, so I switched to trtexec for INT8 calibration and engine creation.

Now, with the trtexec-generated engine (from ONNX), the DeepStream pipeline runs, but no detections or bounding boxes are shown.
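For reference, the trtexec invocation for this step looks roughly like the following. This is a sketch: the calibration cache file name (`calib.table`) and the input tensor name (`images`, the usual name for a standard YOLOv5 ONNX export) are assumptions — adjust them to match your actual files and model.

```shell
# Build an INT8 engine from the ONNX export using a pre-generated
# calibration cache. "images" is the typical YOLOv5 input tensor name;
# check your ONNX model if the export used a different one.
/usr/src/tensorrt/bin/trtexec \
  --onnx=best.onnx \
  --int8 \
  --calib=calib.table \
  --shapes=images:1x3x384x640 \
  --saveEngine=model_int8_384_640_calib.engine
```

If the model was exported with a fixed input shape, trtexec will warn that explicit shapes are ignored and `--shapes` can be dropped.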

I would appreciate any help in identifying what might be going wrong or what steps are missing in this workflow.

@sangeeta.charantimath it looks like you are using TensorRT, not “TensorRT for RTX”, which is a new SDK. The correct forum to ask TensorRT questions would be: TensorRT - NVIDIA Developer Forums

Thanks for the info! Yes, I’m using TensorRT (not TensorRT for RTX). I’ll post my questions on the NVIDIA TensorRT forum from now on.