TLT ResNet18 classifier doesn't get proper predictions in DS

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1080 Ti
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.80.02
• Issue Type( questions, new requirements, bugs) bugs/question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello, I trained a model using TLT — a simple ResNet18 classifier. It works perfectly with tlt-infer or with a standalone TensorRT app. But when I try to use it inside a DS pipeline as a secondary model, it doesn’t work.

I’ve used the following config and the preprocessing parameters described there:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=123.67;116.28;103.53
model-color-format=1
infer-dims=3;224;224
uff-input-order=0
uff-input-blob-name=input_1
batch-size=1
model-engine-file=resnet18_version2_classifier_bs_1_res_224_fp32.engine
labelfile-path=labels.txt
#force-implicit-batch-dim=1
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
process-mode=2
network-type=1
#classifier-async-mode=0
classifier-threshold=0.0
#input-object-min-width=64
#input-object-min-height=64
operate-on-gie-id=1
operate-on-class-ids=0;1;
gie-unique-id=4
output-tensor-meta=1
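
For context: per the nvinfer documentation, the preprocessing it applies is y = net-scale-factor * (x - offset) per channel, so the values above reproduce the classifier’s mean subtraction. A minimal NumPy sketch of that transform (the function name is mine):

import numpy as np

NET_SCALE_FACTOR = 1.0                                    # net-scale-factor
OFFSETS = np.array([123.67, 116.28, 103.53], np.float32)  # offsets, one per channel

def nvinfer_preprocess(chw):
    # nvinfer computes y = net-scale-factor * (x - offset) per channel
    # on the CHW tensor it feeds to the engine
    return NET_SCALE_FACTOR * (chw.astype(np.float32) - OFFSETS[:, None, None])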

I can’t get proper predictions inside the deepstream app. I get only rare predictions, always with class_id=0. I enabled output-tensor-meta to inspect the predictions, but the predictions/Softmax layer isn’t even listed in the probe.
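
The probe that reads the tensor meta looks roughly like this — a simplified sketch built on the pyds bindings, with error handling omitted and attached to the sgie src pad:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Walk batch -> frame -> object -> user meta and print every
    # output layer attached by nvinfer when output-tensor-meta=1.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        print("output layer:", layer.layerName)
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK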

When I try the default sgie1 model with config_infer_secondary_vehicletypes.txt, I get all the detections classified and see predictions/Softmax in the probe:

[property]
gpu-id=0
net-scale-factor=1
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.prototxt
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b1_gpu0_fp16.engine
#int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/cal_trt.bin
#mean-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/mean.ppm
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/labels.txt
force-implicit-batch-dim=1
batch-size=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
network-type=1
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.01
input-object-min-width=128
input-object-min-height=128
operate-on-gie-id=1
operate-on-class-ids=0;1;
gie-unique-id=4
output-tensor-meta=1
is-classifier=1
#scaling-filter=0
#scaling-compute-hw=0

I’ve also tried the same configuration for my model and still get no results:

[property]
gpu-id=0
net-scale-factor=1
model-color-format=0
infer-dims=3;224;224
uff-input-order=0
uff-input-blob-name=input_1
batch-size=1
#onnx-file=/home/rostislav/trt_converter/res18_simple.onnx
#model-engine-file=/home/rostislav/onnx_trt_converter/export/resnet18_bs-1_res-(224, 224).engine
#model-engine-file=/home/rostislav/trt_converter/resnet18_engine_classifier.buf
model-engine-file=/home/rostislav/tlt_data/resnet18_version2_classifier_bs_1_res_224_fp32.engine
labelfile-path=labels_grocery.txt
force-implicit-batch-dim=1
output-blob-names=predictions/Softmax
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
process-mode=2
network-type=1
#classifier-async-mode=0
classifier-threshold=0.0
#input-object-min-width=64
#input-object-min-height=64
operate-on-gie-id=1
operate-on-class-ids=0;1;
gie-unique-id=4
output-tensor-meta=1

Could somebody explain how to correctly set up an sgie classification model for a DeepStream pipeline, and which preprocessing parameters I should use for a TLT model inside DS?

Refer to
/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
and
/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_secondary_vehicletypenet.txt

Hello @Morganh,

Thanks, I’ve tried those configurations and found out that with

process-mode=1

I get strange detections and class_id=0 predictions from the classifier all the time (probably because it classifies the whole frame instead of the object crops).

When I set process-mode=2, I get nothing at all. Why?

I’m using the Python bindings, and my pipeline is pgie → tracker → sgie.
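
As a minimal sketch of the relevant linking (the element variables are assumed to be already created and added to the pipeline; the names are mine):

# pgie -> tracker -> sgie, with the probe from above on the sgie src pad
pgie.link(tracker)
tracker.link(sgie)
sgie.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)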

Please try to run with the above-mentioned config files and the default models.
/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/deepstream_app_source1_dashcamnet_vehiclemakenet_vehicletypenet.txt
and
/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_secondary_vehicletypenet.txt

Then compare the models/configs with yours.

Thank you, I’ve found the problem.

My model returned only one class_id because the labels file for a classifier must use “;” delimiters with all classes on a single line, unlike labels for detectors, where the delimiter is a “\n” newline, i.e. one class per line (see the example below). It’s not obvious, and there’s no information about it in the docs. Even the example in the TLT documentation suggests that classifier labels should be on separate lines.
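
For example, with hypothetical class names, a classifier labels.txt is a single line:

car;person;bicycle

while a detector labels.txt has one class per line:

car
person
bicycle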

Also, to enable output-tensor-meta=1 I had to place it above the model-engine-file line in the config. I don’t know why, but that works for me.
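
A fragment showing the ordering that worked (the remaining keys stay as in my config above):

[property]
gpu-id=0
output-tensor-meta=1
model-engine-file=resnet18_version2_classifier_bs_1_res_224_fp32.engine
# ...rest of the properties unchanged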
