[Question] DeepStream 7.1: saving sample frames using nvdsanalytics to S3 with the Kafka plugin

Please provide complete information as applicable to your setup.

Subject: Best practice: save full frame to S3 on ROI-exit events in DeepStream (how to get frame bytes safely and send Kafka message)

I’m building a DeepStream (7.1) multi-stream app. Goal: detect + track people, analyze their locations vs ROI, and when a person is not inside an ROI (trigger event) I need to:

  1. Save the full frame (the frame bytes as JPEG) to S3,

  2. Produce a Kafka message with metadata: { cam_id, frame_id, track_id, bbox, class, s3_url, ts }.

My pipeline looks like this:

streammux → pgie → nvtracker → tee
├─ Branch A (restream/UI): nvstreamdemux → nvosd → restream (unchanged)
└─ Branch B (analytics): queue → nvdsanalytics → pad-probe → enqueue task

I can read nvdsanalytics results in the pad-probe (ROI status, class, etc.). The question is how to correctly and efficiently get the exact frame bytes and save them to S3, then publish the Kafka message including the S3 URL — without blocking the GStreamer/DeepStream main loop.

Here is my current understanding:

  1. pad-probe after nvdsanalytics is fast: it only inspects metadata, creates a compact task dict and does queue.put_nowait(task).
  2. A separate worker (thread/process) pops the task, requests a one-shot GPU encode of the needed frame/crop, receives the encoded JPEG bytes, uploads them to S3, and sends the Kafka message (the producer is external, e.g. confluent_kafka.Producer).
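The two-step handoff described above can be sketched with the standard library alone; the queue size, task-dict fields, and function names below are my own illustration, not a DeepStream API:

```python
import queue
import threading

# Bounded queue: if the worker falls behind, the pad probe drops events
# instead of blocking the GStreamer streaming thread.
task_queue = queue.Queue(maxsize=256)

def enqueue_event(cam_id, frame_id, track_id, bbox, cls, ts):
    """Called from the pad probe: build a compact task dict and enqueue it."""
    task = {"cam_id": cam_id, "frame_id": frame_id, "track_id": track_id,
            "bbox": bbox, "class": cls, "ts": ts}
    try:
        task_queue.put_nowait(task)
        return True
    except queue.Full:
        return False  # drop the event rather than stall the pipeline

def worker_loop(upload_to_s3, publish_kafka):
    """Worker thread: all blocking I/O (S3 upload, Kafka produce) lives here."""
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut down
            task_queue.task_done()
            break
        task["s3_url"] = upload_to_s3(task)  # upload first...
        publish_kafka(task)                  # ...then publish the URL
        task_queue.task_done()
```

Start the worker with `threading.Thread(target=worker_loop, args=(upload_fn, publish_fn), daemon=True).start()`; the probe itself only ever calls `enqueue_event`, so it stays non-blocking.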

Are there maybe some examples? I checked deepstream-image-meta-test, which includes an image-encoding example, but I’m not sure how to use it in a Python app. Please point me in the right direction.


• DeepStream Version - 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version - 10.3.0.26
• NVIDIA GPU Driver Version (valid for GPU only) - 565.57.01

• Issue Type (questions, new requirements, bugs) - question about best practices
• How to reproduce the issue? Not applicable; I described my setup and approach above.


Yes, please refer to deepstream-image-meta-test for how to use the hardware-accelerated encode API. The workflow is: 1. Call nvds_obj_enc_create_context once. 2. Call nvds_obj_enc_process to encode, then call nvds_obj_enc_finish to wait for completion. 3. When encoding is no longer needed, call nvds_obj_enc_destroy_context. The Python bindings also support this API. Here is a usage sample.
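Following that workflow, a minimal sketch of the pyds calls (modeled on the deepstream-image-meta-test sample; the NvDsObjEncUsrArgs field names are as exposed by the Python bindings — verify against your DeepStream 7.1 install). The imports are done inside the functions only so the module loads on machines without DeepStream:

```python
def create_encode_context(gpu_id=0):
    """Step 1: create the encoder context once, at pipeline startup."""
    import pyds  # DeepStream Python bindings (device-only)
    return pyds.nvds_obj_enc_create_context(gpu_id)

def request_encode(ctx, gst_buffer, frame_meta, obj_meta, quality=80):
    """Step 2: queue a JPEG encode from inside the pad probe."""
    import pyds
    args = pyds.NvDsObjEncUsrArgs()
    args.saveImg = False       # keep the JPEG in memory, don't write a file
    args.attachUsrMeta = True  # attach the bytes to obj_meta as user meta
    args.quality = quality
    # hash(gst_buffer) passes the underlying GstBuffer pointer to the C API
    pyds.nvds_obj_enc_process(ctx, args, hash(gst_buffer), obj_meta, frame_meta)

def wait_for_encodes(ctx):
    """Step 2 (cont.): block until all queued encode requests complete."""
    import pyds
    pyds.nvds_obj_enc_finish(ctx)

def destroy_encode_context(ctx):
    """Step 3: release the context at shutdown."""
    import pyds
    pyds.nvds_obj_enc_destroy_context(ctx)
```

With attachUsrMeta=True the encoded bytes travel downstream attached to the object's user meta, so they can be read out wherever is convenient.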

Thanks for your response; a couple of clarifying questions:

  1. Is it correct that nvds_obj_enc_process() may be called from a pad probe, but nvds_obj_enc_finish() must run in a separate worker thread/process to avoid blocking the GStreamer main loop?
  2. If I run multiple encoder workers, should each worker call nvds_obj_enc_create_context(gpu_id) (one ctx per worker), or is it safe to share a single ctx across workers — and if sharing is allowed, how must process() / finish() be synchronized?
  3. Is using hash(gst_buffer) as the surface identifier in Python bindings the recommended approach, and is doing gst_buffer_ref() in the probe and gst_buffer_unref() in the worker sufficient to guarantee the buffer/surface remains valid until encoding finishes?
  4. I plan to write an S3 key into obj_meta and publish that meta via nvsbroker while a worker uploads the JPEG asynchronously. How do you recommend avoiding consumers seeing the S3 URL before the object is uploaded? (options: upload → then publish, publish placeholder + ready update, other best practice?)
  1. Yes. The time consumption of the API is minimal because of hardware acceleration. You can also call nvds_obj_enc_finish in a separate thread.
  2. No, nvds_obj_enc_create_context only needs to be called once. You may call nvds_obj_enc_process from multiple threads because there is a lock in the low-level library; finish() will wait for all process() tasks to finish.
  3. hash() just converts the pointer to an int. No, nvds_obj_enc_process does not call gst_buffer_ref itself, so the caller must keep the buffer valid until encoding finishes.
  4. Regarding “nvsbroker”, do you mean nvmsgbroker? Regarding avoiding consumers seeing the S3 URL before the object is uploaded: you may need to publish only after the JPEG has been uploaded successfully.
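Putting answers 3 and 4 together: the worker can read the attached JPEG from the object's user meta and must upload before it publishes. A sketch, assuming the NVDS_CROP_IMAGE_META type name from the C headers is mirrored in pyds (verify on your install); the bucket and key naming are illustrative only:

```python
def extract_jpeg(obj_meta):
    """Read the JPEG bytes that nvds_obj_enc_process (attachUsrMeta=True)
    attached to the object's user meta; returns None if not present."""
    import pyds  # DeepStream Python bindings (device-only)
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_CROP_IMAGE_META:
            enc = pyds.NvDsObjEncOutParams.cast(user_meta.user_meta_data)
            return enc.outBuffer()  # array of JPEG bytes
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return None

def upload_then_publish(jpeg_bytes, task, put_object, produce):
    """Order matters: upload first, publish the Kafka message only after
    S3 confirms, so consumers never see a URL that does not resolve yet."""
    key = f"{task['cam_id']}/{task['frame_id']}.jpg"  # naming is illustrative
    put_object(key, bytes(jpeg_bytes))
    task["s3_url"] = f"s3://my-bucket/{key}"          # bucket name assumed
    produce(task)
    return task
```

`put_object` and `produce` stand in for e.g. boto3's `put_object` and confluent_kafka's `Producer.produce`; if the upload raises, the message is simply never published, which matches the "publish after upload" recommendation above.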

OK, thanks for the answers; that covers my questions for now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.