Isaac Sim Version
5.0
Operating System
Ubuntu 22.04
Topic Description
Detailed Description
How would I use Isaac Sim to generate 2D keypoint data for pose estimation, as described here: Pose Estimation - Ultralytics YOLO Docs? My approach so far has been to crawl the stage for the poses of the prims the keypoints would be attached to, add tiny placeholder spheres at those locations, and record the 2D bounding boxes of the spheres. The problem with that approach is that if I make the placeholder spheres small enough that they aren't rendered, I don't get bounding box annotations for them. I've seen that there is a "skeleton_data" flag in replicator's BasicWriter class (PYTHON API — Omni Replicator), but I can't find documentation or examples anywhere of how to use it.
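To make the workaround concrete, the post-processing step I'm using collapses each placeholder sphere's 2D bounding box to its center pixel to recover the keypoint location. This is a minimal sketch in plain NumPy; the bounding-box field layout and the prim paths are illustrative assumptions, and `bbox_to_keypoint` is my own helper name, not an Isaac Sim API:

```python
import numpy as np

def bbox_to_keypoint(x_min, y_min, x_max, y_max):
    """Collapse a tiny placeholder-sphere bounding box to its center pixel,
    which approximates the 2D keypoint location of the attached prim."""
    return np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])

# Boxes as they might come out of a 2D bounding box annotation
# (field names are an assumption; adapt to the actual annotator output).
boxes = [
    {"prim": "/World/kp_elbow", "x_min": 310, "y_min": 204, "x_max": 314, "y_max": 208},
    {"prim": "/World/kp_wrist", "x_min": 402, "y_min": 250, "x_max": 406, "y_max": 256},
]
keypoints = {
    b["prim"]: bbox_to_keypoint(b["x_min"], b["y_min"], b["x_max"], b["y_max"])
    for b in boxes
}
print(keypoints["/World/kp_elbow"])  # -> [312. 206.]
```

This only works when every sphere is large enough to be rendered and annotated, which is exactly the limitation described above.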
Additional Context
I am trying to generate data for a fisheye camera, which makes some solutions not work.
There is an earlier post on this forum asking basically the same thing but it has no responses and doesn’t provide as much context about what methods have been tried: 2d pose estimation using Custom Writer
This post seems to suggest that there is a method for projecting world points to image coordinates on the Camera class in omni.isaac.sensor (important: this is not the type returned by replicator's camera creation function: PYTHON API — Omni Replicator):
Obtained prime position data and replicator bbox data not match the captured image
However, according to this documentation, that method uses the pinhole camera model, which won't work for a fisheye lens. I verified this myself, so it looks like I'll need a different method.
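For context on why a pinhole projection can't be reused here: it is just intrinsics applied after a perspective division, with no distortion term that could bend rays the way a fisheye lens does. A minimal NumPy sketch of the pinhole model (identity extrinsics and the intrinsics values are illustrative assumptions, not values from Isaac Sim):

```python
import numpy as np

def pinhole_project(points_world, K, R=np.eye(3), t=np.zeros(3)):
    """Project 3D world points to pixel coordinates with a pure pinhole model:
    transform into the camera frame, apply intrinsics, divide by depth."""
    pts_cam = points_world @ R.T + t   # world -> camera frame
    uvw = pts_cam @ K.T                # apply intrinsics K
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

# Illustrative intrinsics: fx = fy = 500, principal point at (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

pts = np.array([[0.0, 0.0, 2.0],   # on the optical axis
                [0.5, 0.0, 2.0]])  # 0.5 m to the right, 2 m ahead
print(pinhole_project(pts, K))
# on-axis point lands at the principal point (320, 240);
# the off-axis point at (320 + 500 * 0.25, 240) = (445, 240)
```

No choice of K here can reproduce a fisheye image, which matches the behavior I verified.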
It seems skeleton data annotation isn't supported for fisheye lenses at the moment either:
2025-07-26T01:42:01Z [13,519ms] [Error] [omni.graph.core.plugin] /Render/PostProcess/SDGPipeline/Replicator_skeleton_data: [/Render/PostProcess/SDGPipeline] Assertion raised in compute - Projection type fisheyeOpenCV is not currently supported.
  File "/home/miller/IsaacSim/_build/linux-x86_64/release/extscache/omni.replicator.core-1.12.10+107.3.0.lx64.r.cp311/omni/replicator/core/ogn/python/_impl/nodes/OgnGetSkeletonData.py", line 741, in compute
    compute_2d_translations(joint_rest_global_translations, view_params, skel_data)
  File "/home/miller/IsaacLab/_isaac_sim/kit/kernel/py/carb/profiler/__init__.py", line 99, in wrapper
    r = f(*args, **kwds)
  File "/home/miller/IsaacSim/_build/linux-x86_64/release/extscache/omni.replicator.core-1.12.10+107.3.0.lx64.r.cp311/omni/replicator/core/ogn/python/_impl/nodes/OgnGetSkeletonData.py", line 232, in compute_2d_translations
    joint_pos2d = sd.helpers.world_to_image(points, None, view_params)[:, :2]
  File "/home/miller/IsaacSim/_build/linux-x86_64/release/extscache/omni.syntheticdata-0.6.13+b0a86421.lx64.r.cp311/omni/syntheticdata/scripts/helpers.py", line 568, in world_to_image
    raise ValueError(f"Projection type {view_params['projection_type']} is not currently supported.")
I also tried simply outputting images plus the camera and object poses, then using cv2.projectPoints to calculate the keypoint locations, but there seems to be a mismatch between how Isaac and OpenCV compute image point locations from the same camera extrinsics and intrinsics, even when using the "opencv" lens models in Isaac. I wrote up a separate issue with a minimal example reproducing the problem with this approach: Isaac Sim "opencv_fisheye" lens distortion model is not consistent with OpenCV's projectPoints
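For anyone who wants to reproduce the comparison without pulling in cv2: this is my understanding of the model that cv2.fisheye.projectPoints implements (OpenCV's equidistant fisheye model with distortion coefficients k1..k4, skew and extrinsics omitted), written out in NumPy. This is what I would expect an "opencv_fisheye" lens in Isaac to match; the intrinsics values below are illustrative, not from my scene:

```python
import numpy as np

def opencv_fisheye_project(pts_cam, K, D):
    """Project camera-frame 3D points with OpenCV's fisheye model:
    theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8),
    where theta is the angle of the incoming ray from the optical axis."""
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    a, b = x / z, y / z                      # normalized pinhole coordinates
    r = np.sqrt(a**2 + b**2)
    theta = np.arctan(r)
    k1, k2, k3, k4 = D
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    # Rescale normalized coords by theta_d / r; on the axis (r -> 0) the
    # scale tends to 1, so guard against dividing by zero there.
    scale = np.where(r > 1e-8, theta_d / np.maximum(r, 1e-8), 1.0)
    xd, yd = a * scale, b * scale
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.stack([u, v], axis=1)

# Illustrative intrinsics and zero distortion (pure equidistant projection).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
D = np.zeros(4)

pts = np.array([[0.0, 0.0, 1.0],   # on the optical axis
                [1.0, 0.0, 1.0]])  # 45 degrees off-axis
print(opencv_fisheye_project(pts, K, D))
# on-axis point -> (320, 240); the 45-degree point maps to
# u = 320 + 500 * atan(1) ~ 712.7, unlike a pinhole's 320 + 500 = 820
```

Comparing the output of this function against the pixel locations Isaac renders the same points at is a way to quantify the mismatch independently of cv2's own implementation.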